Compare commits

...

1298 Commits

Author SHA1 Message Date
1a59e4dacd replace download source 2023-08-10 01:28:42 +08:00
AUTOMATIC1111
68f336bd99 Merge branch 'release_candidate' 2023-07-27 09:02:22 +03:00
AUTOMATIC1111
50973ec77c update the changelog 2023-07-27 09:02:02 +03:00
AUTOMATIC1111
f82e08cf45 update lora extension to work with python 3.8 2023-07-27 09:00:59 +03:00
AUTOMATIC1111
3039925b27 update readme 2023-07-26 15:19:02 +03:00
AUTOMATIC1111
8220cf37da Merge pull request #12020 from Littleor/dev
Fix the error in rendering the name and description in the extra network UI.
2023-07-26 15:18:04 +03:00
AUTOMATIC1111
055461ae41 repair SDXL 2023-07-26 15:08:12 +03:00
AUTOMATIC1111
5c8f91b229 fix autograd which i broke for no good reason when implementing SDXL 2023-07-26 13:04:10 +03:00
AUTOMATIC1111
6b877c35da Merge pull request #12032 from AUTOMATIC1111/fix-api-get-options-sd_model_checkpoint
api /sdapi/v1/options use "Any" type when default type is None
2023-07-26 11:52:58 +03:00
AUTOMATIC1111
eb6d330bb7 delete scale checker script due to user demand 2023-07-26 09:20:02 +03:00
AUTOMATIC1111
5360ae2cc5 Merge pull request #12023 from AUTOMATIC1111/create_infotext_fix
Create infotext fix
2023-07-26 08:10:21 +03:00
AUTOMATIC1111
e16eb3d0cb Merge pull request #12024 from AUTOMATIC1111/fix-check-for-updates-status-always-unknown-
fix check for updates status always "unknown"
2023-07-26 08:10:12 +03:00
AUTOMATIC1111
99ef3b6c52 update readme 2023-07-25 16:31:01 +03:00
AUTOMATIC1111
65b6f8d3d5 fix for #11963 2023-07-25 16:20:55 +03:00
AUTOMATIC1111
b57a816038 Merge pull request #11963 from catboxanon/fix/lora-te
Fix parsing text encoder blocks in some LoRAs
2023-07-25 16:20:52 +03:00
AUTOMATIC1111
11f996a096 Merge pull request #11979 from AUTOMATIC1111/catch-exception-for-non-git-extensions
catch exception for non git extensions
2023-07-25 16:20:49 +03:00
AUTOMATIC1111
ce0aab3643 Merge pull request #11984 from AUTOMATIC1111/api-only-subpath-(root_path)
api only subpath (rootpath)
2023-07-25 16:20:46 +03:00
AUTOMATIC1111
c251e8db8d Merge pull request #11957 from ljleb/pp-batch-list
Add postprocess_batch_list script callback
2023-07-25 16:20:33 +03:00
AUTOMATIC1111
284822323a restyle Startup profile for dark theme users 2023-07-25 16:20:16 +03:00
AUTOMATIC1111
1f59be5188 Merge pull request #11926 from wfjsw/fix-env-get-1
fix 11291#issuecomment-1646547908
2023-07-25 16:20:07 +03:00
AUTOMATIC1111
cad87bf4e3 Merge pull request #11927 from ljleb/fix-AND
Fix composable diffusion weight parsing
2023-07-25 16:20:01 +03:00
AUTOMATIC1111
704628b903 Merge pull request #11923 from AnyISalIn/dev
[bug] If txt2img/img2img raises an exception, finally call state.end()
2023-07-25 16:19:36 +03:00
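The pattern this PR applies is plain try/finally cleanup. A minimal self-contained sketch under that assumption (State and txt2img here are hypothetical stand-ins, not the webui's actual classes):

```python
class State:
    """Hypothetical stand-in for the webui's job-state tracker."""
    def begin(self, job):
        print(f"begin {job}")
    def end(self):
        print("end")

state = State()

def txt2img(prompt):
    state.begin(job="txt2img")
    try:
        if not prompt:
            raise ValueError("empty prompt")
        return f"image for {prompt!r}"
    finally:
        # runs on success and on exception alike, so the job state
        # is cleared even when generation fails
        state.end()
```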
AUTOMATIC1111
636ff513b0 Merge pull request #11920 from wfjsw/typo-fix-1
typo fix
2023-07-25 16:19:22 +03:00
AUTOMATIC1111
51206edb62 Merge pull request #11921 from wfjsw/prepend-pythonpath
prepend the pythonpath instead of overriding it
2023-07-25 16:19:08 +03:00
AUTOMATIC1111
c5934fb6e3 Merge pull request #11878 from Bourne-M/patch-1
[bug] reload altclip model error
2023-07-25 16:18:55 +03:00
AUTOMATIC1111
a3ddf464a2 Merge branch 'release_candidate' 2023-07-25 08:18:02 +03:00
AUTOMATIC1111
2c11e9009e repair --medvram for SD2.x too after SDXL update 2023-07-24 11:57:59 +03:00
AUTOMATIC1111
1f26815dd3
Merge pull request #11898 from janekm/janekm-patch-1
Update sd_models_xl.py
2023-07-20 19:16:40 +03:00
Janek Mann
8218f6cd37
Update sd_models_xl.py
Fix width/height not getting fed into the conditioning
2023-07-20 16:22:52 +01:00
AUTOMATIC1111
23c947ab03 automatically switch to 32-bit float VAE if the generated picture has NaNs. 2023-07-19 20:23:30 +03:00
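A sketch of the fallback logic this commit describes, assuming a diffusers-style VAE object with `.decode()` and `.to()` (the webui's actual implementation differs in detail):

```python
import torch

def decode_with_nan_fallback(vae, latents):
    # decode in the current (possibly fp16) precision first
    sample = vae.decode(latents)
    if torch.isnan(sample).any():
        # NaNs in the output: retry the decode in full float32
        sample = vae.to(torch.float32).decode(latents.to(torch.float32))
    return sample
```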
AUTOMATIC1111
0e47c36a28 Merge branch 'dev' into release_candidate 2023-07-19 15:50:49 +03:00
AUTOMATIC1111
4334d25978 bugfix: model name was added together with directory name to infotext and to [model_name] filename pattern 2023-07-19 15:49:54 +03:00
AUTOMATIC1111
05ccb4d0e3 bugfix: model name was added together with directory name to infotext and to [model_name] filename pattern 2023-07-19 15:49:31 +03:00
AUTOMATIC1111
d5c850aab5
Merge pull request #11866 from kopyl/allow-no-venv-install
Make it possible to install web ui without venv with the venv_dir=- env variable for Linux
2023-07-19 08:00:05 +03:00
AUTOMATIC1111
0a334b447f
Merge branch 'dev' into allow-no-venv-install 2023-07-19 07:59:39 +03:00
AUTOMATIC1111
c2b9754857
Merge pull request #11869 from AUTOMATIC1111/missing-p-save_image-before-highres-fix
Fix missing p save_image before-highres-fix
2023-07-19 07:58:34 +03:00
w-e-w
c8b55f29e2 missing p save_image before-highres-fix 2023-07-19 08:27:19 +09:00
kopyl
6094310704 improve var naming 2023-07-19 01:48:21 +03:00
kopyl
0c4ca5f43e Replace argument with env variable 2023-07-19 01:47:39 +03:00
AUTOMATIC1111
b010eea520 fix incorrect multiplier for Loras 2023-07-19 00:41:00 +03:00
kopyl
2b42f73e3d Make it possible to install web ui without venv with --novenv flag
When the `--novenv` flag is passed to webui.sh, it skips the venv.
Might be useful for installing in Docker, since messing with venv in Docker can be a bit complicated.

Example usage:
`webui.sh --novenv`

Hope this gets approved and pushed into future versions of Web UI
2023-07-18 22:43:18 +03:00
AUTOMATIC1111
136c8859a4 add backwards compatibility --lyco-dir-backcompat option, use that for LyCORIS directory instead of hardcoded value
prevent running preload.py for disabled extensions
2023-07-18 20:11:30 +03:00
AUTOMATIC1111
eb7c9b58fc Merge branch 'dev' into release_candidate 2023-07-18 18:20:22 +03:00
AUTOMATIC1111
7f7db1700b linter fix 2023-07-18 18:16:23 +03:00
AUTOMATIC1111
b270ded268 fix the issue with /sdapi/v1/options failing (this time for sure!)
fix automated tests downloading CLIP model
2023-07-18 18:10:04 +03:00
AUTOMATIC1111
be16d274f8 changelog for 1.5.0 2023-07-18 17:44:56 +03:00
AUTOMATIC1111
66c5f1bb15 return sd_model_checkpoint to None 2023-07-18 17:41:37 +03:00
AUTOMATIC1111
4b5a63aa11 add a bit more metadata info for the lora user metadata page 2023-07-18 17:32:46 +03:00
AUTOMATIC1111
ed82f1c5f1 lint 2023-07-18 15:55:23 +03:00
AUTOMATIC1111
420cc8f68e also make None a valid option for options API for #11854 2023-07-18 11:48:40 +03:00
AUTOMATIC1111
6be5ccb530
Merge pull request #11854 from leon0707/fix-11805
Fix #11805
2023-07-18 11:48:01 +03:00
Leon Feng
a3730bd9be
Merge branch 'dev' into fix-11805 2023-07-18 04:24:14 -04:00
Leon Feng
d6668347c8 remove duplicate 2023-07-18 04:19:58 -04:00
AUTOMATIC1111
871b8687a8
Merge pull request #11846 from brkirch/sd-xl-upcast-sampling-fix
Add support for using `--upcast-sampling` with SD XL
2023-07-18 08:08:19 +03:00
AUTOMATIC1111
20c41364cc
Merge pull request #11843 from KohakuBlueleaf/fix-lyco-support
Fix wrong key name in lokr module
2023-07-18 08:05:28 +03:00
brkirch
f0e2098f1a Add support for --upcast-sampling with SD XL 2023-07-18 00:39:50 -04:00
Kohaku-Blueleaf
3d31caf4a5
use "is not None" for Tensor 2023-07-18 10:45:42 +08:00
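The fix relies on a PyTorch quirk: evaluating a multi-element tensor in a boolean context raises a RuntimeError, so presence checks must use identity. A minimal illustration (not the actual lokr module code):

```python
import torch

t = torch.zeros(2)

# `if t:` would raise:
# RuntimeError: Boolean value of Tensor with more than one element is ambiguous

# an identity check works for any tensor, and for None
if t is not None:
    print("tensor is present")
```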
Kohaku-Blueleaf
17e14ed2d9
Fix wrong key name in lokr module 2023-07-18 10:23:41 +08:00
AUTOMATIC1111
a99d5708e6 skip installing packages with pip if they are already installed
record time it took to launch
2023-07-17 20:10:24 +03:00
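One common way to implement an "already installed" check is importlib's spec lookup; a sketch under that assumption (the launcher's real helper may differ):

```python
import importlib.util

def is_installed(package: str) -> bool:
    # a module that can be located needs no pip run
    return importlib.util.find_spec(package) is not None

if not is_installed("gfpgan"):
    print("would run: pip install gfpgan")
```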
AUTOMATIC1111
699108bfbb hide cards for networks of incompatible stable diffusion version in Lora extra networks interface 2023-07-17 18:56:22 +03:00
AUTOMATIC1111
f97e35929b
Merge pull request #11824 from AUTOMATIC1111/XYZ-always_discard_next_to_last_sigma
XYZ always_discard_next_to_last_sigma
2023-07-17 15:56:34 +03:00
AUTOMATIC1111
2164578738
Merge pull request #11821 from AUTOMATIC1111/lora_lyco
lora extension rework to include other types of networks
2023-07-17 15:51:59 +03:00
AUTOMATIC1111
05d23c7837 move generate button below the picture for mobile clients 2023-07-17 11:44:29 +03:00
AUTOMATIC1111
35510f7529 add alias to lyco network
read networks from LyCORIS dir if it exists
add credits
2023-07-17 10:06:02 +03:00
AUTOMATIC1111
9251ae3bc7 delay writing cache to prevent writing the same thing over and over 2023-07-17 09:29:36 +03:00
AUTOMATIC1111
2e07a8ae6b some backwards compatibility
linter
2023-07-17 09:05:18 +03:00
AUTOMATIC1111
238adeaffb support specifying te and unet weights separately
update lora code
support full module
2023-07-17 09:00:47 +03:00
w-e-w
8941297ceb lowercase 2023-07-17 12:45:38 +09:00
w-e-w
c03856bfdf reversible boolean_choice order 2023-07-17 12:45:10 +09:00
w-e-w
7870937c77 XYZ always_discard_next_to_last_sigma
Co-authored-by: Franck Mahon <franck.mahon@gmail.com>
2023-07-17 12:25:29 +09:00
AUTOMATIC1111
46466f09d0 Lokr support 2023-07-17 00:29:07 +03:00
AUTOMATIC1111
58c3df32f3 IA3 support 2023-07-17 00:12:18 +03:00
AUTOMATIC1111
ef5dac7786 fix 2023-07-17 00:01:17 +03:00
AUTOMATIC1111
c2297b89d3 linter 2023-07-16 23:14:57 +03:00
AUTOMATIC1111
b75b004fe6 lora extension rework to include other types of networks 2023-07-16 23:13:55 +03:00
AUTOMATIC1111
7d26c479ee changelog for future 1.5.0 2023-07-16 14:39:47 +03:00
AUTOMATIC1111
67ea4eabc3 fix cache loading wrong entries from old cache files 2023-07-16 13:46:33 +03:00
AUTOMATIC1111
ace0c78373
Merge pull request #11669 from gitama2023/patch-1
Added a prompt for users whose browser scaling is not 100%
2023-07-16 13:12:18 +03:00
AUTOMATIC1111
570f42afd1 possible fix for FP16 VAE failing in img2img SDXL 2023-07-16 12:28:50 +03:00
AUTOMATIC1111
0198eaec45
Merge pull request #11757 from AUTOMATIC1111/sdxl
SD XL support
2023-07-16 12:04:53 +03:00
AUTOMATIC1111
9d3dd64fe9 minor restyling for extra networks 2023-07-16 10:44:04 +03:00
AUTOMATIC1111
690d56f3c1 nuke thumbs extra networks view mode (use settings tab to change width/height/scale to get thumbs) 2023-07-16 10:25:34 +03:00
AUTOMATIC1111
7b052eb70e add resolution calculation from buckets for lora user metadata page 2023-07-16 10:07:02 +03:00
AUTOMATIC1111
ccd97886da fix bogus metadata for extra networks appearing out of cache
fix description editing for checkpoint not immediately appearing on cards
2023-07-16 09:49:34 +03:00
AUTOMATIC1111
f71630edb3
Merge pull request #11794 from MarcusAdams/none-filename-token
Added [none] filename token.
2023-07-16 09:27:28 +03:00
AUTOMATIC1111
89c3e17c65
Merge pull request #11797 from wfjsw/ext-index-env
allow replacing extensions index with environment variable
2023-07-16 09:27:07 +03:00
AUTOMATIC1111
d2e64e26e5
Merge pull request #11802 from AUTOMATIC1111/warns-merge-into-master
Warns merge into master
2023-07-16 09:26:47 +03:00
AUTOMATIC1111
57e4422bdb
Merge pull request #11806 from huchenlei/file_500
404 when thumb file not found
2023-07-16 09:26:07 +03:00
AUTOMATIC1111
47d9dd0240 speedup extra networks listing 2023-07-16 09:25:32 +03:00
AUTOMATIC1111
a1d6ada69a allow refreshing single card after editing user metadata instead of all cards 2023-07-16 08:38:23 +03:00
huchenlei
8c11b126e5 404 when thumb file not found 2023-07-15 23:51:18 -04:00
Leon Feng
d380f939b5
Update shared.py 2023-07-15 23:31:59 -04:00
AUTOMATIC1111
efceed8c7f fix styles for dark theme users 2023-07-16 01:09:19 +03:00
AUTOMATIC1111
11f339733d add lora user metadata editor dialog inspired by MrKuenning's mockup from #7458 2023-07-16 00:57:45 +03:00
AUTOMATIC1111
5decbf184b eslint 2023-07-15 21:05:33 +03:00
AUTOMATIC1111
e5d3ae2bf4 user metadata system for custom networks 2023-07-15 20:39:10 +03:00
w-e-w
2970d712ee Warns merge into master 2023-07-16 02:29:20 +09:00
Jabasukuriputo Wang
2d9d53be21
allow replacing extensions index with environment variable 2023-07-15 17:09:51 +08:00
AUTOMATIC1111
c58cf73c80 remove "## " from changelog.md version 2023-07-15 09:33:21 +03:00
AUTOMATIC1111
0aa8d538e1 suppress printing TI embedding into console by default 2023-07-15 09:24:22 +03:00
AUTOMATIC1111
510e5fc8c6 cache git extension repo information 2023-07-15 09:20:43 +03:00
AUTOMATIC1111
2b1bae0d75 add textual inversion hashes to infotext 2023-07-15 08:41:22 +03:00
AUTOMATIC1111
127635409a add padding and identification to generation log section (Failed to find Loras, Used embeddings, etc...) 2023-07-15 08:07:25 +03:00
AUTOMATIC1111
b8bd8ce4cf disable rich exception output in console for API by default, use WEBUI_RICH_EXCEPTIONS env var to enable 2023-07-15 07:44:37 +03:00
AUTOMATIC1111
14cf434bc3 fix an issue in live previews that happens when you use SDXL with fp16 VAE 2023-07-15 07:33:16 +03:00
Marcus Adams
5d94088eac Added [none] filename token. 2023-07-14 21:52:00 -04:00
AUTOMATIC1111
95ee0cb188 restyle time taken/VRAM display 2023-07-14 22:51:58 +03:00
AUTOMATIC1111
5dee0fa1f8 add a message about unsupported samplers 2023-07-14 21:41:21 +03:00
AUTOMATIC1111
ac2d47ff4c add cheap VAE approximation coeffs for SDXL 2023-07-14 20:27:41 +03:00
AUTOMATIC1111
471a5a66b7 add more relevant fields to caching conds 2023-07-14 17:54:09 +03:00
AUTOMATIC1111
92a3236161 Merge branch 'dev' into sdxl 2023-07-14 10:12:48 +03:00
AUTOMATIC1111
9893d09b43
Merge pull request #11779 from AUTOMATIC1111/do-not-run-twice
Do not run git workflows twice for PRs from this repo
2023-07-14 10:09:20 +03:00
AUTOMATIC1111
62e3263467 edit names more 2023-07-14 10:07:08 +03:00
AUTOMATIC1111
9a3f35b028 repair medvram and lowvram 2023-07-14 09:56:01 +03:00
AUTOMATIC1111
714c920c20 do not run workflow items twice for PRs from this repo
update names
2023-07-14 09:47:44 +03:00
AUTOMATIC1111
abb948dab0 raise maximum Negative Guidance minimum sigma due to request in PR discussion 2023-07-14 09:28:01 +03:00
AUTOMATIC1111
b7dbeda0d9 linter 2023-07-14 09:19:08 +03:00
AUTOMATIC1111
6d8dcdefa0 initial SDXL refiner support 2023-07-14 09:16:01 +03:00
AUTOMATIC1111
073e30ee15
Merge pull request #11775 from AUTOMATIC1111/handles-model-hash-cache.json-error
handles model hash cache.json error
2023-07-14 00:18:17 +03:00
w-e-w
a3db187e4f handles model hash cache.json error 2023-07-14 05:48:14 +09:00
AUTOMATIC1111
dc39061856 thank you linter 2023-07-13 21:19:41 +03:00
AUTOMATIC1111
6c5f83b19b add support for SDXL loras with te1/te2 modules 2023-07-13 21:17:50 +03:00
AUTOMATIC1111
ff73841c60 mute SDXL imports in the place where SDXL is imported for the first time instead of launch.py 2023-07-13 17:42:16 +03:00
AUTOMATIC1111
e16ebc917d repair --no-half for SDXL 2023-07-13 17:32:35 +03:00
AUTOMATIC1111
b8159d0919 add XL support for live previews: approx and TAESD 2023-07-13 17:24:54 +03:00
AUTOMATIC1111
6f23da603d fix broken img2img 2023-07-13 16:18:39 +03:00
AUTOMATIC1111
066d5edf17
Merge pull request #11730 from tangjicheng46/master
fix: timeout_keep_alive_handler error
2023-07-13 15:21:50 +03:00
AUTOMATIC1111
b7c5b30f14
Merge branch 'dev' into master 2023-07-13 15:21:39 +03:00
AUTOMATIC1111
262ec8ecda
Merge pull request #11707 from wfjsw/revert-11244
Revert #11244
2023-07-13 14:51:04 +03:00
AUTOMATIC1111
ed0512c76f
Merge pull request #11747 from AUTOMATIC1111/img2img-save
Save img2img batch with images.save_image()
2023-07-13 14:50:08 +03:00
AUTOMATIC1111
cc0a3cc492
Merge pull request #11750 from AUTOMATIC1111/quick-settings-textbox
Use submit and blur for quick settings textbox
2023-07-13 14:49:48 +03:00
AUTOMATIC1111
e93f582a78
Merge pull request #11748 from huaizong/fix/x/resize-less-than-two-pixel-error
fix: check fill size is non-zero when resizing (fixes #11425)
2023-07-13 14:48:19 +03:00
AUTOMATIC1111
76ebb175ca lora support 2023-07-13 12:59:31 +03:00
AUTOMATIC1111
594c8e7b26 fix CLIP doing the unneeded normalization
revert SD2.1 back to use the original repo
add SDXL's force_zero_embeddings to negative prompt
2023-07-13 11:35:52 +03:00
AUTOMATIC1111
21aec6f567 lint 2023-07-13 09:38:54 +03:00
AUTOMATIC1111
ac4ccfa136 get attention optimizations to work 2023-07-13 09:30:33 +03:00
AUTOMATIC1111
b717eb7e56 mute unneeded SDXL imports for tests too 2023-07-13 08:29:37 +03:00
AUTOMATIC1111
a04c955121 fix importlib.machinery issue on github's autotests #yolo 2023-07-13 00:12:25 +03:00
AUTOMATIC1111
5cf623c58e linter 2023-07-13 00:08:19 +03:00
AUTOMATIC1111
60397a7800 Merge branch 'dev' into sdxl 2023-07-12 23:53:26 +03:00
AUTOMATIC1111
da464a3fb3 SDXL support 2023-07-12 23:52:43 +03:00
w-e-w
ea49bb0612 use submit blur for quick settings textbox 2023-07-12 23:30:22 +09:00
AUTOMATIC1111
e5ca987778
Merge pull request #11749 from akx/mps-gc-fix-2
Don't do MPS GC when there's a latent
2023-07-12 16:57:07 +03:00
Aarni Koskela
3d524fd3f1 Don't do MPS GC when there's a latent that could still be sampled 2023-07-12 15:17:30 +03:00
Aarni Koskela
8f6b24ce59 Add correct logger name 2023-07-12 15:16:42 +03:00
missionfloyd
e0218c4f22
Merge branch 'dev' into img2img-save 2023-07-12 02:57:57 -06:00
王怀宗
6c0d5d1198 fix: check fill size is non-zero when resizing (fixes #11425) 2023-07-12 16:51:50 +08:00
missionfloyd
3fee3c34f1
Save img2img batch with images.save_image() 2023-07-12 02:45:03 -06:00
AUTOMATIC1111
af081211ee getting SD2.1 to run on SDXL repo 2023-07-11 21:16:43 +03:00
AUTOMATIC1111
15adff3d6d
Merge pull request #11733 from akx/brace-for-impact
Allow using alt in the prompt fields again
2023-07-11 15:25:59 +03:00
Aarni Koskela
3636c2c6ed Allow using alt in the prompt fields again 2023-07-11 15:05:20 +03:00
AUTOMATIC1111
799760ab95
Merge pull request #11722 from akx/mps-gc-fix
Fix MPS cache cleanup
2023-07-11 13:49:02 +03:00
Aarni Koskela
b85fc7187d Fix MPS cache cleanup
Importing torch does not import torch.mps so the call failed.
2023-07-11 12:51:05 +03:00
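The bug here is purely an import one: `torch.mps` is a submodule that `import torch` alone does not load, so the cleanup call failed until the submodule was imported explicitly. A sketch of the corrected call, assuming a PyTorch build that provides torch.mps:

```python
import torch
import torch.mps  # `import torch` alone does not bring in the mps submodule

def torch_gc_mps():
    # free cached MPS memory, mirroring torch.cuda.empty_cache() on CUDA
    if torch.backends.mps.is_available():
        torch.mps.empty_cache()
```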
TangJicheng
14501f56aa
set timeout_keep_alive 2023-07-11 17:32:04 +09:00
TangJicheng
10d4e4ace2
add cmd_args: --timeout-keep-alive 2023-07-11 17:30:57 +09:00
AUTOMATIC1111
7b833291b3 Merge branch 'master' into dev 2023-07-11 06:25:50 +03:00
AUTOMATIC1111
f865d3e116 add changelog for 1.4.1 2023-07-11 06:23:52 +03:00
AUTOMATIC1111
910d4f61e5
Merge pull request #11720 from akx/closing
Use closing() with processing classes everywhere
2023-07-10 20:41:09 +03:00
AUTOMATIC1111
8d0078b6ef
Merge pull request #11718 from tangjicheng46/master
fix: add queue lock for refresh-checkpoints
2023-07-10 20:40:58 +03:00
Aarni Koskela
44c27ebc73 Use closing() with processing classes everywhere
Follows up on #11569
2023-07-10 20:08:23 +03:00
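`contextlib.closing` turns any object with a `close()` method into a context manager, guaranteeing cleanup even when an exception is raised. A generic sketch (Processing here is a hypothetical stand-in for the webui's processing classes):

```python
from contextlib import closing

class Processing:
    """Hypothetical stand-in: an object holding resources, with a close() method."""
    def run(self):
        return "result"
    def close(self):
        print("resources released")

# close() is called when the block exits, even if run() raises
with closing(Processing()) as p:
    print(p.run())
```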
tangjicheng
089a0022ae add queue lock for refresh-checkpoints 2023-07-10 23:10:14 +09:00
wfjsw
75f56406ce Revert Pull Request #11244
Revert "Add github mirror for the download extension"

This reverts commit 9ec2ba2d28.

Revert "Update code style"

This reverts commit de022c4c80.

Revert "Update call method"

This reverts commit e9bd18c57b.

Revert "move github proxy to settings, System page."

This reverts commit 4981c7d370.
2023-07-09 22:42:00 +08:00
AUTOMATIC1111
bcb6ad5fab
Merge pull request #11696 from WuSiYu/feat_SWIN_torch_compile
feat: add option SWIN_torch_compile to accelerate SwinIR upscale
2023-07-08 23:05:17 +03:00
SiYu Wu
44d66daaad add option SWIN_torch_compile to accelerate SwinIR upscale using torch.compile() 2023-07-09 03:27:33 +08:00
AUTOMATIC1111
7dcdf81b84
Merge pull request #11595 from akx/alisases
Fix typo: checkpoint_alisases
2023-07-08 17:53:55 +03:00
AUTOMATIC1111
e3507a1be4 fix for eslint 2023-07-08 17:53:17 +03:00
AUTOMATIC1111
4981c7d370 move github proxy to settings, System page. 2023-07-08 17:52:03 +03:00
AUTOMATIC1111
ee642a2ff4
Merge pull request #11244 from MaiXiaoMeng/dev
Add github mirror for the download extension
2023-07-08 17:38:29 +03:00
AUTOMATIC1111
4da92281f6 pin version for torch for Navi3 according to comment from #11228 2023-07-08 17:29:28 +03:00
Aarni Koskela
da468a585b Fix typo: checkpoint_alisases 2023-07-08 17:28:42 +03:00
AUTOMATIC1111
ed855783ed
Merge pull request #11228 from Beinsezii/dev
WEBUI.SH Navi 3 Support
2023-07-08 17:28:04 +03:00
AUTOMATIC1111
386f78035b
Merge pull request #11672 from nelsonjchen/patch-1
Add a link to an index-able/crawl-able wiki mirroring service of the wiki
2023-07-08 17:21:05 +03:00
AUTOMATIC1111
da8916f926 added torch.mps.empty_cache() to torch_gc()
changed a bunch of places that use torch.cuda.empty_cache() to use torch_gc() instead
2023-07-08 17:13:18 +03:00
AUTOMATIC1111
e161b5a025 rework #10436 to use shared.walk_files 2023-07-08 16:54:03 +03:00
AUTOMATIC1111
353031a014
Merge pull request #10436 from lenankamp/patch-1
Recursive batch img2img.py
2023-07-08 16:50:54 +03:00
AUTOMATIC1111
993dd9a892
Merge branch 'dev' into patch-1 2023-07-08 16:50:23 +03:00
AUTOMATIC1111
d7d6e8cfc8 use natural sort for shared.walk_files and shared.listfiles, as well as for dirs in extra networks 2023-07-08 16:45:59 +03:00
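Natural sort orders embedded numbers numerically instead of lexicographically, so "img10" comes after "img2". A minimal sketch of such a key function (the repo's own implementation may differ):

```python
import re

def natural_sort_key(s: str):
    # split into digit and non-digit runs; compare the digit runs numerically
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", s)]

print(sorted(["img10.png", "img2.png", "img1.png"], key=natural_sort_key))
# ['img1.png', 'img2.png', 'img10.png']
```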
AUTOMATIC1111
7a6abc59ea for #10650: change key to alt+arrows, enable by default 2023-07-08 16:15:28 +03:00
AUTOMATIC1111
12a29a677a
Merge pull request #10650 from missionfloyd/reorder-hotkeys
Hotkeys to move prompt elements
2023-07-08 16:12:01 +03:00
AUTOMATIC1111
274a3e21ba small rework for img2img PNG info 2023-07-08 15:42:00 +03:00
AUTOMATIC1111
1d71c36de2 third time's the charm 2023-07-08 15:21:29 +03:00
AUTOMATIC1111
9043b91649 additional changes for merge conflict for #11337 2023-07-08 15:14:24 +03:00
AUTOMATIC1111
b88645d9eb additional changes for merge conflict for #11337 2023-07-08 15:14:14 +03:00
AUTOMATIC1111
b0419b60a0
Merge pull request #11337 from FWeynschenk/img2img-batch-png-info
Img2img batch png info
2023-07-08 15:10:33 +03:00
AUTOMATIC1111
ec9bbda3da
Merge branch 'dev' into img2img-batch-png-info 2023-07-08 15:10:10 +03:00
AUTOMATIC1111
18256c5f01 fix for #11478 2023-07-08 14:58:33 +03:00
AUTOMATIC1111
211c3398f6
Merge pull request #11478 from catalpaaa/subpath
Fixing --subpath on newer gradio version
2023-07-08 14:53:42 +03:00
AUTOMATIC1111
539518292e
Merge pull request #11468 from NoCrypt/grid-colors-options
Add options to change colors in grid
2023-07-08 14:51:50 +03:00
AUTOMATIC1111
f0c62688d2
Merge pull request #11488 from AUTOMATIC1111/callback-after_extra_networks_activate
add callback after_extra_networks_activate
2023-07-08 14:50:11 +03:00
AUTOMATIC1111
3602602260 whitespace for #11477 2023-07-08 14:44:02 +03:00
AUTOMATIC1111
53924aeaf0
Merge pull request #11477 from hako-mikan/master
add `before_hr` script callback
2023-07-08 14:43:06 +03:00
AUTOMATIC1111
953147bf6b
Merge pull request #11495 from missionfloyd/end-paren-fix
Correctly remove end parenthesis with ctrl+up/down
2023-07-08 14:41:33 +03:00
AUTOMATIC1111
eb51acb89e
Merge pull request #11503 from AUTOMATIC1111/rename---add-stop-route-to---api-server-stop
Rename --add-stop-route to --api-server-stop
2023-07-08 14:40:21 +03:00
AUTOMATIC1111
6acc4cd7e1
Merge pull request #11513 from Akegarasu/dev
fix can't get current hash
2023-07-08 14:39:52 +03:00
AUTOMATIC1111
b25925c95b
Merge pull request #11520 from AUTOMATIC1111/extension-metadata
Extension metadata
2023-07-08 14:30:17 +03:00
AUTOMATIC1111
b74f661ed9
Merge pull request #11529 from hunshcn/sync-weight
sync default value of process_focal_crop_entropy_weight between ui and api
2023-07-08 14:24:48 +03:00
AUTOMATIC1111
7a7fa25d02 lint fix for #11492 2023-07-08 14:21:40 +03:00
AUTOMATIC1111
d78377ea5d
Merge pull request #11593 from akx/better-status-reporting-1
Better status reporting, part 1
2023-07-08 14:20:28 +03:00
AUTOMATIC1111
fc049a2fd3
Merge branch 'dev' into better-status-reporting-1 2023-07-08 14:19:34 +03:00
AUTOMATIC1111
ae74b44c69
Merge pull request #11596 from akx/use-read-info
postprocessing: use read_info_from_image
2023-07-08 14:14:12 +03:00
AUTOMATIC1111
9be8903ca9
Merge pull request #11567 from AUTOMATIC1111/seed_resize_to_0
Don't add "Seed Resize: -1x-1" to API image metadata
2023-07-08 13:58:31 +03:00
AUTOMATIC1111
e338f4142f
Merge pull request #11592 from onyasumi/launchscript-directory
Fixed launch script to be runnable from any directory
2023-07-08 13:57:01 +03:00
AUTOMATIC1111
3a294a08bc
Merge pull request #11535 from gshawn3/bugfix/11534
fix for #11534: canvas zoom and pan extension hijacking shortcut keys
2023-07-08 13:48:58 +03:00
AUTOMATIC1111
d12ccb91a8
Merge pull request #11631 from AUTOMATIC1111/gif-preview
Allow gif for extra network previews
2023-07-08 13:47:57 +03:00
AUTOMATIC1111
2151a9881f
Merge pull request #11492 from semjon00/dev
Fix throwing exception when trying to resize image with I;16 mode
2023-07-08 13:46:08 +03:00
AUTOMATIC1111
19772c3c97 fix problem with extra network saving images as previews losing generation info
add a description for save_image_with_geninfo
2023-07-08 13:43:42 +03:00
AUTOMATIC1111
16045d0877
Merge pull request #11637 from Hao-Wu/fix-has-mps-deprecated
Fix 'has_mps' deprecation warning from PyTorch
2023-07-08 13:11:52 +03:00
AUTOMATIC1111
5ed1ae5003
Merge pull request #11656 from jovijovi/dev
fix(api): convert to "RGB" if image mode is "RGBA" #11655
2023-07-08 13:10:50 +03:00
AUTOMATIC1111
46c2b1e202
Merge pull request #11660 from neilmahaseth/patch-1
Fix UnicodeEncodeError when writing to file CLIP Interrogator Batch Mode
2023-07-08 13:10:03 +03:00
AUTOMATIC1111
7348440524
Merge pull request #11569 from ramyma/hotfix-api-cache
Hotfix: API cache cleanup
2023-07-08 13:09:20 +03:00
Nelson Chen
a369a0cf65
Add a link to an index-able/crawl-able wiki mirroring service of the wiki
At the moment, the wiki is editable by GitHub users, so it is blocked from indexing. If you are searching for something related to this repo, Google and other search engines will not use the content for it.

This link hack just sticks a link on the README so search engines may prioritize it. At the moment, 0 pages from GitHub are indexed and only 7 pages from my proxy service are. If you add this, the rest should get indexed.

An indexable page looks like this: https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings. It is not meant to be read, just indexed, and users are expected to click through to the GitHub copy.

https://github-wiki-see.page/ has more information about the situation. I built the tool and I'm happy to answer any questions I can.

Similar: https://github.com/MiSTer-devel/Main_MiSTer#main_mister-main-binary-and-wiki-repo:~:text=For%20the%20purposes%20of%20getting%20google%20to%20crawl%20the%20wiki%2C%20here%27s%20a%20link%20to%20the%20(not%20for%20humans)%20crawlable%20wiki
2023-07-07 09:04:49 -07:00
gitama2023
f439179641
Added a prompt for users whose browser scaling is not 100%
Added a JavaScript file that detects browser scaling and prompts users when scale is not 100%
2023-07-07 16:18:01 +08:00
Neil Mahseth
c258dd34a8
Fix UnicodeEncodeError when writing to file CLIP Interrogator Batch Mode
The code snippet `print(interrogation_function(img), file=open(os.path.join(ii_output_dir, f"{left}.txt"), 'a'))` raises a UnicodeEncodeError with the message "'charmap' codec can't encode character '\u016b' in position 129". This error occurs because the default encoding used by the open() function cannot handle certain Unicode characters.

To fix this issue, the encoding parameter needs to be explicitly specified when opening the file. By using an appropriate encoding, such as 'utf-8', we can ensure that Unicode characters are properly encoded and written to the file.

The updated code should be modified as follows:

```python
print(interrogation_function(img), file=open(os.path.join(ii_output_dir, f"{left}.txt"), 'a', encoding='utf-8'))
```
By making this change, the code will no longer raise the UnicodeEncodeError and will correctly handle Unicode characters during the file write operation.
2023-07-06 22:02:47 +05:30
jovijovi
259967b7c6 fix(api): convert to "RGB" if image mode is "RGBA" 2023-07-06 18:43:17 +08:00
Hao-Wu
daf41a2734 Fix 'has_mps' deprecation warning from PyTorch 2023-07-06 15:37:10 +08:00
semjon00
fb661e089f Fix throwing exception when trying to resize image with I;16 mode 2023-07-05 15:39:04 +03:00
missionfloyd
c602471b85
Allow gif for extra network previews 2023-07-05 03:19:26 -06:00
Danil Boldyrev
f325783abd made the blur function optional, added exclusion buttons 2023-07-04 22:26:43 +03:00
missionfloyd
f731a728c6
Check seed_resize_from <= 0 2023-07-03 11:41:10 -06:00
ramyma
c1c0492859 Use contextlib for closing the generation process 2023-07-03 20:17:47 +03:00
ramyma
3278887317 Handle cleanup in case there's an exception thrown 2023-07-03 20:02:30 +03:00
Aarni Koskela
5c6a33b3e1 read_info_from_image: don't mutate info in passed-in image 2023-07-03 13:10:42 +03:00
Aarni Koskela
96f0593c8f read_info_from_image: add type 2023-07-03 13:10:20 +03:00
Aarni Koskela
b2c574891f read_info_from_image: add photoshop to ignored 2023-07-03 13:09:37 +03:00
Aarni Koskela
08f9b705cd Use read_info_from_image in postprocessing
Avoids bad keys such as `exif` ending up in the "PNG info" passed forward
2023-07-03 13:08:28 +03:00
Aarni Koskela
522a8b9f62 Add a status logger in modules.shared 2023-07-03 11:07:57 +03:00
Aarni Koskela
e430344347 API: use finally: for state.end() 2023-07-03 11:03:41 +03:00
Aarni Koskela
f44feb6a10 Add job argument to State.begin() 2023-07-03 11:03:41 +03:00
Aarni Koskela
b70001e618 Add SD_WEBUI_LOG_LEVEL envvar 2023-07-03 11:03:41 +03:00
Frank Tao
e33e2c5175
Update webui.sh 2023-07-03 03:17:27 -04:00
onyasumi
5a32d4fcb1 Fix launch script to be runnable from any directory 2023-07-03 07:15:19 +00:00
Danil Boldyrev
8519d52ef5 fixing the copy/paste function, correct code 2023-07-02 19:20:49 +03:00
ramyma
74d001bc68 Hotfix: call processing close to cleanup API generation calls 2023-07-02 04:59:59 +03:00
missionfloyd
7f46f81dd7
Change default seed_resize to 0 2023-07-01 17:20:56 -06:00
gshawn3
8a07c59baa fix for #11534: canvas zoom and pan extension hijacking shortcut keys 2023-06-30 03:49:26 -07:00
w-e-w
2ccc832b33 add extensions Update Created dates with sorting 2023-06-29 22:46:59 +09:00
Akiba
0416a7bfba
fix can't get current hash 2023-06-29 18:46:52 +08:00
w-e-w
b1c6e39620 starts left 2023-06-29 19:25:34 +09:00
w-e-w
d47324b898 add stars 2023-06-29 19:25:18 +09:00
hunshcn
0bc0e652a3 sync default value of process_focal_crop_entropy_weight between ui and api 2023-06-29 18:12:55 +08:00
w-e-w
cc9c171978 rename --add-stop-route to --api-server-stop 2023-06-29 14:21:28 +09:00
missionfloyd
0b0767939d Correctly remove end parenthesis with ctrl+up/down 2023-06-28 17:51:27 -06:00
w-e-w
9c2a7f1e8b add callback after_extra_networks_activate 2023-06-29 02:08:21 +09:00
NoCrypt
f74fb50495 Move change colors options to Saving images/grids 2023-06-28 20:24:57 +07:00
NoCrypt
d22eb8a17f Fix lint 2023-06-28 17:57:34 +07:00
NoCrypt
45ab7475d6 Revision 2023-06-28 17:55:58 +07:00
catalpaaa
24d4475bdb fixing --subpath on newer gradio version 2023-06-28 03:15:01 -07:00
hako-mikan
b0ec69b360
add 'before_hr callback' script callback 2023-06-28 18:37:08 +09:00
NoCrypt
da14f6a663 Add options to change colors in grid 2023-06-28 10:16:44 +07:00
Beinsezii
9d8af4bd6a
Merge branch 'AUTOMATIC1111:dev' into dev 2023-06-27 15:29:47 -07:00
AUTOMATIC1111
fab73f2e7d
Merge pull request #11325 from stablegeniusdiffuser/dev-batch-grid-metadata
Add parameter to differentiate between batch run grids or ordinary images to write proper metadata
2023-06-27 14:23:39 +03:00
AUTOMATIC1111
1bf01b73f4
Merge pull request #11046 from akx/ded-code
Remove a bunch of unused/vestigial code
2023-06-27 11:25:55 +03:00
AUTOMATIC
d06af4e517 fix and rework #11113 2023-06-27 09:26:18 +03:00
AUTOMATIC1111
a96687682a
Merge pull request #11113 from stevensu1977/master
add model exists status check /sdapi/v1/options  #11112
2023-06-27 09:24:12 +03:00
AUTOMATIC1111
0b97ae2832
Merge branch 'dev' into master 2023-06-27 09:23:15 +03:00
AUTOMATIC1111
3cd4fd51ef
Merge pull request #10823 from akx/model-loady
Upscaler model loading cleanup
2023-06-27 09:20:49 +03:00
AUTOMATIC1111
d4f9250c5a
Merge pull request #11201 from akx/ruff-upg
Upgrade Ruff to 0.0.272
2023-06-27 09:19:55 +03:00
AUTOMATIC
24129368f1 send tensors to the correct device when loading from safetensors file with memmap disabled for #11260 2023-06-27 09:19:04 +03:00
AUTOMATIC1111
14196548c5
Merge pull request #11260 from dhwz/dev
fix very slow loading speed of .safetensors files
2023-06-27 09:11:08 +03:00
AUTOMATIC1111
d35e246111
Merge pull request #11227 from deckar01/10141-gradio-user-exif
Add Gradio User to Metadata
2023-06-27 09:06:03 +03:00
AUTOMATIC1111
4147fd6b2f
Merge branch 'dev' into 10141-gradio-user-exif 2023-06-27 09:05:53 +03:00
AUTOMATIC1111
bedcd2f377
Merge pull request #11264 from huchenlei/meta_class
🐛 Allow Script to have custom metaclass
2023-06-27 09:02:51 +03:00
AUTOMATIC1111
58a9a261c4
Merge branch 'dev' into meta_class 2023-06-27 09:02:38 +03:00
AUTOMATIC1111
2c43dd766d
Merge pull request #11226 from AUTOMATIC1111/git-clone-progress
show Git clone progress
2023-06-27 09:01:04 +03:00
AUTOMATIC
9bb1fcfad4 alternate fix for catch errors when retrieving extension index #11290 2023-06-27 08:59:35 +03:00
AUTOMATIC1111
fa31dd80f5
Merge pull request #11315 from guming3d/master
fix: adding elem_id for img2img resize to and resize by tabs
2023-06-27 08:53:10 +03:00
AUTOMATIC1111
2b247f3533
Merge pull request #11415 from netux/extensions-toggle-all
Add checkbox to check/uncheck all extensions in the Installed tab
2023-06-27 08:44:37 +03:00
AUTOMATIC1111
3e76ae5f50
Merge pull request #11146 from AUTOMATIC1111/api-quit-restart
api quit restart
2023-06-27 08:41:36 +03:00
AUTOMATIC
f005efae72 Merge branch 'master' into dev 2023-06-27 08:39:34 +03:00
AUTOMATIC
394ffa7b0a Merge branch 'release_candidate' 2023-06-27 08:38:14 +03:00
AUTOMATIC
6ac247317d Merge branch 'release_candidate' into dev 2023-06-27 08:37:46 +03:00
AUTOMATIC1111
dbc88c9645
Merge pull request #11189 from daswer123/dev
Zoom and pan: More options in the settings and improved error output
2023-06-27 08:34:51 +03:00
AUTOMATIC1111
cd7c03e1f6
Merge pull request #11136 from arch-fan/typo
fixed typos
2023-06-27 06:40:43 +03:00
AUTOMATIC1111
a9e7a3db3e
Merge pull request #11199 from akx/makedirs
Use os.makedirs(..., exist_ok=True)
2023-06-27 06:39:51 +03:00
AUTOMATIC1111
001cbd369d
Merge pull request #11294 from zhtttylz/Fix_Typo_of_hints.js
Fix Typo of hints.js
2023-06-27 06:35:22 +03:00
AUTOMATIC1111
820bbb5b7b
Merge pull request #11408 from wfjsw/patch-1
Strip whitespaces from URL and dirname prior to extension installation
2023-06-27 06:20:59 +03:00
AUTOMATIC
4bd490c28d add missing infotext entry for the pad cond/uncond option 2023-06-27 06:18:43 +03:00
Martín (Netux) Rodríguez
dd268c48c9 feat(extensions): add toggle all checkbox to Installed tab
Small QoL addition.

While there is the option to disable all extensions with the radio buttons at the top, that only acts as an added flag and doesn't really change the state of the extensions in the UI.

A use case for this checkbox is to disable all extensions except for a few, which is important for debugging extensions.
You could do that before, but you'd have to uncheck and recheck every extension one by one.
2023-06-25 00:48:46 -03:00
Jabasukuriputo Wang
d5a5f2f29f
Strip whitespaces from URL and dirname prior to extension installation
This avoids some cryptic errors caused by accidental spaces around URLs
2023-06-25 01:31:02 +08:00
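The fix is a plain `str.strip()` applied before the URL and directory name are used; a tiny illustration with made-up values:

```python
url = "  https://github.com/user/extension.git \n"
dirname = " my-extension "

# accidental surrounding whitespace causes cryptic clone/install errors,
# so trim both values before installing
url, dirname = url.strip(), dirname.strip()
print(repr(url), repr(dirname))
```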
Ferdinand Weynschenk
c4c63dd5e4 resolve linter 2023-06-20 14:03:42 +02:00
Ferdinand Weynschenk
7ad48120d4 use ui params when retrieving png info fails
Don't want to interrupt the process since batches can be huge. This makes more sense to me than using the previous image's parameters
2023-06-20 13:50:02 +02:00
Ferdinand Weynschenk
928bd42da4 PNG info support at img2img batch 2023-06-20 13:33:36 +02:00
stablegeniusdiffuser
27e9e3f6fa Add use_main_prompt parameter to use proper metadata for batch run grids or individual images 2023-06-19 20:36:44 +02:00
George Gu
d2ccdcdc97 fix: adding elem_id for img2img resize to and resize by tabs 2023-06-19 10:16:18 +08:00
zhtttylz
f7ae0e68c9 Fix Typo of hints.js 2023-06-18 16:42:39 +08:00
w-e-w
2e1710d88e update the description of --add-stop-route 2023-06-18 14:07:41 +09:00
huchenlei
373ff5a217 🐛 Allow Script to have metaclass 2023-06-16 15:17:17 -04:00
dhwz
41363e0d27 fix very slow loading speed of .safetensors files 2023-06-16 18:10:15 +02:00
XiaoMeng Mai
e9bd18c57b Update call method 2023-06-16 00:09:54 +08:00
Jared Deckard
f603275d84 Add an opt-in infotext user name setting 2023-06-15 11:00:20 -05:00
Jared Deckard
8f18e67243 Add a user pattern to the filename generator 2023-06-15 11:00:11 -05:00
XiaoMeng Mai
de022c4c80 Update code style 2023-06-15 22:59:46 +08:00
XiaoMeng Mai
9ec2ba2d28 Add github mirror for the download extension 2023-06-15 22:43:09 +08:00
Jared Deckard
d3c86e5178 Note the Gradio user in the Exif data 2023-06-14 17:15:52 -05:00
Beinsezii
1d7c51fb9f WEBUI.SH Navi 3 Support
Navi 3 card now defaults to nightly torch to utilize rocm 5.5
for out-of-the-box support.

https://download.pytorch.org/whl/nightly/

While it's not yet on the main pytorch "get started" site,
it still seems perfectly indexable via pip which is all we need.

With this I'm able to clone a fresh repo and immediately run ./webui.sh
on my 7900 XTX without any problems.
2023-06-14 13:07:22 -07:00
w-e-w
376f793bde git clone show progress 2023-06-15 04:23:52 +09:00
Jared Deckard
fa9d2ac2ff Fix gradio special args in the call queue 2023-06-14 13:53:13 -05:00
w-e-w
6091c4e4aa terminate -> stop 2023-06-14 19:53:08 +09:00
w-e-w
49fb2a3376 respond 501 if not able to restart 2023-06-14 19:52:12 +09:00
w-e-w
6387f0e85d update workflow kill test server 2023-06-14 18:51:54 +09:00
w-e-w
5be6c026f5 rename routes 2023-06-14 18:51:47 +09:00
Danil Boldyrev
3a41d7c551 Formatting code with Prettier 2023-06-14 00:31:36 +03:00
Danil Boldyrev
9b687f013d Reworked the disabling of functions, refactored part of the code 2023-06-14 00:24:25 +03:00
Aarni Koskela
d807164776 textual_inversion/logging.py: clean up duplicate key from sets (and sort them) (Ruff B033) 2023-06-13 13:07:39 +03:00
Aarni Koskela
8ce9b36e0f Upgrade ruff to 272 2023-06-13 13:07:06 +03:00
Aarni Koskela
2667f47ffb Remove stray space from SwinIR model URL 2023-06-13 13:00:05 +03:00
Aarni Koskela
bf67a5dcf4 Upscaler.load_model: don't return None, just use exceptions 2023-06-13 12:44:25 +03:00
Aarni Koskela
e3a973a68d Add TODO comments to sus model loads 2023-06-13 12:38:29 +03:00
Aarni Koskela
0afbc0c235 Fix up if "http" in ...: to be more sensible startswiths 2023-06-13 12:38:29 +03:00
Aarni Koskela
89352a2f52 Move load_file_from_url to modelloader 2023-06-13 12:38:28 +03:00
Aarni Koskela
165ab44f03 Use os.makedirs(..., exist_ok=True) 2023-06-13 12:35:43 +03:00
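For reference, the refactor this commit applies everywhere replaces the racy check-then-create pattern with a single call:

```python
import os

path = "outputs/samples"  # hypothetical directory

# before: two steps, and a race if another process creates the dir in between
if not os.path.exists(path):
    os.makedirs(path)

# after: one call; no error if the directory already exists
os.makedirs(path, exist_ok=True)
```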
Danil Boldyrev
9a2da597c5 remove console.log 2023-06-12 22:21:42 +03:00
Danil Boldyrev
ee029a8cad Improved error output, improved settings menu 2023-06-12 22:19:22 +03:00
w-e-w
d80962681a remove fastapi.Response 2023-06-12 18:21:01 +09:00
w-e-w
b9664ab615 move _stop route to api 2023-06-12 18:15:27 +09:00
Su Wei
7e2d39a2d1 update model checkpoint switch code 2023-06-12 15:22:49 +08:00
w-e-w
9142be0a0d quit restart 2023-06-10 23:36:34 +09:00
arch-fan
5576a72322 fixed typos 2023-06-09 19:59:27 +00:00
AUTOMATIC
3b11f17a37 Merge branch 'dev' into release_candidate 2023-06-09 22:48:18 +03:00
AUTOMATIC
59419bd64a add changelog for 1.4.0 2023-06-09 22:47:58 +03:00
AUTOMATIC
cfdd1b9418 linter 2023-06-09 22:47:27 +03:00
AUTOMATIC1111
89e6c60546
Merge pull request #11092 from AUTOMATIC1111/Generate-Forever-during-generation
Allow activation of Generate Forever during generation
2023-06-09 22:33:23 +03:00
AUTOMATIC1111
d00139eea8
Merge pull request #11087 from AUTOMATIC1111/persistent_conds_cache
persistent conds cache
2023-06-09 22:32:49 +03:00
AUTOMATIC1111
b8d7506ebe
Merge pull request #11123 from akx/dont-die-on-bad-symlink-lora
Don't die when a LoRA is a broken symlink
2023-06-09 22:31:49 +03:00
AUTOMATIC1111
f9606b8826
Merge pull request #10295 from Splendide-Imaginarius/mk2-blur-mask
Split mask blur into X and Y components, patch Outpainting MK2 accordingly
2023-06-09 22:31:29 +03:00
AUTOMATIC1111
741bd71873
Merge pull request #11048 from DGdev91/force_python1_navi_renoir
Forcing Torch Version to 1.13.1 for RX 5000 series GPUs
2023-06-09 22:30:54 +03:00
Aarni Koskela
d75ed52bfc Don't die when a LoRA is a broken symlink
Fixes #11098
2023-06-09 13:26:36 +03:00
Splendide Imaginarius
72815c0211 Split Outpainting MK2 mask blur into X and Y components
Fixes unexpected noise in non-outpainted borders when using MK2 script.
2023-06-09 08:37:26 +00:00
Splendide Imaginarius
1503af60b0 Split mask blur into X and Y components
Prerequisite to fixing Outpainting MK2 mask blur bug.
2023-06-09 08:36:33 +00:00
Su Wei
8ca34ad6d8 add model exists status check to modules/api/api.py, /sdapi/v1/options [POST] 2023-06-09 13:14:20 +08:00
w-e-w
46e4777fd6 Generate Forever during generation
2023-06-08 17:56:03 +09:00
w-e-w
7f2214aa2b persistent conds cache
Update shared.py
2023-06-08 14:27:22 +09:00
AUTOMATIC1111
cf28aed1a7
Merge pull request #11058 from AUTOMATIC1111/api-wiki
link footer API to Wiki when API is not active
2023-06-07 07:49:59 +03:00
AUTOMATIC1111
806ea639e6
Merge pull request #11066 from aljungberg/patch-1
Fix upcast attention dtype error.
2023-06-07 07:48:52 +03:00
Alexander Ljungberg
d9cc0910c8
Fix upcast attention dtype error.
Without this fix, enabling the "Upcast cross attention layer to float32" option while also using `--opt-sdp-attention` breaks generation with an error:

```
  File "/ext3/automatic1111/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 612, in sdp_attnblock_forward
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.
```

The fix is to make sure to upcast the value tensor too.
2023-06-06 21:45:30 +01:00
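A minimal reproduction of the mismatch and the fix described above: all three tensors must share a dtype before scaled_dot_product_attention (shapes here are illustrative):

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 64, 40)                       # float32
k = torch.randn(1, 8, 64, 40)                       # float32
v = torch.randn(1, 8, 64, 40, dtype=torch.float16)  # half precision slipped through

# upcast q, k *and* v so the dtypes agree
q, k, v = (t.to(torch.float32) for t in (q, k, v))
out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
print(out.dtype)  # torch.float32
```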
DGdev91
62860c221e Skip forcing python and pytorch versions if TORCH_COMMAND already set 2023-06-06 15:43:32 +02:00
w-e-w
96e446218c link footer API to Wiki when API is not active 2023-06-06 18:58:44 +09:00
DGdev91
8646768801 Write "RX 5000 Series" instead of "Navi" in err 2023-06-06 10:03:20 +02:00
DGdev91
95d4d650d4 Check python version for Navi 1 only 2023-06-06 09:59:13 +02:00
DGdev91
e0d923bdf8 Force python1 for Navi1 only, use python_cmd for python 2023-06-06 09:55:49 +02:00
DGdev91
2788ce8c7b Fix error in webui.sh 2023-06-06 01:51:35 +02:00
DGdev91
8d98532b65 Forcing Torch Version to 1.13.1 for Navi and Renoir GPUs 2023-06-06 01:05:31 +02:00
AUTOMATIC1111
a009fe15fd
Merge pull request #11047 from AUTOMATIC1111/parse_generation_parameters_with_error
handles exception when parsing generation parameters from png info
2023-06-06 00:13:27 +03:00
w-e-w
851bf43520 print error and continue
2023-06-06 05:50:43 +09:00
Aarni Koskela
ba70a220e3 Remove a bunch of unused/vestigial code
As found by Vulture and some eyes
2023-06-05 22:43:57 +03:00
AUTOMATIC1111
0895c2369c
Merge pull request #11037 from AUTOMATIC1111/restart-autolaunch
fix rework-disable-autolaunch for new restart method
2023-06-05 20:57:31 +03:00
w-e-w
c2808f3040 SD_WEBUI_RESTARTING 2023-06-06 02:52:05 +09:00
w-e-w
eaace155ce restore old disable --autolaunch 2023-06-06 02:47:18 +09:00
AUTOMATIC1111
e89a248e2e
Merge pull request #11031 from akx/zoom-and-pan-namespace
Zoom and pan: namespace & simplify
2023-06-05 20:40:31 +03:00
AUTOMATIC1111
1dd8d571a4
Merge pull request #11043 from akx/restart-envvar
Restart: only do restart if running via the wrapper script
2023-06-05 20:06:40 +03:00
Aarni Koskela
46a5bd64ed Restart: only do restart if running via the wrapper script 2023-06-05 20:04:28 +03:00
w-e-w
1411a6e74b rework-disable-autolaunch 2023-06-06 01:09:30 +09:00
AUTOMATIC
18acc0b30d revert the message to how it was 2023-06-05 11:08:57 +03:00
AUTOMATIC1111
7a7a201d81
Merge pull request #10956 from akx/len
Simplify a bunch of `len(x) > 0`/`len(x) == 0` style expressions
2023-06-05 11:06:37 +03:00
Aarni Koskela
2d4c66f7b5 Zoom and Pan: simplify waitForOpts 2023-06-05 10:40:42 +03:00
Aarni Koskela
6163b38ad9 Zoom and Pan: use for instead of forEach 2023-06-05 10:37:00 +03:00
Aarni Koskela
afbb0b5f86 Zoom and Pan: simplify getElements (it's not actually async) 2023-06-05 10:37:00 +03:00
Aarni Koskela
68cda4f213 Zoom and Pan: use elementIDs from closure scope 2023-06-05 10:37:00 +03:00
Aarni Koskela
8fd20bd4c3 Zoom and Pan: move helpers into its namespace to avoid littering global scope 2023-06-05 10:36:55 +03:00
AUTOMATIC
9781f31f74 Merge branch 'master' into dev 2023-06-05 06:16:03 +03:00
AUTOMATIC
baf6946e06 Merge branch 'release_candidate' 2023-06-05 06:13:41 +03:00
AUTOMATIC1111
1e7e34337f
Merge pull request #11013 from ramyma/get_latent_upscale_modes_api
Get latent upscale modes API endpoint
2023-06-04 18:20:36 +03:00
ramyma
4faaf3e723 Add endpoint to get latent_upscale_modes for hires fix 2023-06-04 17:05:29 +03:00
AUTOMATIC
fbf88343de prevent calculating cons for second pass of hires fix when they are the same as for the first pass 2023-06-04 16:29:02 +03:00
AUTOMATIC
1ca5e76f7b fix for conds of second hires fix pass being calculated using first pass's networks, and add an option to revert to old behavior 2023-06-04 13:07:31 +03:00
AUTOMATIC1111
1c6dca9383
Merge pull request #10997 from AUTOMATIC1111/fix-conds-caching-with-extra-network
fix conds caching with extra network
2023-06-04 12:07:41 +03:00
AUTOMATIC1111
56bf522913
Merge pull request #10990 from vkage/sd_hijack_optimizations_bugfix
torch.cuda.is_available() check for SdOptimizationXformers
2023-06-04 11:34:32 +03:00
AUTOMATIC
2e23c9c568 fix the broken line for #10990 2023-06-04 11:33:51 +03:00
AUTOMATIC1111
0819383de0
Merge pull request #10975 from AUTOMATIC1111/restart3
Yet another method to restart webui.
2023-06-04 11:17:20 +03:00
AUTOMATIC1111
efc4c79b5e
Merge pull request #10980 from AUTOMATIC1111/sysinfo
Added sysinfo tab to settings
2023-06-04 11:16:32 +03:00
AUTOMATIC
aeba3cadd5 add whitelist for environment in the report
add extra link to view the report instead of downloading it
2023-06-04 11:16:00 +03:00
AUTOMATIC1111
b4b7e6e5f7
Merge pull request #11005 from daswer123/dev
Fixed bugs in the zoom builtin extensions and made the zoom function global
2023-06-04 10:59:25 +03:00
AUTOMATIC1111
7f28e8c445
Merge pull request #11006 from Vesnica/patch-1
Make save_pil_to_file have the same parameters as gradio's function
2023-06-04 10:58:14 +03:00
AUTOMATIC
f98f4f73aa infer styles from prompts, and an option to control the behavior 2023-06-04 10:56:48 +03:00
Vesnica
08f93da17c
Update ui_tempdir.py
Make the override function have the same input parameters as the original function
2023-06-04 14:20:23 +08:00
Danil Boldyrev
0432e37843 Correct definition of zoom level
Changed the regular expression so the scale is now always selected from style.transfo
2023-06-04 04:17:55 +03:00
Danil Boldyrev
ad3d6d9a22 Fixed visual bugs 2023-06-04 03:38:21 +03:00
Danil Boldyrev
1a49178330 Made a function applyZoomAndPan isolated each instance
Isolated each instance of applyZoomAndPan, now if you add another element to the page, they will work correctly
2023-06-04 03:04:46 +03:00
Danil Boldyrev
dc273f7473 Fixed the redmask bug 2023-06-04 01:18:27 +03:00
w-e-w
0a277ab591 remove redone compare 2023-06-04 05:19:47 +09:00
w-e-w
1c9d1b0ee0 simplify self.extra_network_data 2023-06-04 05:19:34 +09:00
w-e-w
f098e726d3 fix conds caching with extra network 2023-06-04 04:24:44 +09:00
Vivek K. Vasishtha
b1a72bc7e2
torch.cuda.is_available() check for SdOptimizationXformers 2023-06-03 21:54:27 +05:30
Danil Boldyrev
3e3635b114 Made the applyZoomAndPan function global for other extensions 2023-06-03 19:24:05 +03:00
AUTOMATIC1111
30bbb8bce3
Merge pull request #10987 from off99555/dev
Fix missing ext_filter kwarg
2023-06-03 18:57:10 +03:00
Chanchana Sornsoontorn
68d8423288
Fix missing ext_filter kwarg 2023-06-03 22:28:00 +07:00
AUTOMATIC1111
b2fa0a921d
Merge pull request #10838 from breengles/img2img-batch-processing
Img2img batch processing
2023-06-03 17:23:41 +03:00
AUTOMATIC1111
80ae378f34
Merge pull request #10942 from ramyma/round-upscale-result-dims
Round upscaled dimensions only when not divisible by 8
2023-06-03 14:50:46 +03:00
ramyma
8c8c3617a7 Use a more concise calculation for dest dims 2023-06-03 14:41:12 +03:00
ramyma
31f57455dd Round upscaled dimensions only when not divisible by 8 2023-06-03 14:36:10 +03:00
AUTOMATIC
cd7ec5f728 lint 2023-06-03 14:00:37 +03:00
AUTOMATIC
7393c1f99c Added sysinfo tab to settings 2023-06-03 13:55:35 +03:00
AUTOMATIC
333e63c091 yet another method to restart webui 2023-06-03 09:59:56 +03:00
AUTOMATIC1111
9d953c0e03
Merge pull request #10917 from AUTOMATIC1111/bug_template_cross_attention_optimization
Bug template cross attention optimization
2023-06-03 09:25:21 +03:00
AUTOMATIC1111
e0d8ce3d2b
Merge pull request #10946 from AUTOMATIC1111/fix-duplicate-optimizers
Fix duplicate Cross attention optimization after UI reload
2023-06-03 09:24:54 +03:00
AUTOMATIC1111
7fd53815d3
Merge pull request #10967 from waltercool/master
Added support for workarounds on Navi external GPU.
2023-06-03 09:09:25 +03:00
AUTOMATIC1111
b1fd2aaa8b
Merge pull request #10943 from catboxanon/sort
Allow dynamically sorting extra networks in UI
2023-06-03 09:05:22 +03:00
AUTOMATIC1111
08109b9bc0
Merge pull request #10902 from daswer123/dev
Improvement for zoom builtin extension
2023-06-03 09:02:40 +03:00
AUTOMATIC1111
58779b289e
Merge pull request #10957 from AUTOMATIC1111/fallback_version_info
fallback version info from CHANGELOG.md
2023-06-03 09:01:05 +03:00
w-e-w
df5a3cbefe fallback version info from CHANGELOG.md 2023-06-03 13:33:23 +09:00
w-e-w
d1bfc86ffc
Update modules/launch_utils.py
Co-authored-by: Aarni Koskela <akx@iki.fi>
2023-06-03 13:07:07 +09:00
Danil Boldyrev
5b682be59a small ui fix
In the error the user will see R instead of KeyR
2023-06-03 02:24:57 +03:00
Danil Boldyrev
1e0ab4015d Added the ability to swap the zoom hotkeys and resize the brush 2023-06-03 02:18:49 +03:00
catboxanon
9009e25cb1
Apply suggestions from code review
Co-authored-by: Aarni Koskela <akx@iki.fi>
2023-06-02 16:12:24 -04:00
Pablo Cholaky
8d970a4a97
Added support for workarounds on external GPU.
lspci detects VGA for main/integrated video cards and Display
for external video cards.

This commit should apply workarounds on computers with more than
one GPU. Useful for most laptops using weak iGPU and good dGPU.

Signed-off-by: Pablo Cholaky <waltercool@slash.cl>
2023-06-02 15:04:58 -04:00
Danil Boldyrev
d306d25e56 Made tooltip optional.
You can disable it in the settings.
Enabled by default
2023-06-02 19:10:28 +03:00
w-e-w
0dd6bca4f1 fallback version info from CHANGELOG.md 2023-06-02 22:02:21 +09:00
Aarni Koskela
51864790fd Simplify a bunch of len(x) > 0/len(x) == 0 style expressions 2023-06-02 15:07:10 +03:00
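The simplification leans on Python's truthiness rules: empty sequences are falsy, so explicit length comparisons are redundant:

```python
items = []

# before
if len(items) == 0:
    print("empty")

# after: idiomatic and equivalent for lists, dicts, strings, sets
if not items:
    print("empty")
```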
AUTOMATIC1111
6f754ab98b Merge pull request #10780 from akx/image-emb-fonts
Mark caption_image_overlay's textfont as deprecated; fix #10778
2023-06-02 14:36:22 +03:00
w-e-w
8f8405274c remove redundant 2023-06-02 17:18:42 +09:00
w-e-w
2bbe3f5f0a remove redundant list_optimizers() call 2023-06-02 16:51:15 +09:00
AUTOMATIC
eed7b2776e add changelog 2023-06-02 10:39:16 +03:00
AUTOMATIC1111
cbc38a903b Merge pull request #10905 from AUTOMATIC1111/fix-10896-pnginfo-parameters
fix 10896 pnginfo parameters
2023-06-02 10:37:35 +03:00
AUTOMATIC
eeb685b0e5 bump gradio version to fix tmp filenames for images 2023-06-02 10:34:59 +03:00
w-e-w
b617c634a8 Cross attention optimization
2023-06-02 14:14:15 +09:00
catboxanon
4cc0cede6d lint fixes 2023-06-02 04:12:08 +00:00
catboxanon
7dca8e7698 Support dynamic sort of extra networks 2023-06-02 04:08:45 +00:00
Danil Boldyrev
38aca6f605 Added a hotkey repeat check to avoid bugs 2023-06-02 01:26:25 +03:00
Danil Boldyrev
68c4beab46 Added the ability to configure hotkeys via webui
Now you can configure the hotkeys directly through the settings

JS and Python scripts are tested and code style compliant
2023-06-02 01:04:17 +03:00
AUTOMATIC
cbe1799797 Merge branch 'master' into release_candidate 2023-06-01 21:36:48 +03:00
AUTOMATIC
3e995778fc Merge branch 'master' into dev 2023-06-01 21:36:06 +03:00
AUTOMATIC
b6af0a3809 Merge branch 'release_candidate' 2023-06-01 21:35:14 +03:00
AUTOMATIC
a9674359ca revert the erroneous change for model setting added in df02498d 2023-06-01 19:52:04 +03:00
Artem Kotov
ba110bf093
fallback to original file retrieving; skip img if mask not found
usage of `shared.walk_files` breaks controlnet extension
images are processed in a different order,
which leads to a mismatch between the image file used for img2img and the one used for controlnet
(if no folder is specified for controlnet,
or the same as the img2img input dir is used for it)
2023-06-01 15:44:55 +04:00
Artem Kotov
49f4b4be67
add subdir support for images, masks and output; search mask only in subdir 2023-06-01 11:29:56 +04:00
AUTOMATIC
a5e851028e add hiding and a colspans to startup profile table 2023-06-01 10:01:42 +03:00
AUTOMATIC
b3390a9840 Merge branch 'dev' into startup-profile 2023-06-01 08:42:50 +03:00
AUTOMATIC
8c3e64f4f6 update readme 2023-06-01 08:13:09 +03:00
AUTOMATIC
3ee1238630 revert default cross attention optimization to Doggettx
make --disable-opt-split-attention command line option work again
2023-06-01 08:12:21 +03:00
AUTOMATIC
36888092af revert default cross attention optimization to Doggettx
make --disable-opt-split-attention command line option work again
2023-06-01 08:12:06 +03:00
AUTOMATIC
17a66931da update readme 2023-06-01 07:29:52 +03:00
AUTOMATIC
915d1da1cd assign devices.dtype early because it's needed before the model is loaded 2023-06-01 07:28:46 +03:00
AUTOMATIC
f1533de982 assign devices.dtype early because it's needed before the model is loaded 2023-06-01 07:28:20 +03:00
AUTOMATIC1111
e980a4bd88
Merge pull request #10905 from AUTOMATIC1111/fix-10896-pnginfo-parameters
fix 10896 pnginfo parameters
2023-06-01 06:54:19 +03:00
w-e-w
0bf09c30c6 remove redundant 2023-06-01 06:34:53 +09:00
w-e-w
72f6367b9b fix 10896 pnginfo parameters 2023-06-01 06:24:37 +09:00
AUTOMATIC
884435796a add changelog 2023-05-31 23:08:31 +03:00
AUTOMATIC
8a561d94e6 use ui_reorder_list rather than ui_reorder for the UI reorder option, to make the program not break when reverting to an old version 2023-05-31 23:05:44 +03:00
Danil Boldyrev
c5d70fe1d3 Fixed the problem with sticking to the mouse, created a tooltip 2023-05-31 23:02:49 +03:00
AUTOMATIC
3690e4e82c fix [Bug]: LoRA don't apply on dropdown list sd_lora #10880 2023-05-31 22:57:27 +03:00
AUTOMATIC1111
6427ffde4d Merge pull request #10808 from AUTOMATIC1111/fix-disable-png-info
fix disable png info
2023-05-31 22:56:56 +03:00
AUTOMATIC1111
c63d46ceb8 Merge pull request #10804 from AUTOMATIC1111/fix-xyz-clip
Fix get_conds_with_caching()
2023-05-31 22:54:51 +03:00
AUTOMATIC1111
fae8bdfa48 Merge pull request #10785 from nyqui/fix-hires.fix
fix "hires. fix" prompt sharing same labels with txt2img_prompt
2023-05-31 22:54:24 +03:00
AUTOMATIC
10dbee0d59 add quoting for infotext values that have a colon in them 2023-05-31 22:54:00 +03:00
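Infotext is a comma-and-colon-delimited "key: value" line, so values containing a colon need quoting to stay parseable. A hypothetical sketch of such a quoting helper (not the webui's exact implementation):

```python
def quote_infotext_value(value: str) -> str:
    # quote values that would collide with the "key: value, key: value" syntax
    if any(c in value for c in (":", ",", "\n")):
        return '"' + value.replace('"', '\\"') + '"'
    return value

print(quote_infotext_value("plain"))        # plain
print(quote_infotext_value("model: v1.5"))  # "model: v1.5"
```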
AUTOMATIC
48875af7a1 fix [Bug]: LoRA don't apply on dropdown list sd_lora #10880 2023-05-31 22:45:16 +03:00
AUTOMATIC
df02498d03 add an option to show selected setting in main txt2img/img2img UI
split some code from ui.py into ui_settings.py and ui_gradio_extensions.py
add before_process callback for scripts
add ability for alwayson scripts to specify section and let user reorder those sections
2023-05-31 22:40:09 +03:00
AUTOMATIC
583fb9f066 change UI reorder setting to multiselect 2023-05-31 20:31:17 +03:00
AUTOMATIC
05933840f0 rename print_error to report, use it with together with package name 2023-05-31 19:56:37 +03:00
AUTOMATIC1111
d67ef01f62
Merge pull request #10780 from akx/image-emb-fonts
Mark caption_image_overlay's textfont as deprecated; fix #10778
2023-05-31 19:37:58 +03:00
AUTOMATIC1111
726f3feb2b
Merge pull request #10863 from akx/ui-current-tab-top-level
Frontend: only look at top-level tabs, not nested tabs
2023-05-31 19:34:59 +03:00
AUTOMATIC1111
e72013ea67
Merge pull request #10638 from catboxanon/patch/revert-10586
Revert discarding penultimate sigma for DPM-Solver++(2M) SDE
2023-05-31 19:34:20 +03:00
AUTOMATIC1111
80583263a2
Merge pull request #10784 from AUTOMATIC1111/update-deps
Update xformers to 0.0.20
2023-05-31 19:32:13 +03:00
AUTOMATIC1111
9013559eef
Merge pull request #10783 from akx/sync-req
Sync requirements files
2023-05-31 19:31:27 +03:00
AUTOMATIC1111
177d4b6828
Merge branch 'dev' into sync-req 2023-05-31 19:31:19 +03:00
AUTOMATIC1111
881de0df38
Merge pull request #10803 from klimaleksus/refactoring-for-embedding-merge
Refactor EmbeddingDatabase.register_embedding() to allow unregistering
2023-05-31 19:29:47 +03:00
AUTOMATIC1111
670195d720
Merge pull request #10808 from AUTOMATIC1111/fix-disable-png-info
fix disable png info
2023-05-31 19:20:19 +03:00
AUTOMATIC1111
003ed0f087
Merge pull request #10813 from AUTOMATIC1111/clarify-issue-template
clarify issue template
2023-05-31 19:18:49 +03:00
AUTOMATIC1111
8598587f1c
Merge pull request #10806 from akx/upgrade-transformers
Upgrade transformers from 4.25.1 to 4.29.2
2023-05-31 19:17:43 +03:00
AUTOMATIC1111
d9bd7ada76
Merge pull request #10820 from akx/report-error
Add & use modules.errors.print_error
2023-05-31 19:16:14 +03:00
AUTOMATIC1111
52b8752e62
Merge branch 'dev' into report-error 2023-05-31 19:15:21 +03:00
AUTOMATIC1111
78a602ae8c
Merge pull request #10796 from ramyma/round-upscale-result-dims
Round down scale destination dimensions to nearest multiple of 8
2023-05-31 19:06:07 +03:00
AUTOMATIC1111
2fcd64b9e8
Merge pull request #10805 from akx/gitpython-no-persistent-processes
Patch GitPython to not use leaky persistent processes
2023-05-31 19:05:03 +03:00
AUTOMATIC1111
741ab6bed1
Merge pull request #10788 from yoinked-h/patch-1
typo
2023-05-31 18:58:06 +03:00
AUTOMATIC1111
11a6a669d1
Merge pull request #10814 from missionfloyd/gamepad-disconnect
Only poll gamepads while connected
2023-05-31 18:57:38 +03:00
AUTOMATIC1111
58dbd0ea4d
Merge pull request #10759 from daswer123/dev
Add the ability to zoom and move the canvas
2023-05-31 18:52:22 +03:00
AUTOMATIC1111
3e48f7d30c
Merge pull request #10804 from AUTOMATIC1111/fix-xyz-clip
Fix get_conds_with_caching()
2023-05-31 18:47:24 +03:00
AUTOMATIC1111
0b0f60f954
Merge pull request #10856 from akx/untamed
Remove taming_transformers dependency
2023-05-31 18:46:15 +03:00
AUTOMATIC1111
69f49a935a
Merge pull request #10845 from DragonHawkAlpha/master
Added VAE listing to web API. Via: /sdapi/v1/sd-vae
2023-05-31 18:44:46 +03:00
AUTOMATIC1111
fec089d8f1
Merge pull request #10878 from willfrey/patch-1
Fix typo in `--update-check` help message
2023-05-31 18:41:05 +03:00
AUTOMATIC1111
c3a61425b8
Merge pull request #10848 from DavidQChuang/master
Fix s_min_uncond default type int
2023-05-31 18:40:27 +03:00
AUTOMATIC1111
e7439b5cbe
Merge pull request #10785 from nyqui/fix-hires.fix
fix "hires. fix" prompt sharing same labels with txt2img_prompt
2023-05-31 18:40:00 +03:00
Will Frey
fb1cb6d364
Fix typo in --update-check help message
Change `chck` to `check`
2023-05-30 22:05:12 -04:00
Aarni Koskela
f81931c591 Frontend: only look at top-level tabs, not nested tabs
Refs https://github.com/adieyal/sd-dynamic-prompts/issues/459#issuecomment-1568543926
2023-05-30 17:54:29 +03:00
Danil Boldyrev
c928c228af a small fix for very wide images, where the scroll bar caused the wrong zoom 2023-05-30 16:35:52 +03:00
Aarni Koskela
5fcdaa6a7f Vendor in the single module used from taming_transformers; remove taming_transformers dependency
(and fix the two ruff complaints)
2023-05-30 12:47:57 +03:00
missionfloyd
baa81126c4 Move gamepaddisconnected listener 2023-05-29 23:52:19 -06:00
David Chuang
3fc8aeb48d
Fix s_min_uncond default type int 2023-05-29 20:17:25 -04:00
James
42e020c1c1 Added VAE listing to web API. 2023-05-29 22:25:43 +01:00
Danil Boldyrev
8ab4e55fe3 Moved the script to the extension build-in 2023-05-29 21:39:10 +03:00
Artem Kotov
23314a6e27 ruffed 2023-05-29 21:38:49 +04:00
Artem Kotov
6c610a8a95 add scale_by to batch processing 2023-05-29 20:47:20 +04:00
Artem Kotov
c8e67b6732 improve filename matching for mask
we should not rely on the mask filename having the same extension
as the image filename, so better pattern matching is added
2023-05-29 20:39:24 +04:00
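A minimal sketch of extension-agnostic mask lookup as described above; the helper name is hypothetical, not the commit's actual code:

```python
from pathlib import Path

def find_mask_for_image(image_path: str, mask_dir: str):
    # Match a mask to an image by file stem, without assuming the mask
    # shares the image's extension (e.g. image.png may pair with image.jpg).
    stem = Path(image_path).stem
    candidates = sorted(Path(mask_dir).glob(f"{stem}.*"))
    return candidates[0] if candidates else None
```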
w-e-w
4a449375a2 fix get_conds_with_caching() 2023-05-30 01:07:35 +09:00
w-e-w
123641e4ec Revert "fix xyz clip"
This reverts commit edd766e70a.
2023-05-30 01:06:23 +09:00
Aarni Koskela
00dfe27f59 Add & use modules.errors.print_error where currently printing exception info by hand 2023-05-29 09:17:30 +03:00
Aarni Koskela
77a10c62c9 Patch GitPython to not use leaky persistent processes 2023-05-29 08:31:11 +03:00
missionfloyd
679e873875
Update imageviewerGamepad.js 2023-05-28 20:49:46 -06:00
missionfloyd
df59b74ced Only poll gamepads while connected 2023-05-28 20:42:47 -06:00
w-e-w
7dfee8a3bd clarify issue template 2023-05-29 11:01:58 +09:00
w-e-w
2aca613a61 fix disable png info 2023-05-29 07:30:32 +09:00
Aarni Koskela
018f77f0b8 Upgrade transformers
Refs https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9035#issuecomment-1485461039
2023-05-29 00:58:52 +03:00
w-e-w
edd766e70a fix xyz clip 2023-05-29 05:40:38 +09:00
klimaleksus
4635f31270
Refactor EmbeddingDatabase.register_embedding() to allow unregistering 2023-05-29 01:09:59 +05:00
ramyma
3539885f0e Round down scale destination dimensions to nearest multiple of 8 2023-05-28 21:41:54 +03:00
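The rounding itself is one line; a sketch (function name illustrative), since Stable Diffusion pipelines expect width/height divisible by 8:

```python
def scale_dim(dim: int, scale: float, multiple: int = 8) -> int:
    # Round the scaled dimension down to the nearest multiple of 8.
    return int(dim * scale) // multiple * multiple

# e.g. scale_dim(513, 1.5) -> 768 rather than 769
```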
Danil Boldyrev
4d7b63f489 changed the document to gradioApp() 2023-05-28 20:32:21 +03:00
Danil Boldyrev
f48bce5f68 Corrected the code according to Code style 2023-05-28 20:22:35 +03:00
yoinked
905c3fe23e
typo
vidocard -> videocard
2023-05-28 08:39:00 -07:00
nyqui
bae2fca523
fix "hires. fix" prompt/neg sharing same labels as txt2img_prompt/negative_prompt 2023-05-28 22:59:29 +09:00
Aarni Koskela
c1a5068ebe Synchronize requirements/requirements_versions
* Remove deps not listed in _versions from requirements

* Omit versions when they don't match _versions
2023-05-28 16:42:39 +03:00
Sakura-Luna
cf07983a6e
Upgrade xformers 2023-05-28 20:42:19 +08:00
Aarni Koskela
3d42411c3d Sort requirements files 2023-05-28 15:40:14 +03:00
Aarni Koskela
1013758933 Mark caption_image_overlay's textfont as deprecated; fix #10778 2023-05-28 14:48:50 +03:00
AUTOMATIC
b957dcfece add quoting for infotext values that have a colon in them 2023-05-28 10:39:57 +03:00
AUTOMATIC
f9809e6e40 Merge branch 'master' into dev 2023-05-28 06:59:20 +03:00
Danil Boldyrev
9e69009d1b Improve reset zoom when toggling tabs 2023-05-28 01:56:48 +03:00
Danil Boldyrev
433c70b403 Formatted with Prettier; added fullscreen-mode canvas expansion function 2023-05-28 01:31:23 +03:00
Danil Boldyrev
662af75973 Ability to zoom and move the canvas 2023-05-27 22:54:45 +03:00
AUTOMATIC
20ae71faa8 fix linter issue for 1.3.0 2023-05-27 20:23:16 +03:00
AUTOMATIC
6095ade147 fix serving images that have already been saved without the temp files function, which broke after updating gradio 2023-05-27 20:19:10 +03:00
AUTOMATIC
dd377637ca update the changelog to mention 1.3.0 version 2023-05-27 20:16:33 +03:00
AUTOMATIC
50906bf78b Merge branch 'release_candidate' 2023-05-27 20:13:26 +03:00
AUTOMATIC1111
9bc037d045
Merge pull request #10655 from fumitakayano/fumitakayano
Added format to specify VAE filename for generated image filenames
2023-05-27 20:11:21 +03:00
AUTOMATIC1111
d0e8fa627d
Merge pull request #10569 from strelokhalfer/pr
Change 'images.zip' to pattern settings
2023-05-27 20:10:17 +03:00
AUTOMATIC1111
2fc2fbb4ea
Merge pull request #10708 from akx/on-ui-update-throttled
Add onAfterUiUpdate callback
2023-05-27 20:09:15 +03:00
AUTOMATIC1111
5d29672b32
Merge pull request #10697 from catboxanon/patch/image-info
Cleaner image metadata read
2023-05-27 20:07:51 +03:00
AUTOMATIC1111
d92a6acf0e
Merge pull request #10739 from linkoid/fix-ui-debug-mode-exit
Fix --ui-debug-mode exit
2023-05-27 20:02:07 +03:00
AUTOMATIC1111
348abeb99d
Merge pull request #10722 from maybe-hello-world/master
Download ROCm for AMD GPU only if NVIDIA is not present
2023-05-27 19:56:18 +03:00
AUTOMATIC1111
ba812b4495
Merge pull request #10718 from kernelmethod/libtcmalloc_fixes
Small fixes to prepare_tcmalloc for Debian/Ubuntu compatibility
2023-05-27 19:55:02 +03:00
AUTOMATIC1111
0666f7c597
Merge pull request #10694 from akx/tooltipsies
Tooltip fixes & optimizations
2023-05-27 19:54:09 +03:00
AUTOMATIC
e8e7fe11e9 updates for the noise schedule settings 2023-05-27 19:53:09 +03:00
AUTOMATIC
654234ec56 Merge remote-tracking branch 'KohakuBlueleaf/custom-k-sched-settings' into dev 2023-05-27 19:08:02 +03:00
AUTOMATIC
633867ecc6 fix serving images that have already been saved without the temp files function, which broke after updating gradio 2023-05-27 19:06:49 +03:00
AUTOMATIC
339b531570 custom unet support 2023-05-27 15:47:33 +03:00
linkoid
1f0fdede17 Show full traceback in get_sd_model()
to reveal if an error is caused by an extension
2023-05-26 15:25:31 -04:00
linkoid
3829afec36 Remove exit() from select_checkpoint()
Raising a FileNotFoundError instead.
2023-05-26 15:08:53 -04:00
Roman Beltiukov
bdc371983e
Update webui.sh 2023-05-26 02:09:09 -07:00
missionfloyd
6645f23c4c
Merge branch 'dev' into reorder-hotkeys 2023-05-25 18:53:33 -06:00
missionfloyd
43bdaa2f0e Make ctrl+left/right optional 2023-05-25 18:49:28 -06:00
Roman Beltiukov
b2530c965c
Merge branch 'dev' into master 2023-05-25 15:10:10 -07:00
Roman Beltiukov
09d9c3d287
change to AMD only if NVIDIA is not present 2023-05-25 14:45:05 -07:00
kernelmethod
d29fe44e46 Small fixes to prepare_tcmalloc for Debian/Ubuntu compatibility
- /usr/sbin (where ldconfig is usually located) is not typically on users' PATHs by default, so we set that variable before trying to run ldconfig.
- The libtcmalloc library is called libtcmalloc_minimal on Debian/Ubuntu systems. We now check whether libtcmalloc_minimal exists when running prepare_tcmalloc.
2023-05-25 14:51:47 -04:00
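A rough Python rendering of the described shell logic (the webui's actual fix lives in webui.sh; function name illustrative):

```python
import os
import subprocess

def find_tcmalloc():
    # ldconfig usually lives in /usr/sbin, which is often missing from
    # users' PATHs, so extend PATH before invoking it.
    env = dict(os.environ)
    env["PATH"] = env.get("PATH", "") + ":/usr/sbin"
    out = subprocess.run(["ldconfig", "-p"], env=env,
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # On Debian/Ubuntu the library ships as libtcmalloc_minimal;
        # the substring check matches both variants.
        if "libtcmalloc" in line:
            return line.split("=>")[-1].strip()
    return None
```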
catboxanon
60062b51d8
Remove try/except in img metadata read 2023-05-25 08:33:40 -04:00
Aarni Koskela
dc7a1bbb1c Use onAfterUiUpdate where possible 2023-05-25 09:09:13 +03:00
Aarni Koskela
bc53ecf298 Add onAfterUiUpdate callback 2023-05-25 09:09:01 +03:00
Aarni Koskela
54696dce05 Document on* handlers (for extension authors' sake) 2023-05-25 09:03:14 +03:00
Aarni Koskela
9574ebe212 Merge executeCallbacks and runCallback, simplify + optimize 2023-05-25 09:02:41 +03:00
Aarni Koskela
f661fb0fd3 Just use console.error, it's in all browsers 2023-05-25 09:00:45 +03:00
catboxanon
7a1bbf99da
Cleaner image metadata read 2023-05-24 16:41:22 -04:00
Aarni Koskela
32b0f7c9bb Add support for tooltips on dropdown options 2023-05-24 20:45:05 +03:00
Aarni Koskela
b82d4a65fe Restore support for dropdown tooltips 2023-05-24 20:42:47 +03:00
Aarni Koskela
d66c64b9d7 Optimize tooltip checks
* Instead of traversing tens of thousands of text nodes, only look at elements and their children
* Debounce the checks to happen only every one second
2023-05-24 20:42:46 +03:00
strelokhalfer
fb5d0ef209 Changed 'images.zip' to generation by pattern 2023-05-24 18:17:02 +03:00
Kohaku-Blueleaf
a69b71a37f use Schedule instead of Sched 2023-05-24 20:40:37 +08:00
Kohaku-Blueleaf
4b88e24ebe improvements
See:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/10649#issuecomment-1561047723
2023-05-24 20:35:58 +08:00
Kohaku-Blueleaf
1601fccebc Use automatic instead of None/default 2023-05-24 00:18:09 +08:00
Kohaku-Blueleaf
27962ded4a Fix ruff error 2023-05-23 23:50:19 +08:00
AUTOMATIC
a6e653be26 possible fix for empty list of optimizations #10605 2023-05-23 18:49:15 +03:00
AUTOMATIC
0e1c41998a fix bad styling for thumbs view in extra networks #10639 2023-05-23 18:49:15 +03:00
Kohaku-Blueleaf
72377b0251 Use type to determine if it is enabled 2023-05-23 23:48:23 +08:00
AUTOMATIC
b186045fee possible fix for empty list of optimizations #10605 2023-05-23 18:02:09 +03:00
AUTOMATIC
3f50b7d71c fix bad styling for thumbs view in extra networks #10639 2023-05-23 14:07:00 +03:00
fumitaka.yano
1db7d21283 Subject:
Improvements to handle VAE filenames in generated image filenames

Body:
1) Added new line 24 to import the sd_vae module.
2) Added new method get_vae_filename at lines 340-349 to obtain the VAE filename to be used for image generation and further process it to extract only the filename by splitting it at the dot symbol.
3) Added a new lambda function 'vae_filename' at line 373 to handle VAE filenames.

Reason:
A function was needed to get the VAE filename and handle it in the program.

Test:
We tested whether we could use this new functionality to get the expected file names.
The correct behaviour was confirmed for the following commonly distributed VAE files:
vae-ft-mse-840000-ema-pruned.safetensors -> vae-ft-mse-840000-ema-pruned
anything-v4.0.vae.pt -> anything-v4.0

ruff response:
There were no problems with the code I added.

There was a minor pre-existing warning on lines I did not modify; I left those unchanged as they were not relevant to this change.
Logged:
images.py:426:56: F841 [*] Local variable `_` is assigned to but never used
images.py:432:43: F841 [*] Local variable `_` is assigned to but never used

Impact:
This change makes it easier to retrieve the VAE filename used for image generation and use it in the programme.
2023-05-23 15:56:08 +09:00
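A standalone sketch reproducing the commit's test cases; the suffix list and function name are assumptions (the commit itself splits at the dot symbol inside images.py):

```python
import os

# Assumed suffixes, chosen to reproduce the test cases above.
VAE_SUFFIXES = (".vae.pt", ".vae.safetensors", ".safetensors", ".ckpt", ".pt")

def get_vae_base_name(vae_path: str) -> str:
    # 'vae-ft-mse-840000-ema-pruned.safetensors' -> 'vae-ft-mse-840000-ema-pruned'
    # 'anything-v4.0.vae.pt'                     -> 'anything-v4.0'
    name = os.path.basename(vae_path)
    for suffix in VAE_SUFFIXES:
        if name.endswith(suffix):
            return name[: -len(suffix)]
    return name
```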
Kohaku-Blueleaf
78aed1fa4a Fix xyz 2023-05-23 11:47:32 +08:00
Kohaku-Blueleaf
70650f87a4 Use better way to impl 2023-05-23 11:34:51 +08:00
missionfloyd
dafe519363 Fix lint errors 2023-05-22 21:23:39 -06:00
Kohaku-Blueleaf
1846ad36a3 Use settings instead of main interface 2023-05-23 10:58:57 +08:00
missionfloyd
468056958b Add reorder hotkeys
Shifts selected items with ctrl+left/right
2023-05-22 20:46:25 -06:00
Kohaku-Blueleaf
ec1608308c Merge branch 'custom-k-sched' of https://github.com/KohakuBlueleaf/stable-diffusion-webui into custom-k-sched 2023-05-23 09:55:31 +08:00
Kohaku-Blueleaf
89c44bbc15 Add hint for custom k_diffusion scheduler 2023-05-23 09:52:15 +08:00
Kohaku-Blueleaf
38aaad654b
Better hint for user
Co-authored-by: catboxanon <122327233+catboxanon@users.noreply.github.com>
2023-05-23 09:38:30 +08:00
AUTOMATIC1111
80a723cbcf
Merge pull request #10644 from ArthurHeitmann/fix-inpainting-canvas-noise
Fix for #10643 (Inpainting mask sometimes not working)
2023-05-22 23:22:34 +03:00
ArthurHeitmann
e1c44267ea Fix for #10643 (pixel noise in webui inpainting canvas breaking inpainting, so that it behaves like plain img2img) 2023-05-22 21:56:26 +02:00
AUTOMATIC1111
809001fe41
Merge pull request #10623 from akx/bump-gradio
Bump gradio to 3.32
2023-05-22 22:18:05 +03:00
AUTOMATIC1111
d77ba18d5d
Merge pull request #10635 from prodialabs/master
disable `timeout_keep_alive`: fixes #10625 #10510 #10474
2023-05-22 22:17:25 +03:00
catboxanon
51d672890d
Revert #10586 2023-05-22 13:06:57 -04:00
Kohaku-Blueleaf
403b304162 use sigma_max/min in model if sigma_max/min is 0 2023-05-23 00:29:38 +08:00
Kohaku-Blueleaf
65a87ccc9b Add error information for recursion error 2023-05-23 00:09:49 +08:00
Kohaku-Blueleaf
302d95c726 Minor naming fixes 2023-05-22 23:43:06 +08:00
Kohaku-Blueleaf
4365c35bf9 Avoid loop import 2023-05-22 23:41:14 +08:00
Kohaku-Blueleaf
5dfb1f597b remove unrelated code 2023-05-22 23:36:16 +08:00
Kohaku-Blueleaf
7dc9d9e27e only add metadata when k_sched has actually been used 2023-05-22 23:34:16 +08:00
Kohaku-Blueleaf
7882f76da4 Replace karras by k_diffusion, fix gen info 2023-05-22 23:26:28 +08:00
Kohaku-Blueleaf
f821051443 Change karras to kdiffusion 2023-05-22 23:09:03 +08:00
Kohaku-Blueleaf
e6269cba7f Add dropdown for scheduler type 2023-05-22 23:02:05 +08:00
Monty Anderson
efc9853059 modules/api/api.py: disable timeout_keep_alive 2023-05-22 15:52:44 +01:00
Kohaku-Blueleaf
90ec557d60 remove debug print 2023-05-22 22:06:13 +08:00
Kohaku-Blueleaf
a104879869 Add custom karras scheduler 2023-05-22 21:52:46 +08:00
AUTOMATIC
cc2f6e3b7b fix error in dragdrop logic 2023-05-22 15:40:10 +03:00
Aarni Koskela
47b669bc9f Upgrade Gradio, remove docs URL hack 2023-05-22 09:53:24 +03:00
AUTOMATIC
ee65e72931 repair file paste for Firefox from #10615
remove animation when pasting files into prompt
rework two dragdrop js files into one
2023-05-22 09:49:59 +03:00
AUTOMATIC1111
0cbcc4d828
Merge pull request #10611 from akx/disable-token-counters
Add option to disable token counters
2023-05-22 08:09:48 +03:00
AUTOMATIC1111
ee2f4fb92d
Merge pull request #10615 from missionfloyd/text-drag-fix
Fix dragging text to prompt
2023-05-22 07:15:44 +03:00
AUTOMATIC1111
8137bdba61
Merge branch 'dev' into text-drag-fix 2023-05-22 07:15:34 +03:00
missionfloyd
a862428902 Fix dragging text to prompt 2023-05-21 18:17:32 -06:00
AUTOMATIC
3366e494a1 option to pad prompt/neg prompt to be same length 2023-05-22 00:13:53 +03:00
Aarni Koskela
618c59b01d Add option to disable prompt token counters 2023-05-21 23:25:06 +03:00
Aarni Koskela
5ed970b949 Move token counters to separate JS file, fix names 2023-05-21 23:25:06 +03:00
AUTOMATIC
8faac8b963 run basic torch calculation at startup in parallel to reduce the performance impact of first generation 2023-05-21 21:55:14 +03:00
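A minimal sketch of the warm-up idea described above, assuming a hypothetical startup_warmup helper:

```python
import threading
import torch

def startup_warmup():
    # A trivial tensor op initializes the backend (CUDA context, kernels)
    # in the background so the first real generation doesn't pay that cost.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.ones(64, 64, device=device)
    (x @ x).sum().item()

threading.Thread(target=startup_warmup, daemon=True).start()
```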
AUTOMATIC
1f3182924b Merge branch 'dev' into release_candidate 2023-05-21 17:37:09 +03:00
AUTOMATIC
fdaf0147b6 update readme 2023-05-21 17:36:40 +03:00
AUTOMATIC
fe73d6439a Revert "change width/heights slider steps to 64 from 8"
This reverts commit 9a86932c8b.
2023-05-21 17:35:19 +03:00
AUTOMATIC
f9fe5e5f9d reworking launch.py: add references to renamed file 2023-05-21 16:27:34 +03:00
AUTOMATIC
4b07984d1b reworking launch.py: rename 2023-05-21 16:27:34 +03:00
AUTOMATIC1111
38a2324dc3
Merge pull request #10580 from akx/add-some-future-annotations
Add some future annotations
2023-05-21 13:43:29 +03:00
AUTOMATIC1111
6fe85e8d5b
Merge pull request #10581 from shinshin86/readme-mac-shortcut
[README] Update keyboard shortcut instructions for MacOS users
2023-05-21 13:42:34 +03:00
AUTOMATIC
696f16e901 revert git describe --always --tags for extensions because it seems to be causing issues 2023-05-21 13:30:09 +03:00
AUTOMATIC1111
8e9188aa5a
Merge pull request #10564 from AUTOMATIC1111/extensions-clone-depth-1
extensions clone --filter=blob:none
2023-05-21 11:06:26 +03:00
w-e-w
cd03317c05 --filter=blob:none
Co-Authored-By: Aarni Koskela <akx@iki.fi>
Co-Authored-By: catboxanon <122327233+catboxanon@users.noreply.github.com>
2023-05-21 16:42:54 +09:00
AUTOMATIC1111
40a61f54e6
Merge pull request #10586 from catboxanon/patch/fix-dpmpp_2m_sde
Discard penultimate sigma for DPM-Solver++(2M) SDE
2023-05-21 10:08:41 +03:00
catboxanon
9a442702d1
Discard penultimate sigma for dpmpp_2m_sde 2023-05-21 01:01:59 -04:00
AUTOMATIC
31545abe14 add DPM-Solver++(2M) SDE from new k-diffusion 2023-05-21 07:31:51 +03:00
AUTOMATIC
0cc05fc492 work on startup profile display 2023-05-21 00:41:41 +03:00
Aarni Koskela
df004be2fc Add a couple from __future__ import annotationses for Py3.9 compat 2023-05-21 00:26:16 +03:00
AUTOMATIC1111
3605407033
Merge pull request #10576 from catboxanon/patch/hires-prompt-edit-attn
Support edit attention keyboard shortcuts in hires fix prompts
2023-05-20 23:23:53 +03:00
catboxanon
373903d851 hiresfix prompt: add classes, update css sel 2023-05-20 19:34:50 +00:00
AUTOMATIC
05e6fc9aa9 Merge branch 'ui-selection-for-cross-attention-optimization' into dev 2023-05-20 22:29:51 +03:00
AUTOMATIC1111
cc6c0fc70a
Merge pull request #10557 from akx/dedupe-webui-boot
Refactor & deduplicate web UI boot code
2023-05-20 22:24:15 +03:00
AUTOMATIC1111
db1ce5aa26
Merge pull request #10578 from anonCantCode/dev
Preserve Python 3.9 compatibility
2023-05-20 22:11:03 +03:00
catboxanon
b2b06eee02
Support edit attn shortcut in hires fix prompts 2023-05-20 13:31:18 -04:00
shinshin86
6a676cc185 Update keyboard shortcut instructions for MacOS users in text selection guidance 2023-05-20 23:14:47 +09:00
w-e-w
bf5e5f4269 extensions clone depth 1 2023-05-20 15:08:08 +09:00
anonCantCode
0b6ca8e77b
preserve declarations 2023-05-20 11:13:03 +05:30
anonCantCode
3758744eb6
Use Optional[] to preserve Python 3.9 compatability 2023-05-20 06:27:12 +05:30
AUTOMATIC
39ec4f06ff calculate hashes for Lora
add lora hashes to infotext
when pasting infotext, use infotext's lora hashes to find local loras for <lora:xxx:1> entries whose hashes match loras the user has
2023-05-19 22:59:29 +03:00
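A rough sketch of how such per-file hashes can be computed; the truncation length and helper name are assumptions, not the webui's exact scheme:

```python
import hashlib

def lora_shorthash(path: str, length: int = 10) -> str:
    # A truncated sha256 of the file lets pasted infotext be matched
    # back to local Lora files regardless of their filenames.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:length]
```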
AUTOMATIC
87702febe0 allow hiding buttons in ui-config.json 2023-05-19 19:04:20 +03:00
AUTOMATIC1111
0d84055eb6
Merge pull request #10291 from akx/test-overhaul
Test overhaul
2023-05-19 18:59:31 +03:00
AUTOMATIC
9a86932c8b change width/heights slider steps to 64 from 8 2023-05-19 18:49:39 +03:00
AUTOMATIC
78dd988e12 simplify PR page 2023-05-19 18:47:19 +03:00
Aarni Koskela
793a491923 Overhaul tests to use py.test 2023-05-19 17:42:34 +03:00
Aarni Koskela
71f4a4afdf Deduplicate webui.py initial-load/reload code 2023-05-19 17:38:42 +03:00
Aarni Koskela
0f28aee9cd Refactor gradio auth 2023-05-19 17:35:51 +03:00
Aarni Koskela
674e80c625 Note pending PR for app_kwargs 2023-05-19 17:35:51 +03:00
Aarni Koskela
8a178e6717 Refactor configure opts_onchange out 2023-05-19 17:35:51 +03:00
Aarni Koskela
8200e0c27b Refactor configure_sigint_handler out 2023-05-19 17:35:51 +03:00
Aarni Koskela
1482c89376 Refactor validate_tls_options out, fix typo (keyfile was there twice) 2023-05-19 17:35:51 +03:00
AUTOMATIC1111
d41a31a508
Merge pull request #10552 from akx/eslint-moar
More Eslint fixes
2023-05-19 16:34:27 +03:00
AUTOMATIC1111
a6bf4aae30
Merge pull request #10550 from akx/git-blame-ignore-revs
Add .git-blame-ignore-revs
2023-05-19 16:28:22 +03:00
Aarni Koskela
4897e5277b Make load_scripts create new runners (removes reload_scripts) 2023-05-19 15:49:53 +03:00
Aarni Koskela
a0005121ae Simplify CORS middleware configuration 2023-05-19 15:37:13 +03:00
Aarni Koskela
21ee46eea7 Deduplicate default extra network registration 2023-05-19 15:35:16 +03:00
Aarni Koskela
de3abc29ae Fix typo "intialize" 2023-05-19 15:27:23 +03:00
Aarni Koskela
67d4360453 get_tab_index(): use a for loop with early-exit for performance 2023-05-19 13:06:12 +03:00
Aarni Koskela
563e88dd91 Replace args_to_array (and facsimiles) with Array.from 2023-05-19 13:05:26 +03:00
Aarni Koskela
3909c2b2a0 eslintrc: enable no-redeclare but with builtinGlobals: false 2023-05-19 12:57:38 +03:00
Aarni Koskela
247f371d3e eslintrc: mark most globals read-only 2023-05-19 12:57:38 +03:00
Aarni Koskela
958d68fb14 eslintrc: Use a file-local global comment for module 2023-05-19 12:46:44 +03:00
Aarni Koskela
208f066e0e eslintrc: Sort eslint rules 2023-05-19 12:46:41 +03:00
Aarni Koskela
2725dfd8a6 Fix ruff lint 2023-05-19 12:37:34 +03:00
Aarni Koskela
330f14d27a Add .git-blame-ignore-revs 2023-05-19 12:34:06 +03:00
lenankamp
ff6acd35d0
Update img2img.py
Hopefully corrected the whitespace issue
2023-05-19 03:20:19 -04:00
AUTOMATIC
2140bd1c10 make it actually work after suggestions 2023-05-19 10:05:07 +03:00
AUTOMATIC
994f56c3f9 linter fixes 2023-05-19 09:54:55 +03:00
AUTOMATIC1111
fe7bcbe340
Merge pull request #10534 from thot-experiment/dev
rewrite uiElementIsVisible
2023-05-19 09:53:02 +03:00
Thottyottyotty
7b61acbd35 split visibility method and sort instead
split out the visibility method for pasting and use a sort inside the paste handler to prioritize on-screen fields rather than targeting ONLY on screen fields
2023-05-18 23:43:01 -07:00
AUTOMATIC1111
1e5afd4fa9
Apply suggestions from code review
Co-authored-by: Aarni Koskela <akx@iki.fi>
2023-05-19 09:17:36 +03:00
AUTOMATIC1111
8c1148b9ea
Merge pull request #10548 from akx/spel-chek-changelog
Spel chek changelog some
2023-05-19 09:14:23 +03:00
AUTOMATIC
df6fffb054 change upscalers to download models into user-specified directory (from commandline args) rather than the default models/<...> 2023-05-19 09:09:18 +03:00
AUTOMATIC
379fd6204d make links to http://<...>.git git extensions work in the extension tab 2023-05-19 09:09:17 +03:00
Aarni Koskela
7569677e9e Spel chek changelog some 2023-05-19 08:35:16 +03:00
AUTOMATIC1111
e38e7dbfb9
Merge pull request #10529 from ryankashi/master
Added /sdapi/v1/refresh-loras api checkpoint post request
2023-05-19 08:04:13 +03:00
Thottyottyotty
e373fd0c00 rewrite uiElementIsVisible
rewrite visibility checking to be more generic/cleaner as well as add functionality to check if the element is scrolled on screen for more intuitive paste-target selection
2023-05-18 16:09:09 -07:00
ryankashi
4dd5559162 Added the refresh-loras post request 2023-05-18 14:12:01 -07:00
AUTOMATIC
8a3d232839 fix linter issues 2023-05-19 00:03:27 +03:00
AUTOMATIC
a375acdd26 update CHANGELOG 2023-05-19 00:01:52 +03:00
AUTOMATIC
a6bbc6aa8c set Navigate image viewer with gamepad option to false by default, by request 2023-05-18 23:59:31 +03:00
AUTOMATIC1111
4f42acd9ba
Merge pull request #10524 from kamnxt/fix-xyz-hashes
Use name in xyz_grid
2023-05-18 23:46:39 +03:00
Kamil Krzyżanowski
161b2944b8 Use name instead of hash in xyz_grid
X/Y/Z grid was still using the old hash, prone to collisions. This changes it to use the name instead.

Should fix #10521.
2023-05-18 22:27:04 +02:00
AUTOMATIC
3d959f5b49 Merge remote-tracking branch 'missionfloyd/extra-network-preview-lazyload' into dev 2023-05-18 23:23:13 +03:00
AUTOMATIC1111
6837cf6a8d
Merge pull request #10520 from catboxanon/dev
Remove blinking effect from text in hires fix and scale resolution preview
2023-05-18 22:58:20 +03:00
AUTOMATIC
bd877d7b5a rework #10519 2023-05-18 22:49:00 +03:00
AUTOMATIC
2582a0fd3b make it possible for scripts to add cross attention optimizations
add UI selection for cross attention optimization
2023-05-18 22:48:28 +03:00
catboxanon
36791cb6af
Fix blinking text of hr and scale res
goodbye
2023-05-18 14:04:55 -04:00
AUTOMATIC1111
2e006fa500
Merge pull request #10519 from catboxanon/patch/hires-input-release-event
Improve width/height slider responsiveness
2023-05-18 20:32:21 +03:00
AUTOMATIC
b5a0c6da37 Revert "Merge pull request #10440 from grimatoma/increaseModelPickerWidth"
This reverts commit 4b07f2f584, reversing
changes made to 4071fa4a12.
2023-05-18 20:25:33 +03:00
catboxanon
57275da903
Reorder variable assignment 2023-05-18 13:25:32 -04:00
AUTOMATIC
92902e180e bump gradio 2023-05-18 20:25:07 +03:00
AUTOMATIC
ff0e17174f rework hires prompts/sampler code to among other things support different extra networks in first/second pass
rework quoting for infotext items that have commas in them to use json (should be backwards compatible except for cases where it didn't work previously)
add some locals from processing function into the Processing class as fields
2023-05-18 20:16:09 +03:00
catboxanon
63c02314cc
.change -> .release for hires input
Improves overall UI responsiveness.
2023-05-18 13:06:13 -04:00
AUTOMATIC
5ec2c294ee Merge remote-tracking branch 'InvincibleDude/improved-hr-conflict-test' into hires-fix-ext 2023-05-18 17:57:16 +03:00
AUTOMATIC1111
3885f8a63e
Merge pull request #10381 from AUTOMATIC1111/minor-fix
Minor fix
2023-05-18 17:51:58 +03:00
AUTOMATIC
44c37f94e1 add messages about Loras that failed to load to UI 2023-05-18 16:36:30 +03:00
AUTOMATIC
cd8a510ca9 if sd_model is None, do not always try to load it 2023-05-18 15:47:43 +03:00
Sakura-Luna
96cba45d71
Modify xformers instead of pytorch 2023-05-18 17:29:47 +08:00
AUTOMATIC
ae252cd5bc add --gradio-allowed-path commandline option 2023-05-18 10:37:25 +03:00
AUTOMATIC1111
7fd80951ad
Merge pull request #10465 from baptisterajaut/master
Bump pytorch to 2.0 for AMD Users on Linux
2023-05-18 10:26:57 +03:00
AUTOMATIC1111
97e1cf69c0
Merge branch 'dev' into master 2023-05-18 10:26:35 +03:00
AUTOMATIC
bb431df52b python linter fixes 2023-05-18 10:16:33 +03:00
AUTOMATIC
f9be4dc498 keep old option for ngrok 2023-05-18 10:14:04 +03:00
AUTOMATIC1111
b4b42de9d5
Merge pull request #10438 from bobzilladev/ngrok-py
Use ngrok-py library
2023-05-18 10:12:41 +03:00
AUTOMATIC1111
182330ae40
Merge branch 'dev' into ngrok-py 2023-05-18 10:12:17 +03:00
AUTOMATIC1111
983f2c494a
Merge pull request #10499 from dongweiming/error-improvement
Error improvement for install torch
2023-05-18 10:09:23 +03:00
AUTOMATIC
bb80eea9d4 eslint the merged code 2023-05-18 10:03:48 +03:00
AUTOMATIC
c08f229318 Merge branch 'eslint' into dev 2023-05-18 10:02:17 +03:00
AUTOMATIC
57b75f4a03 eslint related file edits 2023-05-18 09:59:10 +03:00
AUTOMATIC
f88169a9e7 extend eslint config 2023-05-18 09:58:49 +03:00
Weiming
aa6e98e43c Error Improvement for install torch 2023-05-18 13:25:48 +08:00
AUTOMATIC1111
1ceb82bc74
Merge pull request #8665 from Vespinian/fix_img2img_scriptrunner_for_gui
fix [Bug]: Changed gui's img2img p.scripts from scripts_txt2img to scripts_img2img
2023-05-18 00:05:01 +03:00
AUTOMATIC
3694379f26 rework #8863 to work with all img2img tabs 2023-05-18 00:03:16 +03:00
AUTOMATIC
973ae87309 Merge remote-tracking branch 'pieresimakp/img2img-detect-image-size' into dev 2023-05-17 23:49:39 +03:00
AUTOMATIC
61ee563df9 option to specify editor height for img2img 2023-05-17 23:42:01 +03:00
AUTOMATIC
e5dd4b4ebf remove some code duplication from #9348 2023-05-17 23:27:06 +03:00
AUTOMATIC1111
1d1b5da4bf
Merge pull request #9348 from space-nuko/improve-frontend-responsiveness
Improve frontend responsiveness for some buttons
2023-05-17 23:19:08 +03:00
AUTOMATIC1111
04b4508a66
Merge branch 'dev' into improve-frontend-responsiveness 2023-05-17 23:18:56 +03:00
AUTOMATIC
b397f63e00 add option to reorder tabs
fix Reload UI not working
2023-05-17 23:11:33 +03:00
AUTOMATIC
30410fd355 simplify name pattern setting tooltips 2023-05-17 22:54:45 +03:00
AUTOMATIC1111
6a13c416f6
Merge pull request #10222 from AUTOMATIC1111/readme-simple-installation-method
add documentation for simple installation method using release package
2023-05-17 22:51:17 +03:00
AUTOMATIC
ad3a7f2ab9 alternative solution to fix styles load when edited by human #9765 as suggested by akx 2023-05-17 22:50:08 +03:00
AUTOMATIC
f6fc7916c4 add /sdapi/v1/script-info api 2023-05-17 22:43:24 +03:00
AUTOMATIC
8fe9ea7f4d add options to show/hide hidden files and dirs, and to not list models/files in hidden directories 2023-05-17 21:45:26 +03:00
AUTOMATIC
a6b618d072 use a single function for saving images with metadata both in extra networks and main mode for #10395 2023-05-17 21:03:41 +03:00
AUTOMATIC1111
9c91a86720
Merge pull request #10395 from wk5ovc/patch-4
Fix extra networks save preview image geninfo
2023-05-17 20:42:37 +03:00
AUTOMATIC1111
6b51cc7530
Merge pull request #10400 from AUTOMATIC1111/Sakura-Luna-patch-1
Add Python version
2023-05-17 20:34:45 +03:00
AUTOMATIC
f6a622bcef isn't there something you forgot, #10483? 2023-05-17 20:27:48 +03:00
AUTOMATIC1111
987c1f7d9f
Merge pull request #10483 from Iheuzio/syntax-search
Fix typo in syntax
2023-05-17 20:27:14 +03:00
AUTOMATIC
9fd6c1e343 move some settings to the new Optimization page
add slider for token merging for img2img
rework StableDiffusionProcessing to have the token_merging_ratio field
fix a bug with applying png optimizations for live previews when they shouldn't be applied
2023-05-17 20:22:54 +03:00
Iheuzio
f5092164e8 Fix typo in syntax 2023-05-17 12:51:54 -04:00
AUTOMATIC1111
f6c06e3ed2
Merge pull request #10458 from akx/graceful-stop
Graceful server stopping
2023-05-17 18:45:40 +03:00
AUTOMATIC
216b0fa6c9 when adding tooltips, do not scan whole document and instead only scan added elements 2023-05-17 18:26:53 +03:00
AUTOMATIC1111
3c81d184c0
Merge pull request #10414 from AUTOMATIC1111/xyz-token-merging
xyz token merging
2023-05-17 18:06:55 +03:00
AUTOMATIC
76ebf750a4 use a local variable instead of dictionary entry for sd_merge_models in merge model metadata code 2023-05-17 17:44:07 +03:00
AUTOMATIC1111
36c14831b3
Merge pull request #10473 from dongweiming/fix-10460
Fix #10460
2023-05-17 17:42:25 +03:00
Weiming
95cb492e41 Fixed: #10460 2023-05-17 22:35:59 +08:00
AUTOMATIC
f8ca37b903 fix inability to run with --freeze-settings 2023-05-17 17:07:11 +03:00
Aarni Koskela
9c54b78d9d Run eslint --fix (and normalize tabs to spaces) 2023-05-17 16:09:06 +03:00
Aarni Koskela
4f11f285f9 Add ESLint to CI 2023-05-17 16:09:06 +03:00
Aarni Koskela
13f4c62ba3 Add basic ESLint configuration for formatting
This doesn't enable any of ESLint's actual possible-issue linting,
but just style normalization based on the Prettier configuration (but without line length limits).
2023-05-17 16:09:06 +03:00
AUTOMATIC1111
b4703b788b
Merge pull request #10461 from akx/processed-s-min-uncond
Copy s_min_uncond to Processed
2023-05-17 15:08:14 +03:00
AUTOMATIC
1210548cba simplify single_sample_to_image 2023-05-17 14:53:39 +03:00
AUTOMATIC1111
875ccc27f6
Merge pull request #10467 from Sakura-Luna/taesd-a
Tiny AE fix
2023-05-17 14:45:38 +03:00
Sakura-Luna
7a13a3f4ba TAESD fix 2023-05-17 17:39:07 +08:00
Baptiste Rajaut
484948f5c0
Fixing webui.sh
If only I proofread what I wrote
2023-05-17 11:10:57 +02:00
Baptiste Rajaut
b3397c2492
Bump pytorch for AMD Users
So apparently it works now? Before, you would get "PyTorch can't use the GPU", but not anymore.
2023-05-17 11:01:33 +02:00
Aarni Koskela
315f109427 Copy s_min_uncond to Processed
Should fix #10416
2023-05-17 10:26:32 +03:00
Aarni Koskela
875990a232 Add option for /_stop route (for graceful shutdown) 2023-05-17 10:15:08 +03:00
Aarni Koskela
85b4f89926 Replace state.need_restart with state.server_command + replace poll loop with signal 2023-05-17 10:15:03 +03:00
AUTOMATIC1111
9ac85b8b73
Merge pull request #10365 from Sakura-Luna/taesd-a
Add Tiny AE live preview
2023-05-17 09:26:50 +03:00
AUTOMATIC1111
85232a5b26
Merge branch 'dev' into taesd-a 2023-05-17 09:26:26 +03:00
AUTOMATIC
56a2672831 return live preview defaults to how they were
only download TAESD model when it's needed
return calculations in single_sample_to_image to just if/elif/elif blocks
keep taesd model in its own directory
2023-05-17 09:24:01 +03:00
AUTOMATIC
b217ebc490 add credits 2023-05-17 08:41:21 +03:00
AUTOMATIC1111
4b07f2f584
Merge pull request #10440 from grimatoma/increaseModelPickerWidth
Remove max width for model dropdown
2023-05-17 08:27:25 +03:00
AUTOMATIC1111
4071fa4a12
Merge pull request #10451 from dennissheng/master
not clear checkpoints cache when config changes
2023-05-17 08:24:56 +03:00
AUTOMATIC1111
0003b29044
Merge pull request #10452 from dongweiming/fix-neg
Fix remove `textual inversion` prompt
2023-05-17 08:18:54 +03:00
dennissheng
54f657ffbc not clear checkpoints cache when config changes 2023-05-17 10:47:02 +08:00
Weiming
e378590d33 Fix remove textual inversion prompt 2023-05-17 10:20:11 +08:00
grimatoma
a4d5fdd3c2 Remove max width for model dropdown
Removing the max width for the model dropdown allows the user to see the full name of a model especially when it is long.
Model names are getting more complex and longer and the current width almost always cuts off model names.
If a user leverages folders then it pretty much always cuts off the name...
2023-05-16 13:32:32 -07:00
bobzilladev
0d31f20cbd Use ngrok-py library 2023-05-16 16:09:41 -04:00
lenankamp
bbce167305
Recursive batch img2img.py
Searches subdirectories and performs img2img batch processing, limiting inputs to jpg, webp, and png. Then saves to the output directory with relative paths.
2023-05-16 14:37:45 -04:00
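A sketch of the described directory walk, assuming a hypothetical iter_batch_inputs helper:

```python
import os

ALLOWED = {".jpg", ".jpeg", ".webp", ".png"}

def iter_batch_inputs(input_dir: str):
    # Walk subdirectories, keep only supported image types, and yield each
    # file with its path relative to the input root so outputs can mirror
    # the directory structure.
    for root, _dirs, files in os.walk(input_dir):
        for name in files:
            if os.path.splitext(name)[1].lower() in ALLOWED:
                full = os.path.join(root, name)
                yield full, os.path.relpath(full, input_dir)
```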
Sakura-Luna
4fb2cc0f06 Minor change 2023-05-17 00:32:32 +08:00
AUTOMATIC
ce38ee8f26 add info link for Negative Guidance minimum sigma 2023-05-16 15:41:49 +03:00
AUTOMATIC
6302978ff8 restore nqsp in footer that was lost during linting 2023-05-16 15:14:44 +03:00
AUTOMATIC
a61cbef02c add second_order field to sampler config 2023-05-16 12:36:15 +03:00
AUTOMATIC
cdac5ace14 suppress ENSD infotext for samplers that don't use it 2023-05-16 11:54:02 +03:00
AUTOMATIC
3d76eabbca add visual progress for extension installation from URL 2023-05-16 07:59:43 +03:00
AUTOMATIC
a47abe1b7b update extensions table: show branch, show date in separate column, and show version from tags if available 2023-05-15 21:22:35 +03:00
AUTOMATIC
0d2a4b608c load extensions' git metadata in parallel to loading the main program to save a ton of time during startup 2023-05-15 20:57:11 +03:00
AUTOMATIC
0d3a80e269 Show "Loading..." for extra networks when displaying for the first time 2023-05-15 20:33:44 +03:00
w-e-w
9e90907532 xyz token merging 2023-05-16 02:02:51 +09:00
Sakura-Luna
32af211f4c
Add Python version
Many users still use unverified versions of Python and file version-specific issues, often without mentioning version information, making troubleshooting difficult.
2023-05-15 15:42:37 +08:00
Keith
f517838c75
Fix extra networks save preview image geninfo 2023-05-15 10:47:01 +08:00
Sakura-Luna
742da31932
Minor changes 2023-05-15 03:04:34 +08:00
Sakura-Luna
9a9557ecfc
Change to extra-index-url 2023-05-15 03:00:23 +08:00
Sakura-Luna
38583be7af
Revert Gradio version 2023-05-15 02:37:43 +08:00
AUTOMATIC1111
f6a2a98f1a
Merge pull request #10379 from AUTOMATIC1111/Sakura-Luna-patch-1
Add GPU device
2023-05-14 21:18:55 +03:00
AUTOMATIC1111
d7d378eda1
Merge pull request #10384 from akx/no-shell
launch.py: Don't involve shell for running Python or getting Git output
2023-05-14 21:18:09 +03:00
Aarni Koskela
d9968e6108 launch.py: Don't involve shell for running Python or Git for output
Fixes Linux regression in 451d255b58
2023-05-14 20:39:19 +03:00
AUTOMATIC1111
1b7e787733
Merge pull request #10382 from AUTOMATIC1111/fix_xyz_checkpoint
fix xyz checkpoint
2023-05-14 19:01:36 +03:00
w-e-w
a98ae89bde fix xyz checkpoint 2023-05-15 00:31:34 +09:00
Sakura-Luna
b023940032
Update bug_report.yml 2023-05-14 22:39:38 +08:00
Sakura-Luna
f29c41bf6d
Modify pytorch command 2023-05-14 22:29:28 +08:00
Sakura-Luna
ef046fae39
Downgrade Gradio 2023-05-14 22:26:43 +08:00
Sakura-Luna
efe81620a0
Add GPU device
Add GPU option to troubleshoot.
2023-05-14 22:17:36 +08:00
AUTOMATIC
7001e1ed61 Merge branch 'master' into dev 2023-05-14 13:36:16 +03:00
AUTOMATIC
89f9faa633 Merge branch 'release_candidate' 2023-05-14 13:35:07 +03:00
AUTOMATIC
dbd13dee3a update readme for release 2023-05-14 13:34:50 +03:00
AUTOMATIC
b9abdb50a3 add a possible fix for 'LatentDiffusion' object has no attribute 'lora_layer_mapping' 2023-05-14 13:31:03 +03:00
AUTOMATIC
1a43524018 fix model loading twice in some situations 2023-05-14 13:27:50 +03:00
AUTOMATIC1111
5f5435eb1a
Merge pull request #10218 from micky2be/find_vae
Files in vae folder with same name as a checkpoint can be found too
2023-05-14 11:46:36 +03:00
AUTOMATIC1111
80adb6979d
Merge branch 'dev' into find_vae 2023-05-14 11:46:27 +03:00
AUTOMATIC1111
3ddc763422
Merge pull request #10367 from AUTOMATIC1111/jpeg-extra-network-preview
allow jpeg for extra network preview
2023-05-14 11:40:03 +03:00
AUTOMATIC
a58ae0b717 remove auto live previews format option, fix slow PNG generation 2023-05-14 11:15:15 +03:00
AUTOMATIC
a00e42556f add a bunch of descriptions and reword a lot of settings (sorry, localizers) 2023-05-14 11:04:21 +03:00
w-e-w
a423f23d28 allow jpeg for extra network preview 2023-05-14 16:22:40 +09:00
AUTOMATIC
ce515b81c5 set up a system to provide extra info for settings elements in python rather than js
add a bit of spacing/styling to settings elements
add link info for token merging
2023-05-14 10:02:51 +03:00
Sakura-Luna
bd9b9d425a Add live preview mode check 2023-05-14 14:06:02 +08:00
Sakura-Luna
e14b586d04 Add Tiny AE live preview 2023-05-14 14:06:01 +08:00
AUTOMATIC
2cfaffb239 updates for #9256 2023-05-14 08:30:37 +03:00
AUTOMATIC1111
7f6ef764b9
Merge pull request #9256 from papuSpartan/tomesd
Integrate optional speed and memory improvements by token merging (via dbolya/tomesd)
2023-05-14 08:21:02 +03:00
AUTOMATIC
005849331e remove output_altered flag from AfterCFGCallbackParams 2023-05-14 08:15:22 +03:00
AUTOMATIC1111
cb9a3a7809
Merge pull request #10357 from catboxanon/sag
Add/modify CFG callbacks for Self-Attention Guidance extension
2023-05-14 08:06:45 +03:00
AUTOMATIC1111
4051d51caf
Merge pull request #10292 from akx/smol-bump
Bump some versions to avoid downgrading them
2023-05-14 07:59:28 +03:00
Sakura-Luna
8abfc95013
Update script_callbacks.py 2023-05-14 12:56:34 +08:00
catboxanon
3078001439 Add/modify CFG callbacks
Required by self-attn guidance extension
https://github.com/ashen-sensored/sd_webui_SAG
2023-05-14 01:49:41 +00:00
AUTOMATIC
d7e9ac2aff update readme 2023-05-13 20:47:32 +03:00
AUTOMATIC1111
86ff43b930 Merge pull request #10335 from akx/l10n-dis-take-2
Localization fixes
2023-05-13 20:46:50 +03:00
AUTOMATIC
e8eea1bb7a Merge branch 'release_candidate' into dev 2023-05-13 20:26:13 +03:00
AUTOMATIC
2053745c8f Merge branch 'v1.2.0-hotfix' into release_candidate 2023-05-13 20:25:03 +03:00
AUTOMATIC
27f7fbf35c update readme 2023-05-13 20:24:48 +03:00
AUTOMATIC1111
12c78138dd Merge pull request #10324 from catboxanon/offline
Allow web UI to be run fully offline
2023-05-13 20:22:09 +03:00
AUTOMATIC1111
063848798c Merge pull request #10339 from catboxanon/bf16
Allow bf16 in safe unpickler
2023-05-13 20:21:39 +03:00
AUTOMATIC
7e3539df6f fix upscalers disappearing after the user reloads UI 2023-05-13 20:21:11 +03:00
AUTOMATIC
477199357f add an option to always refer to lora by filenames
never refer to a lora by an alias if multiple loras have the same alias or the alias is called none
2023-05-13 20:15:37 +03:00
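The fallback rule, sketched as a hypothetical resolver (names and signature are illustrative):

```python
def lora_reference(filename: str, alias: str, alias_counts: dict,
                   always_use_filenames: bool) -> str:
    # Fall back to the filename when the option is on, when the alias is
    # missing or literally "none", or when it is ambiguous (shared by
    # several loras).
    if always_use_filenames or not alias or alias.lower() == "none":
        return filename
    if alias_counts.get(alias, 0) > 1:
        return filename
    return alias
```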
AUTOMATIC1111
d70c3a807b
Merge pull request #10339 from catboxanon/bf16
Allow bf16 in safe unpickler
2023-05-13 19:45:18 +03:00
AUTOMATIC1111
cdb1ffb2f4
Merge pull request #10335 from akx/l10n-dis-take-2
Localization fixes
2023-05-13 19:44:55 +03:00
AUTOMATIC1111
23b62afc72
Merge pull request #10324 from catboxanon/offline
Allow web UI to be run fully offline
2023-05-13 19:43:15 +03:00
Aarni Koskela
cd6990c243 Make dump translations work again 2023-05-13 19:22:39 +03:00
Aarni Koskela
1f57b948b7 Move localization to its own script block and load it first 2023-05-13 19:15:13 +03:00
papuSpartan
c2fdb44880 fix for img2img 2023-05-13 11:11:02 -05:00
papuSpartan
917faa5325 move to stable-diffusion tab 2023-05-13 10:26:09 -05:00
papuSpartan
ac83627a31 heavily simplify 2023-05-13 10:23:42 -05:00
catboxanon
cb5f61281a
Allow bf16 in safe unpickler 2023-05-13 11:04:26 -04:00
papuSpartan
55e52c878a remove command line option 2023-05-13 09:24:56 -05:00
catboxanon
867c8a1083 minor fix 2023-05-13 12:59:00 +00:00
catboxanon
5afc44aab1 Requested changes 2023-05-13 12:57:32 +00:00
Aarni Koskela
999a03e4a7 Wait for DOMContentLoaded until checking whether localization should be disabled
Refs https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9955#issuecomment-1546587143
2023-05-13 15:12:30 +03:00
AUTOMATIC
2b3fc246b0 Merge branch 'master' into dev 2023-05-13 08:19:21 +03:00
AUTOMATIC
4135937876 update changelog for release 2023-05-13 08:18:49 +03:00
AUTOMATIC
d274b8297e fix broken prompts from file 2023-05-13 08:18:49 +03:00
AUTOMATIC
b08500cec8 Merge branch 'release_candidate' 2023-05-13 08:16:37 +03:00
AUTOMATIC
231562ea13 update changelog for release 2023-05-13 08:16:20 +03:00
catboxanon
867be74244 Define default fonts for Gradio theme
Allows web UI to (almost) be run fully offline.
The web UI will hang on load if offline when
these fonts are not manually defined, as it will attempt (and fail)
to pull from Google Fonts.
2023-05-12 18:08:34 +00:00
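A sketch of the kind of theme override described, assuming Gradio's themes API; the font families listed are illustrative, not the webui's actual choices:

```python
import gradio as gr

# Listing only locally available font families stops Gradio from trying
# to fetch Google Fonts at startup, which is what hangs the UI offline.
theme = gr.themes.Default(
    font=["ui-sans-serif", "system-ui", "sans-serif"],
    font_mono=["ui-monospace", "Consolas", "monospace"],
)
```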
catboxanon
b14e23529f Redirect Gradio phone home request
This request is sent regardless of Gradio analytics being
enabled or not via the env var.
Idea from text-generation-webui.
2023-05-12 18:06:13 +00:00
AUTOMATIC1111
9080af56dd
Merge pull request #10321 from akx/fix-launch-git-get
launch.py: fix git_tag() & fix commit_hash() & simplify
2023-05-12 21:01:37 +03:00
AUTOMATIC1111
b4ad31ddd4
Merge pull request #10318 from brkirch/set-pytorch-201-mac
Set PyTorch version to 2.0.1 for macOS
2023-05-12 20:56:42 +03:00
Aarni Koskela
451d255b58 Get rid of check_run + run_python 2023-05-12 20:54:06 +03:00
Aarni Koskela
55d222a9f4 launch.py: make git_tag() and commit_hash() work even when WEBUI_LAUNCH_LIVE_OUTPUT 2023-05-12 20:54:06 +03:00
brkirch
0cab07b2f1 Set PyTorch version to 2.0.1 for macOS 2023-05-12 11:15:43 -04:00
AUTOMATIC1111
54c84e63b3
Merge pull request #10317 from AUTOMATIC1111/fix-COMMANDLINE_ARGS--data-dir
fix --data-dir ignored when launching via webui-user.bat COMMANDLINE_ARGS
2023-05-12 16:50:42 +03:00
w-e-w
681c16dd1e fix --data-dir for COMMANDLINE_ARGS
move reading of COMMANDLINE_ARGS into paths_internal.py so --data-dir can be properly read
2023-05-12 22:33:21 +09:00
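One way to sketch that pattern with argparse's parse_known_args (names and default are illustrative): a pre-parser reads only --data-dir before the full CLI is built, leaving the remaining arguments for the main parser.

```python
import argparse

pre_parser = argparse.ArgumentParser(add_help=False)
pre_parser.add_argument("--data-dir", type=str, default=".")

# parse_known_args ignores options the pre-parser doesn't know about,
# so path setup can run before the main parser even exists.
known_args, remaining = pre_parser.parse_known_args()
data_dir = known_args.data_dir
```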
papuSpartan
75b3692920 Merge branch 'dev' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into tomesd 2023-05-11 22:40:17 -05:00
Aarni Koskela
d4bd67bd67 Bump versions to avoid downgrading them 2023-05-11 23:12:43 +03:00
AUTOMATIC1111
abe32cefa3
Merge pull request #10285 from akx/ruff-spacing
Indentation + ruff whitespace fixes
2023-05-11 21:25:15 +03:00
AUTOMATIC1111
b4aaa339d5
Merge pull request #10290 from akx/smart-live-preview-format
Make live previews use JPEG only for large images
2023-05-11 21:24:44 +03:00
Aarni Koskela
da10de022f Make live previews use JPEG only when the image is large enough 2023-05-11 20:54:40 +03:00
Aarni Koskela
49a55b410b Autofix Ruff W (not W605) (mostly whitespace) 2023-05-11 20:29:11 +03:00
AUTOMATIC1111
ba7ae7b948
Merge pull request #10286 from catboxanon/patch/extra-networks-symlinks
Fix symlink scanning for extra networks
2023-05-11 19:47:15 +03:00
catboxanon
cb3f8ff59f Fix symlink scanning 2023-05-11 15:55:43 +00:00
Aarni Koskela
431bc5a297 Reindent utils_test with 4 spaces 2023-05-11 18:26:34 +03:00
Aarni Koskela
098d2fda52 Reindent autocrop with 4 spaces 2023-05-11 18:26:04 +03:00
AUTOMATIC
8ca50f8240 fix broken prompts from file 2023-05-11 14:49:14 +03:00
AUTOMATIC
483545252f fix broken prompts from file 2023-05-11 14:24:22 +03:00
AUTOMATIC
0bfaf613a8 put the star where it belongs 2023-05-11 13:31:56 +03:00
AUTOMATIC1111
fb366891ab
Merge pull request #10274 from akx/torch-cpu-for-tests
Use CPU Torch in CI, etc.
2023-05-11 12:50:00 +03:00
Aarni Koskela
5b592669f9 CI: use launch.py for dependencies too 2023-05-11 11:57:46 +03:00
Aarni Koskela
c702010e57 CI: use CPU wheel repo for PyTorch 2023-05-11 11:57:46 +03:00
Aarni Koskela
dd3ca9adf7 launch.py: make torch_index_url an envvar 2023-05-11 11:57:46 +03:00
Aarni Koskela
a09e1e6e18 launch.py: Use GitHub archive URLs for gfpgan, clip, openclip instead of git clones 2023-05-11 11:57:43 +03:00
Aarni Koskela
875bc27009 launch.py: Simplify run() 2023-05-11 11:57:41 +03:00
Aarni Koskela
49db24ce27 launch.py: Add debugging envvar to see install output 2023-05-11 11:57:36 +03:00
AUTOMATIC1111
4445314c68
Merge pull request #10273 from AUTOMATIC1111/roboto-without-a-dep
Vendor Roboto font
2023-05-11 11:25:23 +03:00
AUTOMATIC
87c3aa7389 return wrap_gradio_gpu_call to webui.py for extensions 2023-05-11 10:09:42 +03:00
Aarni Koskela
1332c46b71 Drop fonts + font-roboto deps since we only use the single regular cut of Roboto 2023-05-11 10:07:28 +03:00
Aarni Koskela
df7070eca2 Deduplicate get_font code 2023-05-11 10:06:19 +03:00
Aarni Koskela
16e4d79122 paths_internal: deduplicate modules_path 2023-05-11 10:05:39 +03:00
AUTOMATIC1111
3bb964d806
Merge pull request #10272 from AUTOMATIC1111/clean-fid
Update clean-fid to loosen transitive dependency pins
2023-05-11 09:36:08 +03:00
Sakura-Luna
1dcd672324
Update sd_vae.py
There is no need to use split.
2023-05-11 14:29:52 +08:00
Aarni Koskela
ef11c197b3 Update clean-fid to loosen transitive dependency pins
Diff: bd92e684ff...c8ffa420a3
2023-05-11 08:48:08 +03:00
AUTOMATIC1111
fe5d988947
Merge pull request #10268 from Sakura-Luna/pbar
UniPC progress bar adjustment
2023-05-11 08:16:36 +03:00
AUTOMATIC
b7e160a87d change live preview format to jpeg to prevent unreasonably slow previews for large images, and add an option to let user select the format 2023-05-11 08:14:45 +03:00
AUTOMATIC
e334758ec2 repair #10266 2023-05-11 07:45:05 +03:00
Sakura-Luna
ae17e97898 UniPC progress bar adjustment 2023-05-11 12:28:26 +08:00
AUTOMATIC1111
c9e5b92106
Merge pull request #10266 from nero-dv/dev
Update sub_quadratic_attention.py
2023-05-11 07:21:18 +03:00
Louis Del Valle
c8732dfa6f
Update sub_quadratic_attention.py
1. Determine the number of query chunks.
2. Calculate the final shape of the res tensor.
3. Initialize the tensor with the calculated shape and dtype (usually the same dtype as the input tensors).

Can initialize the tensor as a zero-filled tensor with the correct shape and dtype, then compute the attention scores for each query chunk and fill the corresponding slice of the tensor.
2023-05-10 22:05:18 -05:00
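A simplified sketch of the described buffer-filling approach; this is not the webui's actual sub-quadratic attention (which also chunks keys/values), just the allocate-once-then-fill-slices idea:

```python
import math
import torch

def chunked_attention(q, k, v, chunk_size):
    # Allocate the full result tensor once (zero-filled, matching dtype
    # and device), then compute attention per query chunk and write each
    # chunk's slice instead of concatenating partial results.
    n_chunks = math.ceil(q.shape[1] / chunk_size)
    res = torch.zeros(q.shape[0], q.shape[1], v.shape[2],
                      dtype=q.dtype, device=q.device)
    scale = q.shape[2] ** -0.5
    for i in range(n_chunks):
        s = i * chunk_size
        qc = q[:, s:s + chunk_size]
        attn = torch.softmax(qc @ k.transpose(1, 2) * scale, dim=-1)
        res[:, s:s + qc.shape[1]] = attn @ v
    return res
```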
AUTOMATIC
8aa87c564a add UI to edit defaults
allow setting defaults for elements in extensions' tabs
fix a problem with ESRGAN upscalers disappearing after UI reload
implicit change: HTML element id for train tab from tab_ti to tab_train (will this break things?)
2023-05-10 23:41:08 +03:00
AUTOMATIC1111
5abecea34c
Merge pull request #10259 from AUTOMATIC1111/ruff
Ruff
2023-05-10 21:24:18 +03:00
AUTOMATIC
3ec7b705c7 suggestions and fixes from the PR 2023-05-10 21:21:32 +03:00
AUTOMATIC
d25219b7e8 manual fixes for some C408 2023-05-10 11:55:09 +03:00
AUTOMATIC
a5121e7a06 fixes for B007 2023-05-10 11:37:18 +03:00
AUTOMATIC
550256db1c ruff manual fixes 2023-05-10 11:19:16 +03:00
AUTOMATIC
028d3f6425 ruff auto fixes 2023-05-10 11:05:02 +03:00
AUTOMATIC
e42de4b8a2 update ruff config with more stuff 2023-05-10 11:00:07 +03:00
AUTOMATIC
57ef617251 integrate the PR's config 2023-05-10 09:09:41 +03:00
AUTOMATIC1111
837d3a94b7
Merge pull request #10233 from akx/fix-lint-ci
Replace pylint CI with ruff
2023-05-10 09:06:54 +03:00
AUTOMATIC
4b854806d9 F401 fixes for ruff 2023-05-10 09:02:23 +03:00
AUTOMATIC
f741a98bac imports cleanup for ruff 2023-05-10 08:43:42 +03:00
AUTOMATIC
96d6ca4199 manual fixes for ruff 2023-05-10 08:25:25 +03:00
AUTOMATIC
762265eab5 autofixes from ruff 2023-05-10 07:52:45 +03:00
AUTOMATIC
a617d64882 add ruff config 2023-05-10 07:43:55 +03:00
AUTOMATIC
f5ea1e9d92 bump torch version 2023-05-10 07:26:42 +03:00
AUTOMATIC
d50b95b5a3 fix an issue preventing the program from starting if the user specifies a bad gradio theme 2023-05-10 07:14:13 +03:00
AUTOMATIC
921dc4639b Merge branch 'dev' into release_candidate 2023-05-10 06:53:25 +03:00
AUTOMATIC
f07af8db64 bump gradio version for all suffering musicians 2023-05-10 06:52:51 +03:00
Aarni Koskela
990ca80cb6 Replace pylint CI with ruff 2023-05-09 23:13:47 +03:00
AUTOMATIC
c8791c1d37 Merge branch 'dev' into release_candidate 2023-05-09 22:42:37 +03:00
AUTOMATIC
31397986e7 changelog 2023-05-09 22:42:02 +03:00
AUTOMATIC1111
d6a9b22c19
Merge pull request #10232 from akx/eff
Fix up string formatting/concatenation to f-strings where feasible
2023-05-09 22:40:51 +03:00
AUTOMATIC1111
ccbb361845
Merge pull request #10209 from AUTOMATIC1111/quicksettings-migration
1.1.1 quicksettings list migration
2023-05-09 22:29:08 +03:00
Aarni Koskela
3ba6c3c83c Fix up string formatting/concatenation to f-strings where feasible 2023-05-09 22:25:39 +03:00
w-e-w
81bbe31d9f add documentation for simple installation method using release package 2023-05-10 00:04:36 +09:00
Micky Brunetti
749a93295e
remove logs 2023-05-09 15:43:58 +02:00
Micky Brunetti
7fd3a4e6d7
files in vae folder with same name as a checkpoint can be found too 2023-05-09 15:35:57 +02:00
AUTOMATIC1111
8fb16ceb28
Merge pull request #10214 from AUTOMATIC1111/refresh-fix
Refresh fix
2023-05-09 15:29:48 +03:00
Sakura-Luna
e7dbefc340
refresh fix 2023-05-09 19:06:00 +08:00
w-e-w
d1ff57e1cb 1.1.1 quicksettings list migration 2023-05-09 18:14:12 +09:00
AUTOMATIC
ad6ec02261 prevent Reload UI button/link from reloading the page when it's not yet ready 2023-05-09 11:42:47 +03:00
AUTOMATIC
eb95809501 rework loras api 2023-05-09 11:25:46 +03:00
AUTOMATIC1111
7e02a00c81
Merge pull request #10194 from DumoeDss/dev
Add api method to get LoRA models with prompt
2023-05-09 11:12:13 +03:00
AUTOMATIC
11ae5399f6 make it so that custom context menu from contextMenu.js only disappears after user's click, ignoring non-user click events 2023-05-09 10:52:14 +03:00
AUTOMATIC1111
ea05ddfec8
Merge pull request #10201 from brkirch/mps-nan-fixes
Fix MPS on PyTorch 2.0.1, Intel Macs
2023-05-09 10:28:24 +03:00
brkirch
de401d8ffb Fix generation with k-diffusion/UniPC on x64 Macs 2023-05-09 01:10:13 -04:00
brkirch
9efb809f7c Remove PyTorch 2.0 check
Apparently the commit in the main branch of pytorch/pytorch that fixes this issue didn't make it into PyTorch 2.0.1, and since it is unclear exactly which release will have it, we'll just always apply the workaround so a crash doesn't occur regardless.
2023-05-09 01:10:13 -04:00
AUTOMATIC
2b96a7b694 add links to wiki for filename pattern settings
add extended info for quicksettings setting
2023-05-08 16:46:35 +03:00
AUTOMATIC
5edb0acfeb use multiselect for quicksettings (this also resets the existing setting) 2023-05-08 15:38:25 +03:00
Sayo
f9abe4cddc Add api method to get LoRA models with prompt 2023-05-08 20:38:10 +08:00
AUTOMATIC
fc966c0299 do not show licenses page when user selects Show all pages in settings 2023-05-08 15:30:32 +03:00
AUTOMATIC
eabea24eb8 put infotext options into their own category in settings tab 2023-05-08 15:26:23 +03:00
AUTOMATIC
ab4ab4e595 add version to infotext, footer and console output when starting 2023-05-08 15:23:49 +03:00
brkirch
7aab389d6f Fix for Unet NaNs 2023-05-08 08:16:56 -04:00
AUTOMATIC
505a10ad92 use file modification time instead of current time for #9760 2023-05-08 15:09:20 +03:00
AUTOMATIC1111
879ed5422c
Merge pull request #9760 from Sakura-Luna/refresh
Fix gallery not being refreshed correctly
2023-05-08 15:06:02 +03:00
AUTOMATIC1111
b3a44385b1
Merge pull request #10025 from acncagua/Upscaler_initialization
Initialize the upscalers
2023-05-08 15:03:59 +03:00
Sayo
34a82a345a Add api method to get LoRA models 2023-05-08 19:55:05 +08:00
AUTOMATIC
6a5901a3fd update changelog 2023-05-08 12:45:22 +03:00
AUTOMATIC
f62540b2d2 Revert "add mtime to served images in gallery to prevent cache from showing old images"
This reverts commit 669b518cbd.
2023-05-08 12:18:22 +03:00
AUTOMATIC
18fb2162a4 disable useless progress display when pasting infotext using the blur button 2023-05-08 12:17:36 +03:00
AUTOMATIC
ec0da07236 Lora: add an option to use old method of applying loras 2023-05-08 12:07:43 +03:00
AUTOMATIC
083dc3c76a directory hiding for extra networks: dirs starting with . will hide their cards on extra network tabs unless specifically searched for
create HTML for extra network pages only on demand
allow directories starting with . to still list their models for lora, checkpoints, etc
keep "search" filter for extra networks when user refreshes the page
2023-05-08 11:33:45 +03:00
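The hiding rule, sketched as a hypothetical predicate (the webui's real implementation spans its extra networks pages):

```python
from pathlib import Path

def card_is_visible(model_path: str, search_text: str = "") -> bool:
    # Any path component starting with "." hides the card, unless the
    # user's search text explicitly matches the path.
    hidden = any(part.startswith(".") for part in Path(model_path).parts)
    return (not hidden) or bool(search_text and search_text in model_path)
```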
AUTOMATIC1111
855f83f92c
Merge pull request #10041 from AUTOMATIC1111/print-exception-#9219
print PIL.UnidentifiedImageError
2023-05-08 09:12:56 +03:00
AUTOMATIC1111
d1d7dd2a44
Merge pull request #10067 from dongweiming/x-y-z-plot
Add extra `None` option for VAE
2023-05-08 09:03:59 +03:00
AUTOMATIC1111
fa21e6ae63
Merge pull request #9616 from Sakura-Luna/tooltip
Tooltip localization support
2023-05-08 09:01:33 +03:00
AUTOMATIC1111
73d956454f
Merge branch 'dev' into tooltip 2023-05-08 09:01:25 +03:00
AUTOMATIC1111
b15bbef798
Merge pull request #10089 from AUTOMATIC1111/LoraFix
Fix some Lora's not working
2023-05-08 08:45:26 +03:00
AUTOMATIC1111
67c884196d
Merge pull request #9955 from akx/noop-localization-unless-required
Make localization.js do nothing if there's no localization to do
2023-05-08 08:43:11 +03:00
AUTOMATIC
669b518cbd add mtime to served images in gallery to prevent cache from showing old images 2023-05-08 08:36:24 +03:00
AUTOMATIC
e4a66bb8e3 make lightbox properly display whole picture without cutting off parts when the picture is very wide. 2023-05-08 08:21:38 +03:00
AUTOMATIC1111
a6529a78c3
Merge pull request #10113 from missionfloyd/extras-thumbnails
Fix stretched thumbnails on extras tab
2023-05-08 08:21:24 +03:00
AUTOMATIC1111
0141ab1387
Merge pull request #10140 from Archeb/patch-1
style.css: Make the image in the ImageViewer be resized correctly
2023-05-08 08:15:36 +03:00
AUTOMATIC1111
66428667c5
Merge pull request #10146 from missionfloyd/gamepad-option
Fix gamepad navigation
2023-05-08 08:09:12 +03:00
AUTOMATIC1111
6ac33fe9d1
Merge pull request #10133 from AUTOMATIC1111/filename-pattern-denoising_strength
Filename pattern denoising strength
2023-05-08 08:04:02 +03:00
AUTOMATIC
160780283a put all code for /docs in same place and make it work properly with UI reloads 2023-05-08 07:57:17 +03:00
AUTOMATIC1111
064eda930c
Merge pull request #10168 from mouhao/master
Fix missing /docs endpoint in newer gradio versions
2023-05-08 07:47:06 +03:00
AUTOMATIC
2473bafa67 read infotext params from the other extension for Lora if it's not active 2023-05-08 07:28:30 +03:00
mouhao
0cb582b50c
Merge pull request #1 from mouhao/mouhao-patch-1
Update webui.py
2023-05-07 21:03:28 +08:00
mouhao
5427b7128d
Update webui.py
Fix missing /docs endpoint in newer gradio versions
Newer versions of gradio (>=3.27.1) have removed the /docs endpoint by default. This commit adds it back to enable accessing the API documentation.
2023-05-07 20:54:48 +08:00
AUTOMATIC
2cb3b0be1d if present, use Lora's "ss_output_name" field to refer to it in prompt 2023-05-07 08:25:34 +03:00
missionfloyd
85bd9b3d31 Work with multiple gamepads 2023-05-06 22:47:35 -06:00
missionfloyd
99f3bf07d2 gamepad repeat option 2023-05-06 22:16:51 -06:00
missionfloyd
cca5782d18 Improve joypad performance 2023-05-06 22:00:13 -06:00
missionfloyd
5cbc1c5d43
Fix spelling 2023-05-05 23:03:32 -06:00
missionfloyd
a46c23b10f Make gamepad navigation optional 2023-05-05 22:48:27 -06:00
蚊子
8462d07116
style.css: Make the image in the ImageViewer be resized correctly 2023-05-06 01:17:39 +01:00
w-e-w
381674739e add denoising strength filename pattern 2023-05-06 02:24:33 +09:00
w-e-w
cde0d642f3 add denoising strength filename pattern 2023-05-06 02:20:33 +09:00
missionfloyd
79a6c5a666
Fix stretched thumbnails on extras tab 2023-05-05 03:51:51 -06:00
Sakura-Luna
a3cdf9aaf8 Reopen image fix 2023-05-05 15:52:34 +08:00
Aarni Koskela
16f0739db0 Make localization.js do nothing if there's no localization to do 2023-05-04 20:18:01 +03:00
Leo Mozoloa
c3eced22fc Fix some Loras not working 2023-05-04 16:14:33 +02:00
Sakura-Luna
8bc4a3a2a8 Refresh fix 2023-05-04 15:59:42 +08:00
Sakura-Luna
91a15dca80 Use a new way to solve webpage refresh 2023-05-04 14:38:15 +08:00
Sakura-Luna
5c66fedb64 Revert "Fix gallery not being refreshed correctly"
This reverts commit 2c24e09dfc.
2023-05-04 14:08:22 +08:00
Sakura-Luna
35e5916af9 Revert "Add img2img refreshed correctly"
This reverts commit 988dd02632.
2023-05-04 14:08:21 +08:00
Sakura-Luna
29e13867bf Revert "Refresh bug fix"
This reverts commit eff00413ae.
2023-05-04 14:08:20 +08:00
Acncagua Slt
1bebb50da9
No double calls will be made
Do not call load_upscalers in list_builtin_upscalers
2023-05-04 11:59:22 +09:00
papuSpartan
f0efc8c211 not being cast properly every time, swap to ints 2023-05-03 21:10:31 -05:00
Weiming Dong
251be61a80 Add extra None option for VAE 2023-05-04 07:59:52 +08:00
papuSpartan
e960781511 fix maximum downsampling option 2023-05-03 13:12:43 -05:00
papuSpartan
f08ae96115 resolve merge conflicts and swap to dev branch for now 2023-05-03 02:21:50 -05:00
w-e-w
14e55a3301 print PIL.UnidentifiedImageError 2023-05-03 14:28:59 +09:00
Acncagua Slt
efe98ca090
Initialize the upscalers
Add modelloader.load_upscalers to def initialize()
2023-05-03 00:44:16 +09:00
AUTOMATIC1111
335428c2c8
Merge pull request #9140 from yedpodtrzitko/yed/reuse-existing-venv
feat: use existing virtualenv if already active
2023-05-02 11:05:00 +03:00
AUTOMATIC
14b70aa97b revert unwanted change from #9865 2023-05-02 11:03:11 +03:00
AUTOMATIC1111
4b6808f6ed
Merge pull request #9865 from catalpaaa/subpath-support
add subpath support
2023-05-02 11:01:27 +03:00
AUTOMATIC
4499bead4c Merge branch 'master' into dev 2023-05-02 09:25:47 +03:00
AUTOMATIC
5ab7f213be fix an error that prevents running webui on torch<2.0 without --disable-safe-unpickle 2023-05-02 09:20:35 +03:00
AUTOMATIC
b1717c0a48 do not wait for shared.sd_model to load at startup 2023-05-02 09:08:00 +03:00
catalpaaa
9eb5b3e90f
Merge branch 'experimental' into subpath-support 2023-05-01 11:59:21 -07:00
AUTOMATIC1111
696c338ee2
Merge pull request #9953 from akx/js-misc-fixes
Miscellaneous JS fixes
2023-05-01 14:39:52 +03:00
AUTOMATIC1111
50f63e2247
Merge branch 'dev' into js-misc-fixes 2023-05-01 14:39:46 +03:00
AUTOMATIC
72cd27a135 update changelog 2023-05-01 14:33:44 +03:00
AUTOMATIC
fe8a10d428 Merge branch 'release_candidate' 2023-05-01 14:27:53 +03:00
AUTOMATIC
b463b8a126 Merge branch 'release_candidate' into dev 2023-05-01 14:09:53 +03:00
AUTOMATIC1111
6fbd85dd0c
Merge pull request #9969 from AUTOMATIC1111/restore_progress_fix
restore_progress fix
2023-05-01 14:09:32 +03:00
AUTOMATIC
f57445f7c4 Merge branch 'release_candidate' into dev 2023-05-01 14:01:29 +03:00
w-e-w
33e6bc34ff restore_progress fix
id was the wrong type
2023-05-01 19:59:52 +09:00
AUTOMATIC
67f5c2abb0 make it impossible to press the restore progress button after pressing it once 2023-05-01 13:58:10 +03:00
AUTOMATIC
f15b7e52e3 Add a comment and partial fix for the issue when the inpaint UI is unresponsive after using it. 2023-05-01 13:47:46 +03:00
AUTOMATIC
74d249f6dd Merge branch 'release_candidate' into dev 2023-05-01 12:48:28 +03:00
AUTOMATIC
94754c60c5 attempt to fix broken github CI 2023-05-01 12:47:52 +03:00
AUTOMATIC1111
a7aa046016
Merge pull request #9965 from AUTOMATIC1111/xyz_checkpoint_override
XYZ checkpoint switch via Override
2023-05-01 12:47:30 +03:00
AUTOMATIC1111
97b9800bc6
Merge pull request #9958 from AUTOMATIC1111/model_override_enhancement
override setting "model override" enhancement
2023-05-01 12:46:09 +03:00
w-e-w
cfbe68184c use override to apply checkpoint 2023-05-01 17:47:31 +09:00
w-e-w
0d1ef296b9 checkpoint override enhancement 2023-05-01 05:22:53 +09:00
Aarni Koskela
c714300265 Use substring instead of deprecated substr 2023-04-30 22:26:11 +03:00
Aarni Koskela
4bb441bb08 Remove redundant return 2023-04-30 22:26:11 +03:00
Aarni Koskela
b7269f781c Mark Notification.requestPermission's retval as purposely ignored 2023-04-30 22:26:11 +03:00
Aarni Koskela
f6a40a2ffa Fix unused variables 2023-04-30 22:26:11 +03:00
Aarni Koskela
8ccc27127b Fix a whole bunch of implicit globals 2023-04-30 22:08:52 +03:00
Aarni Koskela
34a6ad80d5 Use classList.toggle wherever possible 2023-04-30 14:48:02 +03:00
Aarni Koskela
ee973dcf1d imageMaskFix.js: fix event listeners to not use anonymous trampoline 2023-04-30 14:46:03 +03:00
Aarni Koskela
13d8d65ef9 hints: don't process elements that already have a title 2023-04-30 14:46:03 +03:00
AUTOMATIC
5e4a0e3d24 attempt to fix broken github CI 2023-04-29 23:02:23 +03:00
AUTOMATIC
06b6d2f2e2 Merge branch 'dev' into release_candidate 2023-04-29 22:50:49 +03:00
AUTOMATIC
ab287682bf add changelog 2023-04-29 22:50:34 +03:00
AUTOMATIC
e23063610f Merge branch 'dev' into release_candidate 2023-04-29 22:23:21 +03:00
AUTOMATIC
cd7f2b19f4 increase extra networks UI height to fit two rows of cards. 2023-04-29 22:17:32 +03:00
AUTOMATIC
c48ab36cb9 alternate restore progress button implementation 2023-04-29 22:16:54 +03:00
AUTOMATIC
bd9700405a Revert "Merge pull request #7595 from siutin/feature/restore-progress"
This reverts commit 80987c36f9, reversing
changes made to 2e78e65a22.
2023-04-29 22:15:20 +03:00
AUTOMATIC1111
80987c36f9
Merge pull request #7595 from siutin/feature/restore-progress
restore the progress from session lost / tab reload
2023-04-29 22:13:48 +03:00
AUTOMATIC1111
15c4e78b44
Merge branch 'dev' into feature/restore-progress 2023-04-29 22:13:40 +03:00
AUTOMATIC1111
2e78e65a22
Merge pull request #9907 from garrettsutula/master
Add disable_tls_verify arg for use with self-signed certs
2023-04-29 20:29:38 +03:00
AUTOMATIC1111
0d32cb2cf5
Merge branch 'dev' into master 2023-04-29 20:29:23 +03:00
AUTOMATIC
90e4659822 bump gradio to 3.28.1 2023-04-29 20:28:30 +03:00
AUTOMATIC
f9253cee66 do not fail all Loras if some have failed to load when making a picture 2023-04-29 20:28:30 +03:00
AUTOMATIC1111
3baeefd30a
Merge pull request #9933 from w-e-w/dev
add missing filename pattern hints
2023-04-29 20:13:58 +03:00
AUTOMATIC1111
45371704f6
Merge pull request #7632 from papuSpartan/gamepad
Image viewer scrolling via analog stick
2023-04-29 19:43:34 +03:00
AUTOMATIC
e40b2d947d change gradio callback from change to release in a bunch of places now that it's fixed in gradio 2023-04-29 19:39:22 +03:00
AUTOMATIC
a95dc02535 remove unwanted changes from #8789 2023-04-29 19:05:43 +03:00
AUTOMATIC1111
f96e6fbd0c
Merge pull request #8789 from Rucadi/master
Add polling and reload callback for extensions.
2023-04-29 19:03:10 +03:00
AUTOMATIC1111
0e0e70c273
Merge pull request #8924 from kurilee/master
Add option "keep original size" to textual inversion images preprocess
2023-04-29 18:51:12 +03:00
AUTOMATIC1111
b615a2ed11
Merge pull request #9108 from AUTOMATIC1111/img2img-scale-by
add "resize by" and "resize to" tabs to img2img
2023-04-29 18:21:28 +03:00
AUTOMATIC1111
eabecc21ec
Update modules/ui.py
Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com>
2023-04-29 18:20:11 +03:00
AUTOMATIC1111
103fc062a5
Merge pull request #8999 from Reibies/patch-1
Changed: extra network height css
2023-04-29 18:18:43 +03:00
AUTOMATIC
e30502a64b Remove NVidia URL for torch installation on OSX. 2023-04-29 18:00:51 +03:00
AUTOMATIC
1a50272e7c revert some questionable changes from #9159 2023-04-29 17:45:22 +03:00
AUTOMATIC1111
f685fe7250
Merge pull request #9159 from space-nuko/ui-config-tabs
Make selected tab configurable with UI config
2023-04-29 17:43:07 +03:00
AUTOMATIC1111
88c7debb02
Merge branch 'dev' into ui-config-tabs 2023-04-29 17:42:57 +03:00
AUTOMATIC1111
97167a5768
Merge pull request #9165 from Ming424/update-readme
Small update for readme
2023-04-29 17:39:02 +03:00
AUTOMATIC1111
609b8933a2
Merge pull request #9258 from wywywywy/bug-outpainting-mk2-file-format
bug: Outpainting Mk2 & Poorman should use the SAMPLE file format to save images, not GRID file format
2023-04-29 17:23:22 +03:00
AUTOMATIC1111
5524301ab8
Merge pull request #9169 from space-nuko/extension-settings-backup
Extension settings backup/restore feature
2023-04-29 17:22:42 +03:00
AUTOMATIC1111
78d0ee3bba
Merge branch 'dev' into extension-settings-backup 2023-04-29 17:22:24 +03:00
AUTOMATIC1111
c018eefe91
Merge pull request #8563 from ParityError/master
Update webui.sh
2023-04-29 17:16:57 +03:00
AUTOMATIC1111
1185bf3981
Merge branch 'dev' into master 2023-04-29 17:16:52 +03:00
AUTOMATIC1111
8987764395
Merge pull request #9312 from space-nuko/save-merge-recipe
Embed model merge metadata in .safetensors file
2023-04-29 17:15:01 +03:00
AUTOMATIC1111
31dbec6b76
Merge pull request #9315 from GeorgLegato/get_uiCurrentTab_Gr3.23_wrong
get_uiCurrentTab() wrong with Gradio 3.23.0
2023-04-29 17:09:04 +03:00
AUTOMATIC
1bab1797c0 use parsed commandline args for --skip-install 2023-04-29 17:07:21 +03:00
AUTOMATIC1111
ad7fd488bc
Merge pull request #9330 from micky2be/patch-1
Fix skip-install bug (see #8935)
2023-04-29 17:04:48 +03:00
AUTOMATIC1111
ce64cab397
Merge branch 'dev' into patch-1 2023-04-29 17:04:37 +03:00
AUTOMATIC1111
c89cad2b9a
Merge pull request #9314 from Pluventi/master
Fix "Bug batch process"  on extras tab , even with a clean install of "stable diffusion webui"
2023-04-29 17:02:24 +03:00
AUTOMATIC1111
3894609b52
Merge branch 'dev' into master 2023-04-29 17:02:14 +03:00
AUTOMATIC1111
17cce45613
Merge pull request #8948 from hitomi/master
Fix --realesrgan-models-path and --ldsr-models-path not working
2023-04-29 17:00:24 +03:00
AUTOMATIC1111
f2af6dad71
Merge pull request #9351 from nart4hire/fix-ngrok-recreate-tunnel
Fix Ngrok recreating tunnels every reload
2023-04-29 16:56:20 +03:00
AUTOMATIC1111
78b5bed374
Merge pull request #9407 from GoulartNogueira/master
Fix orientation bug on preprocess
2023-04-29 16:53:17 +03:00
AUTOMATIC1111
1142a87c6a
Merge pull request #9219 from Z-nonymous/master
Fix #9185
2023-04-29 16:52:39 +03:00
AUTOMATIC1111
579e13df7c
Merge pull request #8847 from space-nuko/remove-watermark-option
Remove "do not add watermark to images" option
2023-04-29 16:50:58 +03:00
AUTOMATIC1111
263f0fb59c
Merge branch 'dev' into remove-watermark-option 2023-04-29 16:50:52 +03:00
AUTOMATIC
faff08f396 rework [batch_number]/[generation_number] filename patterns 2023-04-29 16:48:43 +03:00
w-e-w
7749f2d8ad hints [batch_number] [generation_number] 2023-04-29 22:47:38 +09:00
AUTOMATIC1111
8651943cf9
Merge pull request #9445 from gakada/master
Add [batch_number] and [generation_number] filename patterns
2023-04-29 16:41:19 +03:00
AUTOMATIC1111
e7d624574d
Merge branch 'dev' into master 2023-04-29 16:41:01 +03:00
AUTOMATIC
7428fb5176 add is_hr_pass field for processing 2023-04-29 16:28:51 +03:00
AUTOMATIC1111
e10ee96272
Merge pull request #9334 from gmasil/prepare-simpler-docker-integration-for-styles-csv
allow styles.csv to be symlinked or mounted in docker
2023-04-29 16:14:55 +03:00
AUTOMATIC1111
dae82c69a6
Merge pull request #9365 from bluelovers/pr/xyz-sort-001
feat(xyz): try sort Checkpoint name values
2023-04-29 16:10:20 +03:00
AUTOMATIC1111
725a3849d2
Merge branch 'dev' into pr/xyz-sort-001 2023-04-29 16:10:16 +03:00
AUTOMATIC
8863b31d83 use correct images for previews when using AND (see #9491) 2023-04-29 16:06:20 +03:00
AUTOMATIC
737b73a820 some extra lines I forgot to add for previous commit 2023-04-29 16:05:20 +03:00
AUTOMATIC
1d11e89698 rework Negative Guidance minimum sigma to work with AND, add infotext and copypaste parameters support 2023-04-29 15:57:09 +03:00
w-e-w
720fc88273 hints [clip_skip] 2023-04-29 20:55:01 +09:00
AUTOMATIC1111
3591eefedf
Merge pull request #9177 from devNegative-asm/master
(Optimization) Option to remove negative conditioning at low sigma values
2023-04-29 14:38:25 +03:00
AUTOMATIC1111
7b02b17a01
Merge pull request #9404 from DGdev91/master
Forcing PyTorch version for AMD GPUs automatic install
2023-04-29 14:09:51 +03:00
AUTOMATIC1111
967fb51df2
Merge branch 'dev' into master 2023-04-29 14:09:45 +03:00
AUTOMATIC1111
fdac486835
Merge pull request #9484 from infinitewarp/sort-embeddings
sort embeddings by name (case insensitive)
2023-04-29 14:03:02 +03:00
AUTOMATIC
cb940a583d fix extension installation broken by #9518 2023-04-29 13:45:14 +03:00
AUTOMATIC1111
376e99f681
Merge pull request #9592 from liamkerr/generation_params_fix
Fixed generation params in gallery
2023-04-29 13:24:58 +03:00
AUTOMATIC1111
43dd2378af
Merge branch 'dev' into generation_params_fix 2023-04-29 13:24:50 +03:00
AUTOMATIC1111
32c3b97669
Merge pull request #9628 from catboxanon/patch/9092
Fix image mask/composite for weird resolutions
2023-04-29 13:21:32 +03:00
AUTOMATIC1111
43925add0a
Merge pull request #9643 from tqwuliao/Branch_AddNewFilenameGen
Add new FilenameGenerator replacements [hasprompt<prompt1|default><prompt2>..]
2023-04-29 13:10:51 +03:00
AUTOMATIC1111
87535fcf29
Merge branch 'dev' into Branch_AddNewFilenameGen 2023-04-29 13:10:46 +03:00
AUTOMATIC1111
1ffb44b0b2
Merge pull request #9593 from gakada/tcmalloc
Try using TCMalloc on Linux by default
2023-04-29 13:02:00 +03:00
AUTOMATIC1111
e847df7ee9
Merge pull request #9609 from akx/bracket-fix
prompt-bracket-checker: Simplify code + improve error reporting
2023-04-29 12:58:20 +03:00
AUTOMATIC1111
d6a3988b86
Merge pull request #9669 from catboxanon/patch/sampler-schedule-fix
Fix prompt schedule for second order samplers
2023-04-29 12:56:57 +03:00
AUTOMATIC1111
fc6eeda69c
Merge pull request #9130 from Vespinian/fix-api-alwayson_scripts-less-then-requiered-args
[Fix] Prevent alwayson_scripts args param resizing script_arg list when they are inserted in it
2023-04-29 12:55:19 +03:00
AUTOMATIC1111
1dc21d7950
Merge pull request #9677 from weidongkl/master
fix install_dir error
2023-04-29 12:53:11 +03:00
AUTOMATIC
b06205eaf6 Allow user input for gradio theme selection 2023-04-29 12:52:09 +03:00
AUTOMATIC1111
e018c8a391
Merge pull request #8945 from space-nuko/gradio-theme-support
Support Gradio's theme API
2023-04-29 12:45:50 +03:00
AUTOMATIC1111
e6cbfcfe5b
Merge branch 'dev' into gradio-theme-support 2023-04-29 12:45:43 +03:00
AUTOMATIC1111
2c935d8eb0
Merge pull request #9518 from yike5460/master
add branch support for extension installation
2023-04-29 12:41:30 +03:00
AUTOMATIC
aee6d9bb74 remove unneeded warning filter 2023-04-29 12:39:05 +03:00
AUTOMATIC
ee71eee181 stuff related to torch version change 2023-04-29 12:36:50 +03:00
AUTOMATIC1111
9eb49b04e3
Merge pull request #9191 from vladmandic/torch
update torch base environment
2023-04-29 11:59:12 +03:00
AUTOMATIC1111
f54cd3f158
Merge branch 'dev' into torch 2023-04-29 11:58:54 +03:00
AUTOMATIC1111
e55cb92067
Merge pull request #9737 from AdjointOperator/master
add tiled inference support for ScuNET
2023-04-29 11:34:35 +03:00
AUTOMATIC
5fe0dd79be rename CPU RNG to RNG source in settings, add infotext and parameters copypaste support to RNG source 2023-04-29 11:29:37 +03:00
AUTOMATIC1111
cb9571e37f
Merge pull request #9734 from deciare/cpu-randn
Option to make images generated from a given manual seed consistent across CUDA and MPS devices
2023-04-29 11:16:06 +03:00
AUTOMATIC1111
09069918e8
Merge pull request #9750 from TFWol/patch-1
Remove old code roll random artists
2023-04-29 11:15:26 +03:00
AUTOMATIC1111
71322400fd
Merge pull request #9392 from pangbo13/xyz-plot-dropdown
Add dropdown for X/Y/Z plot
2023-04-29 11:09:04 +03:00
AUTOMATIC1111
dda839f686
Merge pull request #9693 from racinmat/hidable_buttons
adds label to buttons to make them hide
2023-04-29 11:05:51 +03:00
AUTOMATIC1111
cc067555fb
Merge pull request #9813 from arrix/interrogate_download
fix: couldn't remove interrogate_tmp dir while downloading interrogate categories
2023-04-29 10:46:58 +03:00
AUTOMATIC1111
7393fdefd5
Merge pull request #9060 from AlUlkesh/master
fix: lightboxModal, selectedTab
2023-04-29 10:46:10 +03:00
AUTOMATIC1111
d2f3f40a86
Merge pull request #9227 from bbonvi/master
fix disappearing live previews and progressbar during slow tasks
2023-04-29 10:29:47 +03:00
AUTOMATIC1111
1f6aabbd55
Merge pull request #9839 from dennissheng/master
fix ui img2img scripts
2023-04-29 10:26:47 +03:00
AUTOMATIC
86bafb625a put asyncio fix into a function to make it more obvious where it starts and ends 2023-04-29 10:21:01 +03:00
AUTOMATIC1111
24dec9c832
Merge pull request #9319 from wk5ovc/patch-1
Fix #9046 /sdapi/v1/txt2img endpoint not working
2023-04-29 10:14:19 +03:00
AUTOMATIC1111
840d1854cd
Merge pull request #9862 from missionfloyd/extras-sliders-main
Change extras "scale to" to sliders
2023-04-29 10:07:01 +03:00
AUTOMATIC1111
5b4bcea956
Merge pull request #9757 from missionfloyd/editattention-selectcurrentword
Automatically select current word when adjusting weight with ctrl+up/down
2023-04-29 10:05:37 +03:00
AUTOMATIC
642d96dcc8 use exist_ok=True instead of checking if directory exists 2023-04-29 10:04:01 +03:00
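For illustration, the pattern being adopted (the output path here is just an example):

```python
import os

# Instead of: if not os.path.exists(path): os.makedirs(path)
# create the directory unconditionally; exist_ok avoids the race
# between the existence check and the creation.
os.makedirs("outputs/txt2img-images", exist_ok=True)
```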
AUTOMATIC1111
39654cc905
Merge pull request #9867 from darnell8/master
Fix CLIP FileExistsError
2023-04-29 10:03:07 +03:00
AUTOMATIC1111
b0f55d374e
Merge pull request #9429 from forsurefr/save-init-images
Add support for saving init images in img2img
2023-04-29 10:01:33 +03:00
AUTOMATIC1111
b0ad46676b
Merge pull request #9884 from aniaan/fix/remove-dup-code
perf(webui): Remove duplicate code
2023-04-29 09:59:40 +03:00
AUTOMATIC1111
cf5bd93200
Merge pull request #9657 from Filexor/master
Add filename pattern for CLIP_stop_at_last_layers (clip skip).
2023-04-29 09:43:43 +03:00
AUTOMATIC
1514add559 remove unneeded imports and type signature 2023-04-29 09:42:49 +03:00
AUTOMATIC1111
37c59c2710
Merge pull request #9723 from missionfloyd/extra-network-none
Add "None" option to extra networks dropdowns
2023-04-29 09:42:07 +03:00
AUTOMATIC1111
38f1c8183b
Merge pull request #9513 from ilya-khadykin/fix_batch_processing
fix(extras): fix batch image processing on 'Extras\Batch Process' tab
2023-04-29 09:30:46 +03:00
AUTOMATIC1111
a33d49cc57
Merge branch 'dev' into fix_batch_processing 2023-04-29 09:30:33 +03:00
AUTOMATIC1111
7fc10e0445
Merge pull request #9134 from space-nuko/improve-custom-code-extension
Improve custom code script
2023-04-29 09:25:39 +03:00
AUTOMATIC
5a666f3904 bump gradio to 3.27 2023-04-29 09:21:31 +03:00
AUTOMATIC
101a18fc84 bump gradio to 3.27 2023-04-29 09:17:35 +03:00
catalpaaa
ecdc6471e7 bump gradio to 3.28 2023-04-28 12:23:53 -07:00
Garrett Sutula
aac478cb9d Should be "utils" 2023-04-27 21:40:16 -04:00
Garrett Sutula
ed46abea35 fix for method moved to gradio_client 2023-04-27 21:30:55 -04:00
Garrett Sutula
d1e62b2961 Improve param semantics, 2023-04-27 21:30:19 -04:00
Garrett Sutula
fa74daacbd Update gradio version to version that adds ssl_verify 2023-04-27 20:31:17 -04:00
Garrett Sutula
43186ad084 Add tls_verify arg for use with self-signed certs 2023-04-27 20:29:21 -04:00
aniaan
7ea5be3e29 perf(webui): Remove duplicate code 2023-04-26 20:59:55 +08:00
darnell8
bb426de1cd Fix CLIP FileExistsError 2023-04-25 22:53:06 +08:00
catalpaaa
b2f6e0704e add subpath support 2023-04-25 07:27:24 -07:00
missionfloyd
0e071ae504 Custom delimiters 2023-04-25 08:08:57 -06:00
missionfloyd
84c5b0801a Update postprocessing_upscale.py 2023-04-24 20:07:24 -06:00
dennissheng
bbc7a778d8 fix ui img2img scripts 2023-04-24 17:36:16 +08:00
bbonvi
8ae8aeca75 poll progress for 40 seconds
under some extreme network conditions, 20 seconds may not be enough
2023-04-24 13:12:49 +06:00
arrix
05d7a63bbb fix: couldn't remove interrogate_tmp dir 2023-04-23 12:44:12 +08:00
missionfloyd
c1fdba5904
Remove hyphen, underscore delimiters 2023-04-21 13:44:31 -06:00
missionfloyd
27d02597c7
Remove parentheses if weight == 1 2023-04-21 01:37:29 -06:00
Sakura-Luna
eff00413ae Refresh bug fix 2023-04-21 12:34:38 +08:00
missionfloyd
7ef5551634 Update delimiters 2023-04-20 02:24:38 -06:00
missionfloyd
ee172c0fc1 Simplify finding word boundaries
This also makes it work with prompts without spaces between words
2023-04-20 01:34:13 -06:00
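A Python rendering of the same word-boundary walk (the real change is in the webui's JavaScript; the delimiter set here is an assumption):

```python
DELIMITERS = " .,\\/!?%^*;:{}=`~()"

def select_current_word(text: str, pos: int) -> tuple[int, int]:
    # Walk outward from the cursor until a delimiter is hit on either
    # side; unlike a \w-based regex this also works for prompts whose
    # "words" are not separated by spaces (e.g. comma-joined tags).
    start = pos
    end = pos
    while start > 0 and text[start - 1] not in DELIMITERS:
        start -= 1
    while end < len(text) and text[end] not in DELIMITERS:
        end += 1
    return start, end
```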
missionfloyd
fbd34a6847 Use string.contains() instead of regex 2023-04-20 01:12:59 -06:00
Sakura-Luna
988dd02632 Add img2img refreshed correctly 2023-04-20 15:09:19 +08:00
Sakura-Luna
2c24e09dfc Fix gallery not being refreshed correctly 2023-04-20 14:53:37 +08:00
missionfloyd
e735be8b5b Automatically select current word 2023-04-19 22:04:34 -06:00
TFWol
33365f15bf
Remove old code roll random artists
Removed context menu entry that used to be for rolling artists from the now removed artists.csv.

It was probably meant to be removed at commit 6d805b6.
2023-04-19 14:23:46 -07:00
AdjointOperator
dec5cdd9b8
add tiled inference support for ScuNET 2023-04-19 15:35:50 +08:00
Deciare
d40e44ade4 Option to use CPU for random number generation.
Makes a given manual seed generate the same images across different
platforms, independently of the GPU architecture in use.

Fixes #9613.
2023-04-18 23:27:46 -04:00
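A minimal sketch of the technique using the standard torch API (not the exact webui code):

```python
import torch

def manual_seed_noise(seed: int, shape, device) -> torch.Tensor:
    # Draw the initial noise on the CPU with a seeded generator, then
    # move it to the target device; CUDA and MPS then see identical
    # latents for the same manual seed.
    generator = torch.Generator("cpu").manual_seed(seed)
    return torch.randn(shape, generator=generator, device="cpu").to(device)
```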
missionfloyd
f4b332f041 Add "None" option to extra networks dropdowns 2023-04-18 17:01:46 -06:00
Matěj Račinský
eddcdb8061 adds label to buttons to make them hide 2023-04-17 23:48:28 +02:00
weidong
152ed34ccc
fix install_dir error
When the user's home directory and username are inconsistent, an error message stating that the directory cannot be found appears. Default the installation directory to the user's home directory instead.
2023-04-17 17:17:10 +08:00
siutin
3e5b3c79e4 replace with #wrap_session_call 2023-04-17 13:53:41 +08:00
catboxanon
9de7298898
Update processing.py 2023-04-16 21:06:37 -04:00
catboxanon
234fa9a57d
Update shared.py 2023-04-16 21:06:22 -04:00
catboxanon
4d0c816303
Modify step multiplier flow 2023-04-16 20:39:45 -04:00
catboxanon
81b276a1ea
Add second order samplers compat option 2023-04-16 20:39:18 -04:00
catboxanon
56f8a6b081
Fix sampler schedules with step multiplier 2023-04-16 20:34:52 -04:00
siutin
984970068c multi users support 2023-04-17 01:06:28 +08:00
File_xor
acbec22554 Add the mandatory self argument to the [clip_skip] filename pattern. 2023-04-16 17:14:11 +09:00
File_xor
596556162e Add filename pattern for CLIP_stop_at_last_layers. 2023-04-16 16:49:21 +09:00
tqwuliao
02e3518807 Add new FilenameGenerator [hasprompt<prompt1|default><prompt2>..] 2023-04-15 23:20:08 +08:00
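A hypothetical Python rendering of how such a [hasprompt&lt;...&gt;] token could expand (names and regex are illustrative, not the actual implementation):

```python
import re

def hasprompt_token(prompt: str, pattern_args: str) -> str:
    # Each <...> group yields its word when the prompt contains it,
    # otherwise the fallback given after "|" (empty if none).
    pieces = []
    for group in re.findall(r"<([^>]*)>", pattern_args):
        word, _, default = group.partition("|")
        pieces.append(word if word and word.lower() in prompt.lower() else default)
    return "".join(pieces)
```

So, for example, hasprompt_token("masterpiece, 1girl", "&lt;1girl|solo&gt;") would yield "1girl".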
catboxanon
fbab3fc6d1
Only handle image mask if any option enabled 2023-04-14 17:24:55 -04:00
catboxanon
3af152d488
Fix image mask composite for weird resolutions 2023-04-14 17:17:14 -04:00
Sakura-Luna
57a3d146e3 Tooltip localization support 2023-04-14 14:09:33 +08:00
Brad Smith
dab5002c59
sort self.word_embeddings without instantiating it a new dict 2023-04-13 23:19:10 -04:00
Aarni Koskela
fcc194afad prompt-bracket-checker: Simplify + improve error reporting 2023-04-13 23:00:32 +03:00
Vladimir Mandic
7fb72edaff
change index url 2023-04-13 06:47:48 -04:00
gk
8af4b3bbe4 Try using TCMalloc on Linux by default 2023-04-13 10:19:03 +09:00
DGdev91
9edd4b6e51 Using --index-url instead of --extra-index-url following new PyTorch install command 2023-04-11 11:22:28 +02:00
papuSpartan
dff60e2e74
Update sd_models.py 2023-04-10 04:10:50 -05:00
papuSpartan
a9902ca331
Update generation_parameters_copypaste.py 2023-04-10 04:03:01 -05:00
papuSpartan
c510cfd24b
Update shared.py
fix typo
2023-04-10 03:43:56 -05:00
papuSpartan
1c11062603 add token merging options to infotext when necessary. Bump tomesd
version
2023-04-10 03:41:05 -05:00
yike5460
7c62bb2788 fix: support for default branch 2023-04-10 09:38:26 +08:00
bluelovers
c84118d70d feat(xyz): try sort Checkpoint name values 2023-04-10 05:03:02 +08:00
Ilya Khadykin
c19618f370 fix(extras): fix batch image processing on 'Extras\Batch Process' tab
This change fixes an issue where an incorrect type was passed to the PIL.Image.open() function that caused the whole process to fail.

Scope of this change is limited to only batch image processing, and it shouldn't affect other functionality.
2023-04-09 21:33:09 +02:00
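A hedged sketch of the kind of fix involved (the wrapper attribute is the usual tempfile `.name`; the actual code may differ):

```python
from PIL import Image

def open_batch_image(file_obj) -> Image.Image:
    # PIL.Image.open() wants a path or a binary file object; gradio's
    # batch upload hands back wrapper objects, so unwrap to the
    # temp-file path first.
    path = getattr(file_obj, "name", file_obj)
    return Image.open(path)
```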
yike5460
1aba8d82cb feat: add branch support for extension installation 2023-04-09 22:22:43 +08:00
Brad Smith
27b9ec60e4
sort embeddings by name (case insensitive) 2023-04-08 15:58:00 -04:00
gk
d609f6030e Add [batch_number] and [generation_number] filename patterns 2023-04-07 21:04:46 +09:00
forsurefr
63a6f9b4d9
Do not save init image by default 2023-04-07 12:13:51 +03:00
For Sure
b3593d0997 Add support for saving init images in img2img 2023-04-06 19:42:26 +03:00
Andre Ubuntu
48c06af8dc Pythonic way to achieve it 2023-04-05 20:51:29 -03:00
DGdev91
3a5b47e26e Forcing PyTorch version for AMD GPUs automatic install
The old code tries to install the newest version of pytorch, which is currently 2.0. Force it to 1.13.1.
2023-04-06 01:36:27 +02:00
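For illustration, the same pinning expressed through launch.py's TORCH_COMMAND override (the exact package versions and ROCm index URL here are assumptions):

```python
import os

# Pin the last torch 1.x ROCm build for RX 5000-series cards instead
# of letting pip pick 2.0; launch.py reads TORCH_COMMAND as an override.
os.environ["TORCH_COMMAND"] = (
    "pip install torch==1.13.1+rocm5.2 torchvision==0.14.1+rocm5.2 "
    "--index-url https://download.pytorch.org/whl/rocm5.2"
)
```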
Andre Ubuntu
52a8f286ef fix preprocess orientation 2023-04-05 20:28:00 -03:00
pangbo13
3ac5f9c471 fix axis swap and infotxt 2023-04-05 21:43:27 +08:00
pangbo13
c01dc1cb30 add dropdown for X/Y/Z plot 2023-04-05 19:22:51 +08:00
Vladimir Mandic
80752f43b2
revert xformers 2023-04-04 17:27:27 -04:00
hitomi
2ba42bfbd2 fix --ldsr-models-path not working 2023-04-04 20:39:51 +08:00
hitomi
539a69860b fix --realesrgan-models-path not working 2023-04-04 20:39:51 +08:00
Nathanael Santoso
3158d17ccf fixed an issue with using ngrok for other connections and also ngrok not using auth_token 2023-04-04 07:41:55 +00:00
papuSpartan
cf5a5773bf :p 2023-04-04 02:39:13 -05:00
papuSpartan
ab195ab0da bump tomesd package version 2023-04-04 02:31:57 -05:00
papuSpartan
5c8e53d5e9 Allow different merge ratios to be used for each pass. Make toggle cmd flag work again. Remove ratio flag. Remove warning about controlnet being incompatible 2023-04-04 02:26:44 -05:00
Nathanael Santoso
2edf73b38f Improved message clarity 2023-04-04 06:57:39 +00:00
Nathanael Santoso
5ebe3b2504 Added guard clause to prevent multiple tunnel creations 2023-04-04 06:50:29 +00:00
space-nuko
7201d940a4 Improve frontend responsiveness for some buttons 2023-04-03 21:27:48 -05:00
Vladimir Mandic
4fa59b045a
update xformers 2023-04-03 15:23:35 -04:00
Liam
54fd00ff8f fixed logic for updating the displayed generation params when the image modal is closed 2023-04-03 13:28:20 -04:00
gmasil
f7215906af
allow styles.csv to be symlinked or mounted in docker without moving the file around 2023-04-03 18:19:57 +02:00
Micky Brunetti
d537a1f1b6
Fix skip-install bug (see #8935) 2023-04-04 00:14:20 +09:00
keith
aef42bfec0
Fix #9046 /sdapi/v1/txt2img endpoint not working
**Describe what this pull request is trying to achieve.**

Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9046

**Environment this was tested in**

* OS: Linux
* Browser: chrome
* Graphics card: RTX 3090
2023-04-03 17:05:49 +08:00
GeorgLegato
d9fdb52149
Update script.js
updated how tabs are presented in DOM with Gradio 3.23.
2023-04-03 04:53:29 +02:00
space-nuko
fbaf6e4fd8 Namespace metadata fields 2023-04-02 21:41:23 -05:00
Pluventi
9a4e650800 Update postprocessing.py
Solution for anyone getting an error when batching on extras, even with a clean install of "stable diffusion webui"
2023-04-03 03:32:48 +02:00
space-nuko
7c016dd642 Calculate shorthash on merge if not exist 2023-04-02 19:06:39 -05:00
space-nuko
afc349c2c0 Add field for model merge type
Incase this is supported by other merge extensions
2023-04-02 18:40:33 -05:00
space-nuko
d132481058 Embed model merge metadata in .safetensors file 2023-04-02 17:41:55 -05:00
ParityError
5225393bde
Merge branch 'AUTOMATIC1111:master' into master 2023-04-01 22:15:12 -07:00
papuSpartan
c707b7df95 remove excess condition 2023-04-01 23:47:10 -05:00
papuSpartan
a609bd56b4 Transition to using settings through UI instead of cmd line args. Added feature to only apply to hr-fix. Install package using requirements_versions.txt 2023-04-01 22:18:35 -05:00
papuSpartan
8c88bf4006 use pypi package for tomesd instead of manually cloning repo 2023-04-01 14:12:12 -05:00
wywywywy
80b847e72d bug: poorman use sample file format not grid 2023-04-01 10:47:49 +01:00
wywywywy
9dc722bcf2 bug: outpaint-mk2 use sample file format not grid 2023-04-01 10:39:50 +01:00
papuSpartan
26ab018253 delay import 2023-04-01 03:31:22 -05:00
papuSpartan
ef8c044051 forgot to add reinstall arg back earlier since args moved out of shared 2023-04-01 03:21:23 -05:00
papuSpartan
56680cd84a first 2023-04-01 02:07:08 -05:00
bbonvi
c938b172a4 fix missing live preview and progress during certain tasks
Sometimes tasks take longer than 5 seconds to start,
resulting in a missing progress bar and live previews,
so we have to keep polling for progress a bit longer (5s -> 20s).
2023-03-31 19:34:58 +06:00
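A rough Python sketch of the widened polling window described here and in the later 20s -> 40s bump (the endpoint shape and `requests` usage are assumptions; the real client is JavaScript):

```python
import time
import requests  # assumed available; any HTTP client works

def poll_progress(url: str, window: float = 20.0, interval: float = 0.5):
    # Keep asking the server for progress for up to `window` seconds
    # rather than giving up after the first few tries; slow tasks may
    # not have started when the first request lands.
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        response = requests.get(url)
        if response.ok:
            data = response.json()
            if data.get("active"):
                return data
        time.sleep(interval)
    return None
```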
Z_nonymous
18e4ca4694 Fix #9185 2023-03-31 10:54:42 +02:00
missionfloyd
3ebdd2afd3 Don't return upscaling_res_switch_btn 2023-03-31 00:56:38 -06:00
missionfloyd
69ad46b047 Import switch_values_symbol 2023-03-30 23:25:39 -06:00
missionfloyd
a73f3bf0cf Change extras "scale to" to sliders 2023-03-30 23:19:40 -06:00
Vladimir Mandic
d5063e07e8 update torch 2023-03-30 10:57:54 -04:00
siutin
70ab21e67d keep randomId simpler 2023-03-30 17:20:09 +08:00
siutin
90366b8d85 tool button 2023-03-30 17:20:09 +08:00
siutin
e0b58527ff use condition to wait for result 2023-03-30 17:20:09 +08:00
siutin
4242e194e4 add a button to restore the current progress 2023-03-30 17:20:09 +08:00
siutin
9407f1731a store the last generated result 2023-03-30 17:20:09 +08:00
siutin
dbca512154 add an internal API for obtaining current task id 2023-03-30 17:20:09 +08:00
devdn
44e8e9c368 fix live preview & alternate uncond guidance for better quality 2023-03-30 00:54:28 -04:00
space-nuko
3ccf6f5ae8 Add webui link 2023-03-29 19:26:52 -05:00
space-nuko
563d048780 Squelch warning if no config restore 2023-03-29 19:22:45 -05:00
space-nuko
1c0544abdb Add links for commits in table, if remote is from GitHub 2023-03-29 19:21:57 -05:00
space-nuko
64bbd3bf03 Make into divs 2023-03-29 19:00:51 -05:00
space-nuko
9b1fa82981 Add filename to UI and config name to filename 2023-03-29 18:55:57 -05:00
space-nuko
f3320b802c Various UI fixes in config state tab 2023-03-29 18:35:25 -05:00
space-nuko
f22d0dde4e Better checking of extension state from Git info 2023-03-29 18:32:29 -05:00
space-nuko
ad5afcaae0 Save/restore working webui/extension configs 2023-03-29 16:55:33 -05:00
Thierry
384bfe22cd Update launch.py 2023-03-29 17:00:20 -04:00
Thierry
baef594e4a Update README.md 2023-03-29 16:58:56 -04:00
Thierry
3c7b928914 Update README.md 2023-03-29 16:52:45 -04:00
space-nuko
67955ca9e5 Make selected tab configurable with UI config 2023-03-29 13:07:12 -05:00
yedpodtrzitko
0d2cf9ac18 feat: use existing virtualenv if already active 2023-03-29 16:35:37 +07:00
space-nuko
79d57d02f1 Improve custom code extension
- Uses `gr.Code` component
- Includes example
- Can return out of body
2023-03-29 01:52:34 -05:00
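A minimal gradio sketch of the swap to `gr.Code` (illustrative, not the extension's actual code):

```python
import gradio as gr

with gr.Blocks() as demo:
    # gr.Code gives syntax highlighting and monospace editing, a better
    # fit for a "custom code" script than a plain gr.Textbox.
    code = gr.Code(language="python", label="Python code", value="print('hi')")
```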
Vespinian
70a0a11783 Changed the behavior that puts the args from an alwayson_scripts request into script_args, so we don't accidentally resize the arg list if we get fewer args than the default list has 2023-03-28 23:52:51 -04:00
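A sketch of the corrected insertion logic, assuming a flat script_args list with per-script offsets (names are illustrative):

```python
def apply_alwayson_args(script_args: list, request_args: list, start: int) -> list:
    # Overwrite only the slots the API request actually supplies,
    # starting at the script's own offset; surrounding defaults stay in
    # place so a short request can no longer shrink the whole args list.
    for offset, value in enumerate(request_args):
        script_args[start + offset] = value
    return script_args
```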
ParityError
f867d7b429
Update README.md
Updated to reflect change in webui.sh, so that the installation directory is not absolute (/home/user).
2023-03-28 18:34:02 -07:00
ParityError
f69acfe9a4
Merge branch 'AUTOMATIC1111:master' into master 2023-03-28 18:29:59 -07:00
ParityError
fb68d93b6a
Update webui-user.sh 2023-03-28 18:27:44 -07:00
devdn
bc90592031 increase range of negative guidance minimum sigma option 2023-03-28 20:59:31 -04:00
devdn
42082e8a32 performance increase 2023-03-28 20:56:01 -04:00
AlUlkesh
5a25826d84 try both versions of appendChild 2023-03-28 23:28:46 +02:00
AUTOMATIC
d667fc435f add "resize by" and "resize to" tabs to img2img 2023-03-28 22:23:40 +03:00
space-nuko
082613036a
Merge branch 'master' into remove-watermark-option 2023-03-27 16:26:23 -05:00
AlUlkesh
9ecf347133 fix: lightboxModal, selectedTab 2023-03-27 20:01:19 +02:00
Reimoo
6f77567e13
Update style.css 2023-03-27 10:08:42 -07:00
Reimoo
527680cd70
Update style.css
Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com>
2023-03-27 10:00:01 -07:00
missionfloyd
8410b1351e
Merge branch 'AUTOMATIC1111:master' into extra-network-preview-lazyload 2023-03-26 21:50:22 -06:00
missionfloyd
cb8c447f0d Update extra-networks-card.html 2023-03-26 21:47:48 -06:00
missionfloyd
efac2cf1ab Merge branch 'extra-network-preview-lazyload' of https://github.com/missionfloyd/stable-diffusion-webui into extra-network-preview-lazyload 2023-03-26 21:47:05 -06:00
Reimoo
88515267b9
Changed: extra network height css
Changed it so cards take up a set amount of vertical space but added the ability to scroll and resize.
2023-03-26 10:29:19 -07:00
space-nuko
c9647c8d23 Support Gradio's theme API 2023-03-25 16:11:41 -04:00
pieresimakp
fb72066ef6 fixed button position 2023-03-25 23:03:22 +08:00
pieresimakp
e3b9d0e3e8 Merge branch 'master' into img2img-detect-image-size 2023-03-25 23:00:45 +08:00
kurilee
993c11549c
Merge branch 'AUTOMATIC1111:master' into master 2023-03-25 22:47:05 +08:00
kurilee
b2fc7dba2e Add option "keep original size" to textual inversion images preprocess 2023-03-25 22:45:41 +08:00
pieresimakp
771ea212de added button to grab the width and height from the loaded image in img2img 2023-03-24 12:41:17 +08:00
space-nuko
d86beb8228 Remove "do not add watermark to images" option 2023-03-23 17:09:59 -04:00
missionfloyd
1d096ed145 Lazy load extra network images 2023-03-21 16:07:24 -06:00
Rucadi
a80d7d090c
Update script_callbacks.py 2023-03-21 18:47:05 +01:00
ParityError
34c0f499c5
Merge branch 'AUTOMATIC1111:master' into master 2023-03-17 00:36:17 -07:00
Vespinian
f6374934db Changed img2img scriptrunner for gui request from scripts_txt2img to scripts_img2img 2023-03-15 17:53:32 -04:00
unknown
54291f9d63
remove redundant load 2023-03-15 04:33:38 -05:00
InvincibleDude
f5e4436453
Merge branch 'master' into improved-hr-conflict-test 2023-03-14 16:55:59 +03:00
unknown
40dc0132df
modularize 2023-03-13 03:39:02 -05:00
ParityError
5c051c0618
Update webui.sh
Installation should not be assumed to be located within the user's home directory. The user should be expected to install the project anywhere and run the startup scripts from the stable-diffusion-webui directory.

See issue #8534
2023-03-12 15:10:44 -07:00
ParityError
6439e72df2
Update webui.sh
Installation should not be assumed to be located within the user's home directory. The user should be expected to install the project anywhere and run the startup scripts from the stable-diffusion-webui directory.

See issue #8534
2023-03-12 15:08:26 -07:00
ParityError
d78c437583 Update webui-user.sh
Installation should not be assumed to be located within the user's home directory. The user should be expected to install the project anywhere and run the startup scripts from the stable-diffusion-webui directory.
2023-03-12 12:41:27 -07:00
InvincibleDude
f6e2737840
Negative prompt fix 2023-03-10 12:13:55 +00:00
InvincibleDude
b9fdb9f701
Fix crash when hr is disabled 2023-03-04 18:09:05 +00:00
InvincibleDude
e97b83bdbb
Merge branch 'master' into improved-hr-conflict-test 2023-03-03 19:49:24 +03:00
InvincibleDude
51f81efb02 Image processing changes
Image processing changes
2023-03-03 19:45:33 +03:00
unknown
bfa14db2cb
enable gallery scrolling functionality for horizontal scroll and gamepads 2023-02-07 16:54:12 -06:00
InvincibleDude
c3bd113a0b
Image info fix 2023-02-05 15:24:41 +00:00
InvincibleDude
f4b78e73a4
Merge branch 'AUTOMATIC1111:master' into improved-hr-conflict-test 2023-02-05 18:02:44 +03:00
unknown
501d4e9cf1
Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui into gamepad 2023-02-05 07:24:57 -06:00
unknown
5e1f4f7464
Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui into gamepad 2023-02-03 20:39:42 -06:00
rucadi
5ca4230524 Merge branch 'master' of https://github.com/Rucadi/stable-diffusion-webui-polling 2023-02-02 20:12:08 +01:00
rucadi
eb5eb8aa11 Add a callback called before reloading the server 2023-02-02 20:10:47 +01:00
rucadi
3662a274e2 Add polling callback 2023-02-02 20:10:47 +01:00
unknown
ade40aa1a0 Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui into gamepad 2023-01-31 02:33:10 -06:00
InvincibleDude
3ec2eb8bf1
Merge branch 'master' into improved-hr-conflict-test 2023-01-30 15:35:13 +03:00
unknown
21766a0898 Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui into gamepad 2023-01-30 05:12:31 -06:00
InvincibleDude
0d834b9394
Merge pull request #2 from InvincibleDude/extra-networks-test
Extra networks test
2023-01-29 20:40:06 +03:00
invincibledude
425eab3464 Extra network in hr abomination fix 2023-01-29 19:26:31 +03:00
invincibledude
9beeef6267 Extra networks loading fix 2023-01-29 19:16:17 +03:00
invincibledude
6127d2ff1b Extra networks loading fix 2023-01-29 19:13:27 +03:00
invincibledude
c92ec3a925 Extra networks loading fix 2023-01-29 19:07:00 +03:00
InvincibleDude
ee3d63b6be
Merge branch 'master' into master 2023-01-29 14:36:10 +03:00
unknown
e79b7db4b4 Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui into gamepad 2023-01-28 03:40:51 -06:00
unknown
b921a52071 basic image next and prev control with joystick 2023-01-28 03:19:10 -06:00
InvincibleDude
44c0e6b993
Merge branch 'AUTOMATIC1111:master' into master 2023-01-24 15:44:09 +03:00
invincibledude
3bc8ee998d Gen params paste improvement 2023-01-22 16:35:42 +03:00
invincibledude
7f62300f7d Gen params paste improvement 2023-01-22 16:29:08 +03:00
invincibledude
fccc39834a Gen params paste improvement 2023-01-22 16:17:55 +03:00
invincibledude
d261bec1ec Gen params paste improvement 2023-01-22 16:14:28 +03:00
invincibledude
1fa777c1d7 Gen params paste improvement 2023-01-22 16:03:42 +03:00
invincibledude
2aaee73633 Gen params paste improvement 2023-01-22 16:00:35 +03:00
invincibledude
a5c2b5ed89 UI and PNG info improvements 2023-01-22 15:50:20 +03:00
invincibledude
bbb1e35ea2 UI and PNG info improvements 2023-01-22 15:44:59 +03:00
invincibledude
b0ae92d605 UI improvements 2023-01-22 15:43:12 +03:00
invincibledude
34f6d66742 hr conditioning 2023-01-22 15:32:47 +03:00
invincibledude
125d5c8d96 hr conditioning 2023-01-22 15:31:11 +03:00
invincibledude
2ab2bce74d hr conditioning 2023-01-22 15:28:38 +03:00
invincibledude
c5d4c87c02 hr conditioning 2023-01-22 15:17:43 +03:00
invincibledude
4e0cf7d4ed hr conditioning 2023-01-22 15:15:08 +03:00
invincibledude
a9f0e7d536 hr conditioning 2023-01-22 15:12:00 +03:00
invincibledude
f774a8d24e Hr-fix separate prompt experimentation 2023-01-22 14:52:01 +03:00
invincibledude
81e0723d65 Logging for debugging 2023-01-22 14:41:41 +03:00
invincibledude
b331ca784a Fix 2023-01-22 14:35:34 +03:00
invincibledude
8114959e7e Hr separate prompt test 2023-01-22 14:28:53 +03:00
InvincibleDude
cd14e7e8fd
Revert 2023-01-22 00:33:21 +03:00
InvincibleDude
35b4104daf
Change to run workflow 2023-01-22 00:32:48 +03:00
invincibledude
f7b38c4841 Style fix 2023-01-22 00:18:26 +03:00
invincibledude
0f6862ef30 PLMS edge-case handling fix 5 2023-01-22 00:11:05 +03:00
invincibledude
6cd7bf9f86 PLMS edge-case handling fix 3 2023-01-22 00:08:58 +03:00
invincibledude
3ffe2e768b PLMS edge-case handling fix 2 2023-01-22 00:07:46 +03:00
invincibledude
9e1f49c4e5 PLMS edge-case handling fix 2023-01-22 00:03:16 +03:00
invincibledude
8bec3a2aa1 Index fix 2023-01-21 23:31:36 +03:00
invincibledude
6c0566f937 Type mismatch fix 2023-01-21 23:25:36 +03:00
invincibledude
3bd898b6ce First test of different sampler for hi-res fix 2023-01-21 23:14:59 +03:00
unknown
876da12599 Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui 2022-12-25 02:03:55 -06:00
rucadi
0c8825b2be Add a callback called before reloading the server 2022-12-16 18:31:20 +01:00
rucadi
1742c04bab Add polling callback 2022-12-16 17:10:13 +01:00
unknown
d6fdfde9d7 Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui 2022-12-12 09:12:26 -06:00
unknown
4005cd66e0 Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui 2022-12-10 04:57:12 -06:00
unknown
4a3d05b657 Merge branch 'master' of github.com:AUTOMATIC1111/stable-diffusion-webui 2022-12-10 02:30:31 -06:00
210 changed files with 13477 additions and 5735 deletions

.eslintignore (new file, +4 lines)

@@ -0,0 +1,4 @@
extensions
extensions-disabled
repositories
venv

.eslintrc.js (new file, +91 lines)

@@ -0,0 +1,91 @@
/* global module */
module.exports = {
    env: {
        browser: true,
        es2021: true,
    },
    extends: "eslint:recommended",
    parserOptions: {
        ecmaVersion: "latest",
    },
    rules: {
        "arrow-spacing": "error",
        "block-spacing": "error",
        "brace-style": "error",
        "comma-dangle": ["error", "only-multiline"],
        "comma-spacing": "error",
        "comma-style": ["error", "last"],
        "curly": ["error", "multi-line", "consistent"],
        "eol-last": "error",
        "func-call-spacing": "error",
        "function-call-argument-newline": ["error", "consistent"],
        "function-paren-newline": ["error", "consistent"],
        "indent": ["error", 4],
        "key-spacing": "error",
        "keyword-spacing": "error",
        "linebreak-style": ["error", "unix"],
        "no-extra-semi": "error",
        "no-mixed-spaces-and-tabs": "error",
        "no-multi-spaces": "error",
        "no-redeclare": ["error", {builtinGlobals: false}],
        "no-trailing-spaces": "error",
        "no-unused-vars": "off",
        "no-whitespace-before-property": "error",
        "object-curly-newline": ["error", {consistent: true, multiline: true}],
        "object-curly-spacing": ["error", "never"],
        "operator-linebreak": ["error", "after"],
        "quote-props": ["error", "consistent-as-needed"],
        "semi": ["error", "always"],
        "semi-spacing": "error",
        "semi-style": ["error", "last"],
        "space-before-blocks": "error",
        "space-before-function-paren": ["error", "never"],
        "space-in-parens": ["error", "never"],
        "space-infix-ops": "error",
        "space-unary-ops": "error",
        "switch-colon-spacing": "error",
        "template-curly-spacing": ["error", "never"],
        "unicode-bom": "error",
    },
    globals: {
        //script.js
        gradioApp: "readonly",
        executeCallbacks: "readonly",
        onAfterUiUpdate: "readonly",
        onOptionsChanged: "readonly",
        onUiLoaded: "readonly",
        onUiUpdate: "readonly",
        uiCurrentTab: "writable",
        uiElementInSight: "readonly",
        uiElementIsVisible: "readonly",
        //ui.js
        opts: "writable",
        all_gallery_buttons: "readonly",
        selected_gallery_button: "readonly",
        selected_gallery_index: "readonly",
        switch_to_txt2img: "readonly",
        switch_to_img2img_tab: "readonly",
        switch_to_img2img: "readonly",
        switch_to_sketch: "readonly",
        switch_to_inpaint: "readonly",
        switch_to_inpaint_sketch: "readonly",
        switch_to_extras: "readonly",
        get_tab_index: "readonly",
        create_submit_args: "readonly",
        restart_reload: "readonly",
        updateInput: "readonly",
        //extraNetworks.js
        requestGet: "readonly",
        popup: "readonly",
        // from python
        localization: "readonly",
        // progressbar.js
        randomId: "readonly",
        requestProgress: "readonly",
        // imageviewer.js
        modalPrevImage: "readonly",
        modalNextImage: "readonly",
        // token-counters.js
        setupTokenCounters: "readonly",
    }
};

.git-blame-ignore-revs (new file, +2 lines)

@@ -0,0 +1,2 @@
# Apply ESlint
9c54b78d9dde5601e916f308d9a9d6953ec39430

(modified file: bug report issue template)

@@ -43,10 +43,19 @@ body:
   - type: input
     id: commit
     attributes:
-      label: Commit where the problem happens
-      description: Which commit are you running ? (Do not write *Latest version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Commit** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)
+      label: Version or Commit where the problem happens
+      description: "Which webui version or commit are you running ? (Do not write *Latest Version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Version: v1.2.3** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)"
     validations:
       required: true
+  - type: dropdown
+    id: py-version
+    attributes:
+      label: What Python version are you running on ?
+      multiple: false
+      options:
+        - Python 3.10.x
+        - Python 3.11.x (above, not supported yet)
+        - Python 3.9.x (below, not recommended)
   - type: dropdown
     id: platforms
     attributes:
@@ -59,6 +68,35 @@ body:
       - iOS
       - Android
       - Other/Cloud
+  - type: dropdown
+    id: device
+    attributes:
+      label: What device are you running WebUI on?
+      multiple: true
+      options:
+        - Nvidia GPUs (RTX 20 above)
+        - Nvidia GPUs (GTX 16 below)
+        - AMD GPUs (RX 6000 above)
+        - AMD GPUs (RX 5000 below)
+        - CPU
+        - Other GPUs
+  - type: dropdown
+    id: cross_attention_opt
+    attributes:
+      label: Cross attention optimization
+      description: What cross attention optimization are you using, Settings -> Optimizations -> Cross attention optimization
+      multiple: false
+      options:
+        - Automatic
+        - xformers
+        - sdp-no-mem
+        - sdp
+        - Doggettx
+        - V1
+        - InvokeAI
+        - "None "
+    validations:
+      required: true
   - type: dropdown
     id: browsers
     attributes:

(modified file: issue template config)

@@ -1,5 +1,5 @@
 blank_issues_enabled: false
 contact_links:
   - name: WebUI Community Support
-    url: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions
+    url: https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions
     about: Please ask and answer questions here.

(modified file: pull request template)

@@ -1,28 +1,15 @@
-# Please read the [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) before submitting a pull request!
-
-If you have a large change, pay special attention to this paragraph:
-
-> Before making changes, if you think that your feature will result in more than 100 lines changing, find me and talk to me about the feature you are proposing. It pains me to reject the hard work someone else did, but I won't add everything to the repo, and it's better if the rejection happens before you have to waste time working on the feature.
-
-Otherwise, after making sure you're following the rules described in wiki page, remove this section and continue on.
-
-**Describe what this pull request is trying to achieve.**
-
-A clear and concise description of what you're trying to accomplish with this, so your intent doesn't have to be extracted from your code.
-
-**Additional notes and description of your changes**
-
-More technical discussion about your changes go here, plus anything that a maintainer might have to specifically take a look at, or be wary of.
-
-**Environment this was tested in**
-
-List the environment you have developed / tested this on. As per the contributing page, changes should be able to work on Windows out of the box.
- - OS: [e.g. Windows, Linux]
- - Browser: [e.g. chrome, safari]
- - Graphics card: [e.g. NVIDIA RTX 2080 8GB, AMD RX 6600 8GB]
-
-**Screenshots or videos of your changes**
-
-If applicable, screenshots or a video showing off your changes. If it edits an existing UI, it should ideally contain a comparison of what used to be there, before your changes were made.
-
-This is **required** for anything that touches the user interface.
+## Description
+
+* a simple description of what you're trying to accomplish
+* a summary of changes in code
+* which issues it fixes, if any
+
+## Screenshots/videos:
+
+## Checklist:
+
+- [ ] I have read [contributing wiki page](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
+- [ ] I have performed a self-review of my own code
+- [ ] My code follows the [style guidelines](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
+- [ ] My code passes [tests](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)

(modified file: lint workflow)

@@ -1,39 +1,38 @@
-# See https://github.com/actions/starter-workflows/blob/1067f16ad8a1eac328834e4b0ae24f7d206f810d/ci/pylint.yml for original reference file
-name: Run Linting/Formatting on Pull Requests
+name: Linter

 on:
   - push
   - pull_request

-# See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpull_requestpull_request_targetbranchesbranches-ignore for syntax docs
-# if you want to filter out branches, delete the `- pull_request` and uncomment these lines :
-# pull_request:
-#  branches:
-#    - master
-#  branches-ignore:
-#    - development
-
 jobs:
-  lint:
+  lint-python:
+    name: ruff
     runs-on: ubuntu-latest
+    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name
     steps:
       - name: Checkout Code
         uses: actions/checkout@v3
-      - name: Set up Python 3.10
-        uses: actions/setup-python@v4
+      - uses: actions/setup-python@v4
         with:
-          python-version: 3.10.6
-          cache: pip
-          cache-dependency-path: |
-            **/requirements*txt
-      - name: Install PyLint
-        run: |
-          python -m pip install --upgrade pip
-          pip install pylint
-      # This lets PyLint check to see if it can resolve imports
-      - name: Install dependencies
-        run: |
-          export COMMANDLINE_ARGS="--skip-torch-cuda-test --exit"
-          python launch.py
-      - name: Analysing the code with pylint
-        run: |
-          pylint $(git ls-files '*.py')
+          python-version: 3.11
+          # NB: there's no cache: pip here since we're not installing anything
+          # from the requirements.txt file(s) in the repository; it's faster
+          # not to have GHA download an (at the time of writing) 4 GB cache
+          # of PyTorch and other dependencies.
+      - name: Install Ruff
+        run: pip install ruff==0.0.272
+      - name: Run Ruff
+        run: ruff .
+  lint-js:
+    name: eslint
+    runs-on: ubuntu-latest
+    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name
+    steps:
+      - name: Checkout Code
+        uses: actions/checkout@v3
+      - name: Install Node.js
+        uses: actions/setup-node@v3
+        with:
+          node-version: 18
+      - run: npm i --ci
+      - run: npm run lint

(modified file: tests workflow)

@@ -1,4 +1,4 @@
-name: Run basic features tests on CPU with empty SD model
+name: Tests

 on:
   - push
@@ -6,7 +6,9 @@ on:

 jobs:
   test:
+    name: tests on CPU with empty model
     runs-on: ubuntu-latest
+    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name
     steps:
       - name: Checkout Code
         uses: actions/checkout@v3
@@ -17,13 +19,55 @@ jobs:
           cache: pip
           cache-dependency-path: |
             **/requirements*txt
+            launch.py
+      - name: Install test dependencies
+        run: pip install wait-for-it -r requirements-test.txt
+        env:
+          PIP_DISABLE_PIP_VERSION_CHECK: "1"
+          PIP_PROGRESS_BAR: "off"
+      - name: Setup environment
+        run: python launch.py --skip-torch-cuda-test --exit
+        env:
+          PIP_DISABLE_PIP_VERSION_CHECK: "1"
+          PIP_PROGRESS_BAR: "off"
+          TORCH_INDEX_URL: https://download.pytorch.org/whl/cpu
+          WEBUI_LAUNCH_LIVE_OUTPUT: "1"
+          PYTHONUNBUFFERED: "1"
+      - name: Start test server
+        run: >
+          python -m coverage run
+          --data-file=.coverage.server
+          launch.py
+          --skip-prepare-environment
+          --skip-torch-cuda-test
+          --test-server
+          --do-not-download-clip
+          --no-half
+          --disable-opt-split-attention
+          --use-cpu all
+          --api-server-stop
+          2>&1 | tee output.txt &
       - name: Run tests
-        run: python launch.py --tests test --no-half --disable-opt-split-attention --use-cpu all --skip-torch-cuda-test
-      - name: Upload main app stdout-stderr
+        run: |
+          wait-for-it --service 127.0.0.1:7860 -t 600
+          python -m pytest -vv --junitxml=test/results.xml --cov . --cov-report=xml --verify-base-url test
+      - name: Kill test server
+        if: always()
+        run: curl -vv -XPOST http://127.0.0.1:7860/sdapi/v1/server-stop && sleep 10
+      - name: Show coverage
+        run: |
+          python -m coverage combine .coverage*
+          python -m coverage report -i
+          python -m coverage html -i
+      - name: Upload main app output
         uses: actions/upload-artifact@v3
         if: always()
         with:
-          name: stdout-stderr
-          path: |
-            test/stdout.txt
-            test/stderr.txt
+          name: output
+          path: output.txt
+      - name: Upload coverage HTML
+        uses: actions/upload-artifact@v3
+        if: always()
+        with:
+          name: htmlcov
+          path: htmlcov

(new file: workflow warning against PRs targeting master, +19 lines)

@@ -0,0 +1,19 @@
name: Pull requests can't target master branch

"on":
  pull_request:
    types:
      - opened
      - synchronize
      - reopened
    branches:
      - master

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Warn on merge into master
        run: |
          echo -e "::warning::This pull request merges directly into the \"master\" branch; normally development happens on the \"dev\" branch."
          exit 1

.gitignore (6 changed lines)

@@ -32,4 +32,8 @@ notification.mp3
 /extensions
 /test/stdout.txt
 /test/stderr.txt
-/cache.json
+/cache.json*
+/config_states/
+/node_modules
+/package-lock.json
+/.coverage*

CHANGELOG.md (new file, +352 lines)

@@ -0,0 +1,352 @@
## 1.5.1
### Minor:
* support parsing text encoder blocks in some new LoRAs
* delete scale checker script due to user demand
### Extensions and API:
* add postprocess_batch_list script callback
### Bug Fixes:
* fix TI training for SD1
* fix reload altclip model error
* prepend the pythonpath instead of overriding it
* fix typo in SD_WEBUI_RESTARTING
* if txt2img/img2img raises an exception, finally call state.end()
* fix composable diffusion weight parsing
* restyle Startup profile for black users
* fix webui not launching with --nowebui
* catch exception for non git extensions
* fix some options missing from /sdapi/v1/options
* fix for extension update status always saying "unknown"
* fix display of extra network cards that have `<>` in the name
* update lora extension to work with python 3.8
## 1.5.0
### Features:
* SD XL support
* user metadata system for custom networks
* extended Lora metadata editor: set activation text, default weight, view tags, training info
* Lora extension rework to include other types of networks (all that were previously handled by LyCORIS extension)
* show github stars for extensions
* img2img batch mode can read extra stuff from png info
* img2img batch works with subdirectories
* hotkeys to move prompt elements: alt+left/right
* restyle time taken/VRAM display
* add textual inversion hashes to infotext
* optimization: cache git extension repo information
* move generate button next to the generated picture for mobile clients
* hide cards for networks of incompatible Stable Diffusion version in Lora extra networks interface
* skip installing packages with pip if they all are already installed - startup speedup of about 2 seconds
### Minor:
* checkbox to check/uncheck all extensions in the Installed tab
* add gradio user to infotext and to filename patterns
* allow gif for extra network previews
* add options to change colors in grid
* use natural sort for items in extra networks
* Mac: use empty_cache() from torch 2 to clear VRAM
* added automatic support for installing the right libraries for Navi3 (AMD)
* add option SWIN_torch_compile to accelerate SwinIR upscale
* suppress printing TI embedding info at start to console by default
* speedup extra networks listing
* added `[none]` filename token.
* removed thumbs extra networks view mode (use settings tab to change width/height/scale to get thumbs)
* add always_discard_next_to_last_sigma option to XYZ plot
* automatically switch to 32-bit float VAE if the generated picture has NaNs without the need for `--no-half-vae` commandline flag.
### Extensions and API:
* api endpoints: /sdapi/v1/server-kill, /sdapi/v1/server-restart, /sdapi/v1/server-stop
* allow Script to have custom metaclass
* add model exists status check /sdapi/v1/options
* rename --add-stop-route to --api-server-stop
* add `before_hr` script callback
* add callback `after_extra_networks_activate`
* disable rich exception output in console for API by default, use WEBUI_RICH_EXCEPTIONS env var to enable
* return http 404 when thumb file not found
* allow replacing extensions index with environment variable
### Bug Fixes:
* fix for catch errors when retrieving extension index #11290
* fix very slow loading speed of .safetensors files when reading from network drives
* API cache cleanup
* fix UnicodeEncodeError when writing to file CLIP Interrogator batch mode
* fix warning of 'has_mps' deprecated from PyTorch
* fix problem with extra network saving images as previews losing generation info
* fix throwing exception when trying to resize image with I;16 mode
* fix for #11534: canvas zoom and pan extension hijacking shortcut keys
* fixed launch script to be runnable from any directory
* don't add "Seed Resize: -1x-1" to API image metadata
* correctly remove end parenthesis with ctrl+up/down
* fixing --subpath on newer gradio version
* fix: check fill size non-zero when resizing (fixes #11425)
* use submit and blur for quick settings textbox
* save img2img batch with images.save_image()
* prevent running preload.py for disabled extensions
* fix: previously, model name was added together with directory name to infotext and to [model_name] filename pattern; directory name is now not included
## 1.4.1
### Bug Fixes:
* add queue lock for refresh-checkpoints
## 1.4.0
### Features:
* zoom controls for inpainting
* run basic torch calculation at startup in parallel to reduce the performance impact of first generation
* option to pad prompt/neg prompt to be same length
* remove taming_transformers dependency
* custom k-diffusion scheduler settings
* add an option to show selected settings in main txt2img/img2img UI
* sysinfo tab in settings
* infer styles from prompts when pasting params into the UI
* an option to control the behavior of the above
### Minor:
* bump Gradio to 3.32.0
* bump xformers to 0.0.20
* Add option to disable token counters
* tooltip fixes & optimizations
* make it possible to configure filename for the zip download
* `[vae_filename]` pattern for filenames
* Revert discarding penultimate sigma for DPM-Solver++(2M) SDE
* change UI reorder setting to multiselect
* read version info from CHANGELOG.md if git version info is not available
* link footer API to Wiki when API is not active
* persistent conds cache (opt-in optimization)
### Extensions:
* After installing extensions, webui properly restarts the process rather than reloading the UI
* Added VAE listing to the web API via /sdapi/v1/sd-vae
* custom unet support
* Add onAfterUiUpdate callback
* refactor EmbeddingDatabase.register_embedding() to allow unregistering
* add before_process callback for scripts (see the sketch after this list)
* add ability for alwayson scripts to specify section and let user reorder those sections
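A minimal always-on script sketch using the new `before_process` hook; the details beyond the callback name follow the usual `modules.scripts` interface and are assumptions here:

```python
from modules import scripts

class LogBeforeProcess(scripts.Script):
    def title(self):
        return "Log before process"

    def show(self, is_img2img):
        # AlwaysVisible turns this into an alwayson script.
        return scripts.AlwaysVisible

    def before_process(self, p, *args):
        # Runs before processing begins; p carries prompt, seed, etc.
        print(f"About to process: {p.prompt!r}")
```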
### Bug Fixes:
* Fix dragging text to prompt
* fix incorrect quoting for infotext values with colon in them
* fix "hires. fix" prompt sharing same labels with txt2img_prompt
* Fix s_min_uncond default type int
* Fix for #10643 (Inpainting mask sometimes not working)
* fix bad styling for thumbs view in extra networks #10639
* fix for empty list of optimizations #10605
* small fixes to prepare_tcmalloc for Debian/Ubuntu compatibility
* fix --ui-debug-mode exit
* patch GitPython to not use leaky persistent processes
* fix duplicate Cross attention optimization after UI reload
* torch.cuda.is_available() check for SdOptimizationXformers
* fix hires fix using wrong conds in second pass if using Loras.
* handle exception when parsing generation parameters from png info
* fix upcast attention dtype error
* forcing Torch Version to 1.13.1 for RX 5000 series GPUs
* split mask blur into X and Y components, patch Outpainting MK2 accordingly
* don't die when a LoRA is a broken symlink
* allow activation of Generate Forever during generation
## 1.3.2
### Bug Fixes:
* fix files served out of tmp directory even if they are saved to disk
* fix postprocessing overwriting parameters
## 1.3.1
### Features:
* revert default cross attention optimization to Doggettx
### Bug Fixes:
* fix bug where a LoRA selected in the sd_lora dropdown was not applied
* fix png info always added even if setting is not enabled
* fix some fields not applying in xyz plot
* fix "hires. fix" prompt sharing same labels with txt2img_prompt
* fix lora hashes not being added properly to infotext if there is only one lora
* fix --use-cpu failing to work properly at startup
* make --disable-opt-split-attention command line option work again
## 1.3.0
### Features:
* add UI to edit defaults
* token merging (via dbolya/tomesd)
* settings tab rework: add a lot of additional explanations and links
* load extensions' Git metadata in parallel to loading the main program to save a ton of time during startup
* update extensions table: show branch, show date in separate column, and show version from tags if available
* TAESD - another option for cheap live previews
* allow choosing sampler and prompts for second pass of hires fix - hidden by default, enabled in settings
* calculate hashes for Lora
* add lora hashes to infotext
* when pasting infotext, use infotext's lora hashes to find local loras for `<lora:xxx:1>` entries whose hashes match loras the user has
* select cross attention optimization from UI
### Minor:
* bump Gradio to 3.31.0
* bump PyTorch to 2.0.1 for macOS and Linux AMD
* allow setting defaults for elements in extensions' tabs
* allow selecting file type for live previews
* show "Loading..." for extra networks when displaying for the first time
* suppress ENSD infotext for samplers that don't use it
* clientside optimizations
* add options to show/hide hidden files and dirs in extra networks, and to not list models/files in hidden directories
* allow whitespace in styles.csv
* add option to reorder tabs
* move some functionality (swap resolution and set seed to -1) to client
* option to specify editor height for img2img
* button to copy image resolution into img2img width/height sliders
* switch from pyngrok to ngrok-py
* lazy-load images in extra networks UI
* set "Navigate image viewer with gamepad" option to false by default, by request
* change upscalers to download models into user-specified directory (from commandline args) rather than the default models/<...>
* allow hiding buttons in ui-config.json
### Extensions:
* add /sdapi/v1/script-info api
* use Ruff to lint Python code
* use ESlint to lint Javascript code
* add/modify CFG callbacks for Self-Attention Guidance extension
* add command and endpoint for graceful server stopping
* add some locals (prompts/seeds/etc) from processing function into the Processing class as fields
* rework quoting for infotext items that have commas in them to use JSON (should be backwards compatible except for cases where it didn't work previously; see the sketch after this list)
* add /sdapi/v1/refresh-loras api checkpoint post request
* tests overhaul
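The JSON quoting rework above can be pictured like this: values that would break the comma-separated infotext layout are emitted as JSON strings. A simplified sketch, not the webui's exact implementation:

```python
import json

def quote(value):
    # Plain values pass through; anything with a comma, colon or newline
    # is JSON-quoted so it survives the "key: value, key: value" layout.
    text = str(value)
    if not any(ch in text for ch in (",", ":", "\n")):
        return text
    return json.dumps(text, ensure_ascii=False)

print(quote("simple"))        # simple
print(quote("a, b: tricky"))  # "a, b: tricky"
```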
### Bug Fixes:
* fix an issue preventing the program from starting if the user specifies a bad Gradio theme
* fix broken prompts from file script
* fix symlink scanning for extra networks
* fix --data-dir ignored when launching via webui-user.bat COMMANDLINE_ARGS
* allow web UI to be run fully offline
* fix inability to run with --freeze-settings
* fix inability to merge checkpoint without adding metadata
* fix extra networks' save preview image not adding infotext for jpeg/webm
* remove blinking effect from text in hires fix and scale resolution preview
* make links to `http://<...>.git` extensions work in the extension tab
* fix bug with webui hanging at startup due to hanging git process
## 1.2.1
### Features:
* add an option to always refer to LoRA by filenames
### Bug Fixes:
* never refer to LoRA by an alias if multiple LoRAs have same alias or the alias is called none
* fix upscalers disappearing after the user reloads UI
* allow bf16 in safe unpickler (resolves problems with loading some LoRAs)
* allow web UI to be run fully offline
* fix localizations not working
* fix error for LoRAs: `'LatentDiffusion' object has no attribute 'lora_layer_mapping'`
## 1.2.0
### Features:
* do not wait for Stable Diffusion model to load at startup
* add filename patterns: `[denoising]`
* directory hiding for extra networks: dirs starting with `.` will hide their cards on extra network tabs unless specifically searched for
* LoRA: for the `<...>` text in prompt, use name of LoRA that is in the metadata of the file, if present, instead of filename (both can be used to activate LoRA)
* LoRA: read infotext params from kohya-ss's extension parameters if they are present and if his extension is not active
* LoRA: fix some LoRAs not working (ones that have 3x3 convolution layer)
* LoRA: add an option to use old method of applying LoRAs (producing same results as with kohya-ss)
* add version to infotext, footer and console output when starting
* add links to wiki for filename pattern settings
* add extended info for quicksettings setting and use multiselect input instead of a text field
### Minor:
* bump Gradio to 3.29.0
* bump PyTorch to 2.0.1
* `--subpath` option for gradio for use with reverse proxy
* Linux/macOS: use existing virtualenv if already active (the VIRTUAL_ENV environment variable)
* do not apply localizations if there are none (possible frontend optimization)
* add extra `None` option for VAE in XYZ plot
* print error to console when batch processing in img2img fails
* create HTML for extra network pages only on demand
* allow directories starting with `.` to still list their models for LoRA, checkpoints, etc
* put infotext options into their own category in settings tab
* do not show licenses page when user selects Show all pages in settings
### Extensions:
* tooltip localization support
* add API method to get LoRA models with prompt
### Bug Fixes:
* re-add `/docs` endpoint
* fix gamepad navigation
* make the lightbox fullscreen image function properly
* fix squished thumbnails in extras tab
* keep "search" filter for extra networks when user refreshes the tab (previously it showed everthing after you refreshed)
* fix webui showing the same image if you configure the generation to always save results into same file
* fix bug with upscalers not working properly
* fix MPS on PyTorch 2.0.1, Intel Macs
* make it so that custom context menu from contextMenu.js only disappears after user's click, ignoring non-user click events
* prevent Reload UI button/link from reloading the page when it's not yet ready
* fix prompts from file script failing to read contents from a drag/drop file
## 1.1.1
### Bug Fixes:
* fix an error that prevents running webui on PyTorch<2.0 without --disable-safe-unpickle
## 1.1.0
### Features:
* switch to PyTorch 2.0.0 (except for AMD GPUs)
* visual improvements to custom code scripts
* add filename patterns: `[clip_skip]`, `[hasprompt<>]`, `[batch_number]`, `[generation_number]`
* add support for saving init images in img2img, and record their hashes in infotext for reproducibility
* automatically select current word when adjusting weight with ctrl+up/down
* add dropdowns for X/Y/Z plot
* add setting: Stable Diffusion/Random number generator source: makes it possible to make images generated from a given manual seed consistent across different GPUs (see the sketch after this list)
* support Gradio's theme API
* use TCMalloc on Linux by default; possible fix for memory leaks
* add optimization option to remove negative conditioning at low sigma values #9177
* embed model merge metadata in .safetensors file
* extension settings backup/restore feature #9169
* add "resize by" and "resize to" tabs to img2img
* add option "keep original size" to textual inversion images preprocess
* image viewer scrolling via analog stick
* button to restore progress after a lost session / tab reload
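The idea behind the random-number-generator-source setting can be sketched as drawing the initial noise on the CPU from the manual seed and only then moving it to the target device, so the same seed produces the same latents on any GPU. A simplified illustration, not the exact webui code:

```python
import torch

def seeded_noise(seed, shape, device="cpu"):
    # Sampling on the CPU makes the noise independent of the GPU model,
    # so a given manual seed reproduces the same image everywhere.
    generator = torch.Generator("cpu").manual_seed(seed)
    return torch.randn(shape, generator=generator).to(device)

noise = seeded_noise(42, (1, 4, 64, 64))
```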
### Minor:
* bump Gradio to 3.28.1
* change "scale to" to sliders in Extras tab
* add labels to tool buttons to make it possible to hide them
* add tiled inference support for ScuNET
* add branch support for extension installation
* change Linux installation script to install into current directory rather than `/home/username`
* sort textual inversion embeddings by name (case-insensitive)
* allow styles.csv to be symlinked or mounted in docker
* remove the "do not add watermark to images" option
* make selected tab configurable with UI config
* make the extra networks UI fixed height and scrollable
* add `disable_tls_verify` arg for use with self-signed certs
### Extensions:
* add reload callback
* add `is_hr_pass` field for processing
### Bug Fixes:
* fix broken batch image processing on 'Extras/Batch Process' tab
* add "None" option to extra networks dropdowns
* fix FileExistsError for CLIP Interrogator
* fix /sdapi/v1/txt2img endpoint not working on Linux #9319
* fix disappearing live previews and progressbar during slow tasks
* fix fullscreen image view not working properly in some cases
* prevent alwayson_scripts args param from resizing the script_arg list when they are inserted into it
* fix prompt schedule for second order samplers
* fix image mask/composite for weird resolutions #9628
* use correct images for previews when using AND (see #9491)
* one broken image in img2img batch won't stop all processing
* fix image orientation bug in train/preprocess
* fix Ngrok recreating tunnels every reload
* fix `--realesrgan-models-path` and `--ldsr-models-path` not working
* fix `--skip-install` not working
* use SAMPLE file format in Outpainting Mk2 & Poor Man's Outpainting
* do not fail all LoRAs if some have failed to load when making a picture
## 1.0.0
* everything

View File

@@ -2,7 +2,7 @@
 # if you were managing a localization and were removed from this file, this is because
 # the intended way to do localizations now is via extensions. See:
-# https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions
+# https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions
 # Make a repo with your localization and since you are still listed as a collaborator
 # you can add it to the wiki page yourself. This change is because some people complained
 # the git commit log is cluttered with things unrelated to almost everyone and

View File

@@ -4,7 +4,7 @@ A browser interface based on Gradio library for Stable Diffusion.
 ![](screenshot.png)
 ## Features
-[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
+[Detailed feature showcase with images](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
 - Original txt2img and img2img modes
 - One click install and run script (but you still must install python and git)
 - Outpainting
@@ -15,7 +15,7 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Attention, specify parts of text that the model should pay more attention to
 - a man in a `((tuxedo))` - will pay more attention to tuxedo
 - a man in a `(tuxedo:1.21)` - alternative syntax
-- select text and press `Ctrl+Up` or `Ctrl+Down` to automatically adjust attention to selected text (code contributed by anonymous user)
+- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)
 - Loopback, run img2img processing multiple times
 - X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
 - Textual Inversion
@@ -28,7 +28,7 @@ A browser interface based on Gradio library for Stable Diffusion.
 - CodeFormer, face restoration tool as an alternative to GFPGAN
 - RealESRGAN, neural network upscaler
 - ESRGAN, neural network upscaler with a lot of third party models
-- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
+- SwinIR and Swin2SR ([see here](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
 - LDSR, Latent diffusion super resolution upscaling
 - Resizing aspect ratio options
 - Sampling method selection
@@ -63,14 +63,14 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
 - Reloading checkpoints on the fly
 - Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
-- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
+- [Custom scripts](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
 - [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
 - separate prompts using uppercase `AND`
 - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
 - No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
 - DeepDanbooru integration, creates danbooru style tags for anime prompts
-- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
+- [xformers](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
-- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
+- via extension: [History tab](https://ghproxy.com/https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
 - Generate forever option
 - Training tab
 - hypernetworks and embeddings options
@@ -82,10 +82,10 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Can select to load a different VAE from settings screen
 - Estimated completion time in progress bar
 - API
-- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
+- Support for dedicated [inpainting model](https://ghproxy.com/https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
-- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
+- via extension: [Aesthetic Gradients](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://ghproxy.com/https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://ghproxy.com/https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
-- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
+- [Stable Diffusion 2.0](https://ghproxy.com/https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
-- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
+- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
 - Now without any bad letters!
 - Load checkpoints in safetensors format
 - Eased resolution restriction: generated image's domension must be a multiple of 8 rather than 64
@@ -93,16 +93,22 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Reorder elements in the UI from settings screen
 ## Installation and Running
-Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
+Make sure the required [dependencies](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
 Alternatively, use online services (like Google Colab):
-- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
+- [List of Online Services](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
+### Installation on Windows 10/11 with NVidia-GPUs using release package
+1. Download `sd.webui.zip` from [v1.0.0-pre](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract it's contents.
+2. Run `update.bat`.
+3. Run `run.bat`.
+> For more details see [Install-and-Run-on-NVidia-GPUs](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
 ### Automatic Installation on Windows
-1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH".
+1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (Newer version of Python does not support torch), checking "Add Python to PATH".
 2. Install [git](https://git-scm.com/download/win).
-3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
+3. Download the stable-diffusion-webui repository, for example by running `git clone https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
 4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
 ### Automatic Installation on Linux
@@ -115,47 +121,53 @@ sudo dnf install wget git python3
 # Arch-based:
 sudo pacman -S wget git python3
 ```
-2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run:
+2. Navigate to the directory you would like the webui to be installed and execute the following command:
 ```bash
 bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
 ```
 3. Run `webui.sh`.
+4. Check `webui-user.sh` for options.
 ### Installation on Apple Silicon
-Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
+Find the instructions [here](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
 ## Contributing
-Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
+Here's how to add code to this repo: [Contributing](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
 ## Documentation
-The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
+The documentation was moved from this README over to the project's [wiki](https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
+For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://ghproxy.com/https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
 ## Credits
 Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
-- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
+- Stable Diffusion - https://ghproxy.com/https://github.com/CompVis/stable-diffusion, https://ghproxy.com/https://github.com/CompVis/taming-transformers
-- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
+- k-diffusion - https://ghproxy.com/https://github.com/crowsonkb/k-diffusion.git
-- GFPGAN - https://github.com/TencentARC/GFPGAN.git
+- GFPGAN - https://ghproxy.com/https://github.com/TencentARC/GFPGAN.git
-- CodeFormer - https://github.com/sczhou/CodeFormer
+- CodeFormer - https://ghproxy.com/https://github.com/sczhou/CodeFormer
-- ESRGAN - https://github.com/xinntao/ESRGAN
+- ESRGAN - https://ghproxy.com/https://github.com/xinntao/ESRGAN
-- SwinIR - https://github.com/JingyunLiang/SwinIR
+- SwinIR - https://ghproxy.com/https://github.com/JingyunLiang/SwinIR
-- Swin2SR - https://github.com/mv-lab/swin2sr
+- Swin2SR - https://ghproxy.com/https://github.com/mv-lab/swin2sr
-- LDSR - https://github.com/Hafiidz/latent-diffusion
+- LDSR - https://ghproxy.com/https://github.com/Hafiidz/latent-diffusion
-- MiDaS - https://github.com/isl-org/MiDaS
+- MiDaS - https://ghproxy.com/https://github.com/isl-org/MiDaS
-- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
+- Ideas for optimizations - https://ghproxy.com/https://github.com/basujindal/stable-diffusion
-- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
+- Cross Attention layer optimization - Doggettx - https://ghproxy.com/https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
-- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
+- Cross Attention layer optimization - InvokeAI, lstein - https://ghproxy.com/https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
-- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
+- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://ghproxy.com/https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://ghproxy.com/https://github.com/AminRezaei0x443/memory-efficient-attention)
-- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
+- Textual Inversion - Rinon Gal - https://ghproxy.com/https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
-- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
+- Idea for SD upscale - https://ghproxy.com/https://github.com/jquesnelle/txt2imghd
-- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
+- Noise generation for outpainting mk2 - https://ghproxy.com/https://github.com/parlance-zz/g-diffuser-bot
-- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
+- CLIP interrogator idea and borrowing some code - https://ghproxy.com/https://github.com/pharmapsychotic/clip-interrogator
-- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
+- Idea for Composable Diffusion - https://ghproxy.com/https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
-- xformers - https://github.com/facebookresearch/xformers
+- xformers - https://ghproxy.com/https://github.com/facebookresearch/xformers
-- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
+- DeepDanbooru - interrogator for anime diffusers https://ghproxy.com/https://github.com/KichangKim/DeepDanbooru
-- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
+- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://ghproxy.com/https://github.com/Birch-san/diffusers-play/tree/92feee6)
-- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
+- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://ghproxy.com/https://github.com/timothybrooks/instruct-pix2pix
 - Security advice - RyotaK
-- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
+- UniPC sampler - Wenliang Zhao - https://ghproxy.com/https://github.com/wl-zhao/UniPC
+- TAESD - Ollin Boer Bohan - https://ghproxy.com/https://github.com/madebyollin/taesd
+- LyCORIS - KohakuBlueleaf
 - Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
 - (You)

View File

@@ -1,4 +1,4 @@
-# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
+# File modified by authors of InstructPix2Pix from original (https://ghproxy.com/https://github.com/CompVis/stable-diffusion).
 # See more details in LICENSE.
 model:

View File

@@ -4,8 +4,8 @@ channels:
 - defaults
 dependencies:
 - python=3.10
-- pip=22.2.2
+- pip=23.0
-- cudatoolkit=11.3
+- cudatoolkit=11.8
-- pytorch=1.12.1
+- pytorch=2.0
-- torchvision=0.13.1
+- torchvision=0.15
-- numpy=1.23.1
+- numpy=1.23

View File

@@ -12,7 +12,7 @@ import safetensors.torch
 from ldm.models.diffusion.ddim import DDIMSampler
 from ldm.util import instantiate_from_config, ismap
-from modules import shared, sd_hijack
+from modules import shared, sd_hijack, devices
 cached_ldsr_model: torch.nn.Module = None
@@ -88,7 +88,7 @@ class LDSR:
 x_t = None
 logs = None
-for n in range(n_runs):
+for _ in range(n_runs):
 if custom_shape is not None:
 x_t = torch.randn(1, custom_shape[1], custom_shape[2], custom_shape[3]).to(model.device)
 x_t = repeat(x_t, '1 c h w -> b c h w', b=custom_shape[0])
@@ -110,11 +110,9 @@
 diffusion_steps = int(steps)
 eta = 1.0
-down_sample_method = 'Lanczos'
 gc.collect()
-if torch.cuda.is_available:
-torch.cuda.empty_cache()
+devices.torch_gc()
 im_og = image
 width_og, height_og = im_og.size
@@ -151,14 +149,13 @@
 del model
 gc.collect()
-if torch.cuda.is_available:
-torch.cuda.empty_cache()
+devices.torch_gc()
 return a
 def get_cond(selected_path):
-example = dict()
+example = {}
 up_f = 4
 c = selected_path.convert('RGB')
 c = torch.unsqueeze(torchvision.transforms.ToTensor()(c), 0)
@@ -196,7 +193,7 @@ def convsample_ddim(model, cond, steps, shape, eta=1.0, callback=None, normals_s
 @torch.no_grad()
 def make_convolutional_sample(batch, model, custom_steps=None, eta=1.0, quantize_x0=False, custom_shape=None, temperature=1., noise_dropout=0., corrector=None,
 corrector_kwargs=None, x_T=None, ddim_use_x0_pred=False):
-log = dict()
+log = {}
 z, c, x, xrec, xc = model.get_input(batch, model.first_stage_key,
 return_first_stage_outputs=True,
@@ -244,7 +241,7 @@ def make_convolutional_sample(batch, model, custom_steps=None, eta=1.0, quantize
 x_sample_noquant = model.decode_first_stage(sample, force_not_quantize=True)
 log["sample_noquant"] = x_sample_noquant
 log["sample_diff"] = torch.abs(x_sample_noquant - x_sample)
-except:
+except Exception:
 pass
 log["sample"] = x_sample

View File

@@ -1,13 +1,11 @@
 import os
-import sys
-import traceback
-from basicsr.utils.download_util import load_file_from_url
+from modules.modelloader import load_file_from_url
 from modules.upscaler import Upscaler, UpscalerData
 from ldsr_model_arch import LDSR
-from modules import shared, script_callbacks
+from modules import shared, script_callbacks, errors
-import sd_hijack_autoencoder, sd_hijack_ddpm_v1
+import sd_hijack_autoencoder  # noqa: F401
+import sd_hijack_ddpm_v1  # noqa: F401
 class UpscalerLDSR(Upscaler):
@@ -25,35 +23,36 @@ class UpscalerLDSR(Upscaler):
 yaml_path = os.path.join(self.model_path, "project.yaml")
 old_model_path = os.path.join(self.model_path, "model.pth")
 new_model_path = os.path.join(self.model_path, "model.ckpt")
-safetensors_model_path = os.path.join(self.model_path, "model.safetensors")
+local_model_paths = self.find_models(ext_filter=[".ckpt", ".safetensors"])
+local_ckpt_path = next(iter([local_model for local_model in local_model_paths if local_model.endswith("model.ckpt")]), None)
+local_safetensors_path = next(iter([local_model for local_model in local_model_paths if local_model.endswith("model.safetensors")]), None)
+local_yaml_path = next(iter([local_model for local_model in local_model_paths if local_model.endswith("project.yaml")]), None)
 if os.path.exists(yaml_path):
 statinfo = os.stat(yaml_path)
 if statinfo.st_size >= 10485760:
 print("Removing invalid LDSR YAML file.")
 os.remove(yaml_path)
 if os.path.exists(old_model_path):
 print("Renaming model from model.pth to model.ckpt")
 os.rename(old_model_path, new_model_path)
-try:
-if os.path.exists(safetensors_model_path):
-model = safetensors_model_path
-else:
-model = load_file_from_url(url=self.model_url, model_dir=self.model_path,
-file_name="model.ckpt", progress=True)
-yaml = load_file_from_url(url=self.yaml_url, model_dir=self.model_path,
-file_name="project.yaml", progress=True)
+if local_safetensors_path is not None and os.path.exists(local_safetensors_path):
+model = local_safetensors_path
+else:
+model = local_ckpt_path or load_file_from_url(self.model_url, model_dir=self.model_download_path, file_name="model.ckpt")
+yaml = local_yaml_path or load_file_from_url(self.yaml_url, model_dir=self.model_download_path, file_name="project.yaml")
 return LDSR(model, yaml)
-except Exception:
-print("Error importing LDSR:", file=sys.stderr)
-print(traceback.format_exc(), file=sys.stderr)
-return None
 def do_upscale(self, img, path):
+try:
 ldsr = self.load_model(path)
-if ldsr is None:
-print("NO LDSR!")
+except Exception:
+errors.report(f"Failed loading LDSR model {path}", exc_info=True)
 return img
 ddim_steps = shared.opts.ldsr_steps
 return ldsr.super_resolution(img, ddim_steps, self.scale)

View File

@@ -1,16 +1,21 @@
 # The content of this file comes from the ldm/models/autoencoder.py file of the compvis/stable-diffusion repo
 # The VQModel & VQModelInterface were subsequently removed from ldm/models/autoencoder.py when we moved to the stability-ai/stablediffusion repo
 # As the LDSR upscaler relies on VQModel & VQModelInterface, the hijack aims to put them back into the ldm.models.autoencoder
+import numpy as np
 import torch
 import pytorch_lightning as pl
 import torch.nn.functional as F
 from contextlib import contextmanager
-from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
+from torch.optim.lr_scheduler import LambdaLR
+from ldm.modules.ema import LitEma
+from vqvae_quantize import VectorQuantizer2 as VectorQuantizer
 from ldm.modules.diffusionmodules.model import Encoder, Decoder
 from ldm.util import instantiate_from_config
 import ldm.models.autoencoder
+from packaging import version
 class VQModel(pl.LightningModule):
 def __init__(self,
@@ -19,7 +24,7 @@ class VQModel(pl.LightningModule):
 n_embed,
 embed_dim,
 ckpt_path=None,
-ignore_keys=[],
+ignore_keys=None,
 image_key="image",
 colorize_nlabels=None,
 monitor=None,
@@ -57,7 +62,7 @@ class VQModel(pl.LightningModule):
 print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
 if ckpt_path is not None:
-self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
+self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys or [])
 self.scheduler_config = scheduler_config
 self.lr_g_factor = lr_g_factor
@@ -76,18 +81,19 @@ class VQModel(pl.LightningModule):
 if context is not None:
 print(f"{context}: Restored training weights")
-def init_from_ckpt(self, path, ignore_keys=list()):
+def init_from_ckpt(self, path, ignore_keys=None):
 sd = torch.load(path, map_location="cpu")["state_dict"]
 keys = list(sd.keys())
 for k in keys:
-for ik in ignore_keys:
+for ik in ignore_keys or []:
 if k.startswith(ik):
 print("Deleting key {} from state_dict.".format(k))
 del sd[k]
 missing, unexpected = self.load_state_dict(sd, strict=False)
 print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
-if len(missing) > 0:
+if missing:
 print(f"Missing Keys: {missing}")
+if unexpected:
 print(f"Unexpected Keys: {unexpected}")
 def on_train_batch_end(self, *args, **kwargs):
@@ -141,7 +147,7 @@ class VQModel(pl.LightningModule):
 return x
 def training_step(self, batch, batch_idx, optimizer_idx):
-# https://github.com/pytorch/pytorch/issues/37142
+# https://ghproxy.com/https://github.com/pytorch/pytorch/issues/37142
 # try not to fool the heuristics
 x = self.get_input(batch, self.image_key)
 xrec, qloss, ind = self(x, return_pred_indices=True)
@@ -165,7 +171,7 @@ class VQModel(pl.LightningModule):
 def validation_step(self, batch, batch_idx):
 log_dict = self._validation_step(batch, batch_idx)
 with self.ema_scope():
-log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema")
+self._validation_step(batch, batch_idx, suffix="_ema")
 return log_dict
 def _validation_step(self, batch, batch_idx, suffix=""):
@@ -232,7 +238,7 @@ class VQModel(pl.LightningModule):
 return self.decoder.conv_out.weight
 def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs):
-log = dict()
+log = {}
 x = self.get_input(batch, self.image_key)
 x = x.to(self.device)
 if only_inputs:
@@ -249,7 +255,8 @@ class VQModel(pl.LightningModule):
 if plot_ema:
 with self.ema_scope():
 xrec_ema, _ = self(x)
-if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema)
+if x.shape[1] > 3:
+xrec_ema = self.to_rgb(xrec_ema)
 log["reconstructions_ema"] = xrec_ema
 return log
@@ -264,7 +271,7 @@
 class VQModelInterface(VQModel):
 def __init__(self, embed_dim, *args, **kwargs):
-super().__init__(embed_dim=embed_dim, *args, **kwargs)
+super().__init__(*args, embed_dim=embed_dim, **kwargs)
 self.embed_dim = embed_dim
 def encode(self, x):
@@ -282,5 +289,5 @@ class VQModelInterface(VQModel):
 dec = self.decoder(quant)
 return dec
-setattr(ldm.models.autoencoder, "VQModel", VQModel)
+ldm.models.autoencoder.VQModel = VQModel
-setattr(ldm.models.autoencoder, "VQModelInterface", VQModelInterface)
+ldm.models.autoencoder.VQModelInterface = VQModelInterface

View File

@@ -48,7 +48,7 @@ class DDPMV1(pl.LightningModule):
 beta_schedule="linear",
 loss_type="l2",
 ckpt_path=None,
-ignore_keys=[],
+ignore_keys=None,
 load_only_unet=False,
 monitor="val/loss",
 use_ema=True,
@@ -100,7 +100,7 @@ class DDPMV1(pl.LightningModule):
 if monitor is not None:
 self.monitor = monitor
 if ckpt_path is not None:
-self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)
+self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys or [], only_model=load_only_unet)
 self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,
 linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
@@ -182,22 +182,22 @@ class DDPMV1(pl.LightningModule):
 if context is not None:
 print(f"{context}: Restored training weights")
-def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
+def init_from_ckpt(self, path, ignore_keys=None, only_model=False):
 sd = torch.load(path, map_location="cpu")
 if "state_dict" in list(sd.keys()):
 sd = sd["state_dict"]
 keys = list(sd.keys())
 for k in keys:
-for ik in ignore_keys:
+for ik in ignore_keys or []:
 if k.startswith(ik):
 print("Deleting key {} from state_dict.".format(k))
 del sd[k]
 missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
 sd, strict=False)
 print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
-if len(missing) > 0:
+if missing:
 print(f"Missing Keys: {missing}")
-if len(unexpected) > 0:
+if unexpected:
 print(f"Unexpected Keys: {unexpected}")
 def q_mean_variance(self, x_start, t):
@@ -375,7 +375,7 @@ class DDPMV1(pl.LightningModule):
 @torch.no_grad()
 def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):
-log = dict()
+log = {}
 x = self.get_input(batch, self.first_stage_key)
 N = min(x.shape[0], N)
 n_row = min(x.shape[0], n_row)
@@ -383,7 +383,7 @@ class DDPMV1(pl.LightningModule):
 log["inputs"] = x
 # get diffusion row
-diffusion_row = list()
+diffusion_row = []
 x_start = x[:n_row]
 for t in range(self.num_timesteps):
@@ -444,13 +444,13 @@ class LatentDiffusionV1(DDPMV1):
 conditioning_key = None
 ckpt_path = kwargs.pop("ckpt_path", None)
 ignore_keys = kwargs.pop("ignore_keys", [])
-super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
+super().__init__(*args, conditioning_key=conditioning_key, **kwargs)
 self.concat_mode = concat_mode
 self.cond_stage_trainable = cond_stage_trainable
 self.cond_stage_key = cond_stage_key
 try:
 self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
-except:
+except Exception:
 self.num_downs = 0
 if not scale_by_std:
 self.scale_factor = scale_factor
@@ -877,16 +877,6 @@ class LatentDiffusionV1(DDPMV1):
 c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
 return self.p_losses(x, c, t, *args, **kwargs)
-def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset
-def rescale_bbox(bbox):
-x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2])
-y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3])
-w = min(bbox[2] / crop_coordinates[2], 1 - x0)
-h = min(bbox[3] / crop_coordinates[3], 1 - y0)
-return x0, y0, w, h
-return [rescale_bbox(b) for b in bboxes]
 def apply_model(self, x_noisy, t, cond, return_ids=False):
 if isinstance(cond, dict):
@@ -1126,7 +1116,7 @@ class LatentDiffusionV1(DDPMV1):
 if cond is not None:
 if isinstance(cond, dict):
 cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
-list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
+[x[:batch_size] for x in cond[key]] for key in cond}
 else:
 cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
@@ -1157,8 +1147,10 @@ class LatentDiffusionV1(DDPMV1):
 if i % log_every_t == 0 or i == timesteps - 1:
 intermediates.append(x0_partial)
-if callback: callback(i)
-if img_callback: img_callback(img, i)
+if callback:
+callback(i)
+if img_callback:
+img_callback(img, i)
 return img, intermediates
 @torch.no_grad()
@@ -1205,8 +1197,10 @@ class LatentDiffusionV1(DDPMV1):
 if i % log_every_t == 0 or i == timesteps - 1:
 intermediates.append(img)
-if callback: callback(i)
-if img_callback: img_callback(img, i)
+if callback:
+callback(i)
+if img_callback:
+img_callback(img, i)
 if return_intermediates:
 return img, intermediates
@@ -1221,7 +1215,7 @@ class LatentDiffusionV1(DDPMV1):
 if cond is not None:
 if isinstance(cond, dict):
 cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
-list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
+[x[:batch_size] for x in cond[key]] for key in cond}
 else:
 cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
 return self.p_sample_loop(cond,
@@ -1253,7 +1247,7 @@ class LatentDiffusionV1(DDPMV1):
 use_ddim = ddim_steps is not None
-log = dict()
+log = {}
 z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
 return_first_stage_outputs=True,
 force_c_encode=True,
@@ -1280,7 +1274,7 @@ class LatentDiffusionV1(DDPMV1):
 if plot_diffusion_rows:
 # get diffusion row
-diffusion_row = list()
+diffusion_row = []
 z_start = z[:n_row]
 for t in range(self.num_timesteps):
 if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
@@ -1322,7 +1316,7 @@ class LatentDiffusionV1(DDPMV1):
 if inpaint:
 # make a simple center square
-b, h, w = z.shape[0], z.shape[2], z.shape[3]
+h, w = z.shape[2], z.shape[3]
 mask = torch.ones(N, h, w).to(self.device)
 # zeros will be filled in
 mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
@@ -1424,10 +1418,10 @@ class Layout2ImgDiffusionV1(LatentDiffusionV1):
 # TODO: move all layout-specific hacks to this class
 def __init__(self, cond_stage_key, *args, **kwargs):
 assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"'
-super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs)
+super().__init__(*args, cond_stage_key=cond_stage_key, **kwargs)
 def log_images(self, batch, N=8, *args, **kwargs):
-logs = super().log_images(batch=batch, N=N, *args, **kwargs)
+logs = super().log_images(*args, batch=batch, N=N, **kwargs)
 key = 'train' if self.training else 'validation'
 dset = self.trainer.datamodule.datasets[key]
@@ -1443,7 +1437,7 @@ class Layout2ImgDiffusionV1(LatentDiffusionV1):
 logs['bbox_image'] = cond_img
 return logs
-setattr(ldm.models.diffusion.ddpm, "DDPMV1", DDPMV1)
+ldm.models.diffusion.ddpm.DDPMV1 = DDPMV1
-setattr(ldm.models.diffusion.ddpm, "LatentDiffusionV1", LatentDiffusionV1)
+ldm.models.diffusion.ddpm.LatentDiffusionV1 = LatentDiffusionV1
-setattr(ldm.models.diffusion.ddpm, "DiffusionWrapperV1", DiffusionWrapperV1)
+ldm.models.diffusion.ddpm.DiffusionWrapperV1 = DiffusionWrapperV1
-setattr(ldm.models.diffusion.ddpm, "Layout2ImgDiffusionV1", Layout2ImgDiffusionV1)
+ldm.models.diffusion.ddpm.Layout2ImgDiffusionV1 = Layout2ImgDiffusionV1

View File

@@ -0,0 +1,147 @@
# Vendored from https://raw.githubusercontent.com/CompVis/taming-transformers/24268930bf1dce879235a7fddd0b2355b84d7ea6/taming/modules/vqvae/quantize.py,
# where the license is as follows:
#
# Copyright (c) 2020 Patrick Esser and Robin Rombach and Björn Ommer
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
# OR OTHER DEALINGS IN THE SOFTWARE./
import torch
import torch.nn as nn
import numpy as np
from einops import rearrange
class VectorQuantizer2(nn.Module):
"""
Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly
avoids costly matrix multiplications and allows for post-hoc remapping of indices.
"""
# NOTE: due to a bug the beta term was applied to the wrong term. for
# backwards compatibility we use the buggy version by default, but you can
# specify legacy=False to fix it.
def __init__(self, n_e, e_dim, beta, remap=None, unknown_index="random",
sane_index_shape=False, legacy=True):
super().__init__()
self.n_e = n_e
self.e_dim = e_dim
self.beta = beta
self.legacy = legacy
self.embedding = nn.Embedding(self.n_e, self.e_dim)
self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e)
self.remap = remap
if self.remap is not None:
self.register_buffer("used", torch.tensor(np.load(self.remap)))
self.re_embed = self.used.shape[0]
self.unknown_index = unknown_index # "random" or "extra" or integer
if self.unknown_index == "extra":
self.unknown_index = self.re_embed
self.re_embed = self.re_embed + 1
print(f"Remapping {self.n_e} indices to {self.re_embed} indices. "
f"Using {self.unknown_index} for unknown indices.")
else:
self.re_embed = n_e
self.sane_index_shape = sane_index_shape
def remap_to_used(self, inds):
ishape = inds.shape
assert len(ishape) > 1
inds = inds.reshape(ishape[0], -1)
used = self.used.to(inds)
match = (inds[:, :, None] == used[None, None, ...]).long()
new = match.argmax(-1)
unknown = match.sum(2) < 1
if self.unknown_index == "random":
new[unknown] = torch.randint(0, self.re_embed, size=new[unknown].shape).to(device=new.device)
else:
new[unknown] = self.unknown_index
return new.reshape(ishape)
def unmap_to_all(self, inds):
ishape = inds.shape
assert len(ishape) > 1
inds = inds.reshape(ishape[0], -1)
used = self.used.to(inds)
if self.re_embed > self.used.shape[0]: # extra token
inds[inds >= self.used.shape[0]] = 0 # simply set to zero
back = torch.gather(used[None, :][inds.shape[0] * [0], :], 1, inds)
return back.reshape(ishape)
def forward(self, z, temp=None, rescale_logits=False, return_logits=False):
assert temp is None or temp == 1.0, "Only for interface compatible with Gumbel"
assert rescale_logits is False, "Only for interface compatible with Gumbel"
assert return_logits is False, "Only for interface compatible with Gumbel"
# reshape z -> (batch, height, width, channel) and flatten
z = rearrange(z, 'b c h w -> b h w c').contiguous()
z_flattened = z.view(-1, self.e_dim)
# distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z
d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \
torch.sum(self.embedding.weight ** 2, dim=1) - 2 * \
torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n'))
min_encoding_indices = torch.argmin(d, dim=1)
z_q = self.embedding(min_encoding_indices).view(z.shape)
perplexity = None
min_encodings = None
# compute loss for embedding
if not self.legacy:
loss = self.beta * torch.mean((z_q.detach() - z) ** 2) + \
torch.mean((z_q - z.detach()) ** 2)
else:
loss = torch.mean((z_q.detach() - z) ** 2) + self.beta * \
torch.mean((z_q - z.detach()) ** 2)
# preserve gradients
z_q = z + (z_q - z).detach()
# reshape back to match original input shape
z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous()
if self.remap is not None:
min_encoding_indices = min_encoding_indices.reshape(z.shape[0], -1) # add batch axis
min_encoding_indices = self.remap_to_used(min_encoding_indices)
min_encoding_indices = min_encoding_indices.reshape(-1, 1) # flatten
if self.sane_index_shape:
min_encoding_indices = min_encoding_indices.reshape(
z_q.shape[0], z_q.shape[2], z_q.shape[3])
return z_q, loss, (perplexity, min_encodings, min_encoding_indices)
def get_codebook_entry(self, indices, shape):
# shape specifying (batch, height, width, channel)
if self.remap is not None:
indices = indices.reshape(shape[0], -1) # add batch axis
indices = self.unmap_to_all(indices)
indices = indices.reshape(-1) # flatten again
# get quantized latent vectors
z_q = self.embedding(indices)
if shape is not None:
z_q = z_q.view(shape)
# reshape back to match original input shape
z_q = z_q.permute(0, 3, 1, 2).contiguous()
return z_q
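
For reference, a minimal usage sketch of the vendored quantizer above; the sizes and beta value are illustrative only, not taken from any real model config:

import torch

# codebook of 16 entries, each 4-dimensional; beta weights the commitment term
quantizer = VectorQuantizer2(n_e=16, e_dim=4, beta=0.25)
z = torch.randn(2, 4, 8, 8)  # (batch, channel, height, width) latents
z_q, loss, (perplexity, min_encodings, indices) = quantizer(z)
assert z_q.shape == z.shape  # quantization preserves the input shape
# the line z_q = z + (z_q - z).detach() is a straight-through estimator:
# the forward pass uses the nearest codebook entry, while gradients flow
# back to z as if quantization were the identity.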


@ -1,5 +1,6 @@
from modules import extra_networks, shared
-import lora
+import networks

class ExtraNetworkLora(extra_networks.ExtraNetwork):
    def __init__(self):
@ -8,19 +9,51 @@ class ExtraNetworkLora(extra_networks.ExtraNetwork):
    def activate(self, p, params_list):
        additional = shared.opts.sd_lora
-        if additional != "" and additional in lora.available_loras and len([x for x in params_list if x.items[0] == additional]) == 0:
+        if additional != "None" and additional in networks.available_networks and not any(x for x in params_list if x.items[0] == additional):
            p.all_prompts = [x + f"<lora:{additional}:{shared.opts.extra_networks_default_multiplier}>" for x in p.all_prompts]
            params_list.append(extra_networks.ExtraNetworkParams(items=[additional, shared.opts.extra_networks_default_multiplier]))
        names = []
-        multipliers = []
+        te_multipliers = []
+        unet_multipliers = []
+        dyn_dims = []
        for params in params_list:
-            assert len(params.items) > 0
-            names.append(params.items[0])
-            multipliers.append(float(params.items[1]) if len(params.items) > 1 else 1.0)
-        lora.load_loras(names, multipliers)
+            assert params.items
+            names.append(params.positional[0])
+            te_multiplier = float(params.positional[1]) if len(params.positional) > 1 else 1.0
+            te_multiplier = float(params.named.get("te", te_multiplier))
+            unet_multiplier = float(params.positional[2]) if len(params.positional) > 2 else te_multiplier
+            unet_multiplier = float(params.named.get("unet", unet_multiplier))
+            dyn_dim = int(params.positional[3]) if len(params.positional) > 3 else None
+            dyn_dim = int(params.named["dyn"]) if "dyn" in params.named else dyn_dim
+            te_multipliers.append(te_multiplier)
+            unet_multipliers.append(unet_multiplier)
+            dyn_dims.append(dyn_dim)
+        networks.load_networks(names, te_multipliers, unet_multipliers, dyn_dims)
+        if shared.opts.lora_add_hashes_to_infotext:
+            network_hashes = []
+            for item in networks.loaded_networks:
+                shorthash = item.network_on_disk.shorthash
+                if not shorthash:
+                    continue
+                alias = item.mentioned_name
+                if not alias:
+                    continue
+                alias = alias.replace(":", "").replace(",", "")
+                network_hashes.append(f"{alias}: {shorthash}")
+            if network_hashes:
+                p.extra_generation_params["Lora hashes"] = ", ".join(network_hashes)

    def deactivate(self, p):
        pass
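
The positional/named handling above corresponds to prompt tags like <lora:file:0.8>, <lora:file:0.8:0.5> or <lora:file:te=0.8:unet=0.5:dyn=64>. A standalone sketch of the same precedence rules (parse_args is a hypothetical helper, for illustration only; named arguments win over positional ones, and the unet multiplier defaults to the te multiplier):

def parse_args(positional, named):
    # te multiplier: second positional argument, overridden by named "te"
    te = float(named.get("te", positional[1] if len(positional) > 1 else 1.0))
    # unet multiplier defaults to the te multiplier, overridden by named "unet"
    unet = float(named.get("unet", positional[2] if len(positional) > 2 else te))
    # dyn dim: optional fourth positional argument or named "dyn"
    dyn = named.get("dyn", positional[3] if len(positional) > 3 else None)
    return te, unet, int(dyn) if dyn is not None else None

print(parse_args(["myfile", "0.8"], {}))        # (0.8, 0.8, None)
print(parse_args(["myfile"], {"unet": "0.5"}))  # (1.0, 0.5, None)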


@ -1,362 +1,9 @@
-import glob
-import os
-import re
-import torch
-from typing import Union
-from modules import shared, devices, sd_models, errors
-metadata_tags_order = {"ss_sd_model_name": 1, "ss_resolution": 2, "ss_clip_skip": 3, "ss_num_train_images": 10, "ss_tag_frequency": 20}
-re_digits = re.compile(r"\d+")
-re_x_proj = re.compile(r"(.*)_([qkv]_proj)$")
-re_compiled = {}
+import networks
+
+list_available_loras = networks.list_available_networks
+available_loras = networks.available_networks
+available_lora_aliases = networks.available_network_aliases
+available_lora_hash_lookup = networks.available_network_hash_lookup
+forbidden_lora_aliases = networks.forbidden_network_aliases
+loaded_loras = networks.loaded_networks
suffix_conversion = {
"attentions": {},
"resnets": {
"conv1": "in_layers_2",
"conv2": "out_layers_3",
"time_emb_proj": "emb_layers_1",
"conv_shortcut": "skip_connection",
}
}
def convert_diffusers_name_to_compvis(key, is_sd2):
def match(match_list, regex_text):
regex = re_compiled.get(regex_text)
if regex is None:
regex = re.compile(regex_text)
re_compiled[regex_text] = regex
r = re.match(regex, key)
if not r:
return False
match_list.clear()
match_list.extend([int(x) if re.match(re_digits, x) else x for x in r.groups()])
return True
m = []
if match(m, r"lora_unet_down_blocks_(\d+)_(attentions|resnets)_(\d+)_(.+)"):
suffix = suffix_conversion.get(m[1], {}).get(m[3], m[3])
return f"diffusion_model_input_blocks_{1 + m[0] * 3 + m[2]}_{1 if m[1] == 'attentions' else 0}_{suffix}"
if match(m, r"lora_unet_mid_block_(attentions|resnets)_(\d+)_(.+)"):
suffix = suffix_conversion.get(m[0], {}).get(m[2], m[2])
return f"diffusion_model_middle_block_{1 if m[0] == 'attentions' else m[1] * 2}_{suffix}"
if match(m, r"lora_unet_up_blocks_(\d+)_(attentions|resnets)_(\d+)_(.+)"):
suffix = suffix_conversion.get(m[1], {}).get(m[3], m[3])
return f"diffusion_model_output_blocks_{m[0] * 3 + m[2]}_{1 if m[1] == 'attentions' else 0}_{suffix}"
if match(m, r"lora_unet_down_blocks_(\d+)_downsamplers_0_conv"):
return f"diffusion_model_input_blocks_{3 + m[0] * 3}_0_op"
if match(m, r"lora_unet_up_blocks_(\d+)_upsamplers_0_conv"):
return f"diffusion_model_output_blocks_{2 + m[0] * 3}_{2 if m[0]>0 else 1}_conv"
if match(m, r"lora_te_text_model_encoder_layers_(\d+)_(.+)"):
if is_sd2:
if 'mlp_fc1' in m[1]:
return f"model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc1', 'mlp_c_fc')}"
elif 'mlp_fc2' in m[1]:
return f"model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc2', 'mlp_c_proj')}"
else:
return f"model_transformer_resblocks_{m[0]}_{m[1].replace('self_attn', 'attn')}"
return f"transformer_text_model_encoder_layers_{m[0]}_{m[1]}"
return key
class LoraOnDisk:
def __init__(self, name, filename):
self.name = name
self.filename = filename
self.metadata = {}
_, ext = os.path.splitext(filename)
if ext.lower() == ".safetensors":
try:
self.metadata = sd_models.read_metadata_from_safetensors(filename)
except Exception as e:
errors.display(e, f"reading lora {filename}")
if self.metadata:
m = {}
for k, v in sorted(self.metadata.items(), key=lambda x: metadata_tags_order.get(x[0], 999)):
m[k] = v
self.metadata = m
self.ssmd_cover_images = self.metadata.pop('ssmd_cover_images', None) # those are cover images and they are too big to display in UI as text
class LoraModule:
def __init__(self, name):
self.name = name
self.multiplier = 1.0
self.modules = {}
self.mtime = None
class LoraUpDownModule:
def __init__(self):
self.up = None
self.down = None
self.alpha = None
def assign_lora_names_to_compvis_modules(sd_model):
lora_layer_mapping = {}
for name, module in shared.sd_model.cond_stage_model.wrapped.named_modules():
lora_name = name.replace(".", "_")
lora_layer_mapping[lora_name] = module
module.lora_layer_name = lora_name
for name, module in shared.sd_model.model.named_modules():
lora_name = name.replace(".", "_")
lora_layer_mapping[lora_name] = module
module.lora_layer_name = lora_name
sd_model.lora_layer_mapping = lora_layer_mapping
def load_lora(name, filename):
lora = LoraModule(name)
lora.mtime = os.path.getmtime(filename)
sd = sd_models.read_state_dict(filename)
keys_failed_to_match = {}
is_sd2 = 'model_transformer_resblocks' in shared.sd_model.lora_layer_mapping
for key_diffusers, weight in sd.items():
key_diffusers_without_lora_parts, lora_key = key_diffusers.split(".", 1)
key = convert_diffusers_name_to_compvis(key_diffusers_without_lora_parts, is_sd2)
sd_module = shared.sd_model.lora_layer_mapping.get(key, None)
if sd_module is None:
m = re_x_proj.match(key)
if m:
sd_module = shared.sd_model.lora_layer_mapping.get(m.group(1), None)
if sd_module is None:
keys_failed_to_match[key_diffusers] = key
continue
lora_module = lora.modules.get(key, None)
if lora_module is None:
lora_module = LoraUpDownModule()
lora.modules[key] = lora_module
if lora_key == "alpha":
lora_module.alpha = weight.item()
continue
if type(sd_module) == torch.nn.Linear:
module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
elif type(sd_module) == torch.nn.modules.linear.NonDynamicallyQuantizableLinear:
module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
elif type(sd_module) == torch.nn.MultiheadAttention:
module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
elif type(sd_module) == torch.nn.Conv2d:
module = torch.nn.Conv2d(weight.shape[1], weight.shape[0], (1, 1), bias=False)
else:
print(f'Lora layer {key_diffusers} matched a layer with unsupported type: {type(sd_module).__name__}')
continue
assert False, f'Lora layer {key_diffusers} matched a layer with unsupported type: {type(sd_module).__name__}'
with torch.no_grad():
module.weight.copy_(weight)
module.to(device=devices.cpu, dtype=devices.dtype)
if lora_key == "lora_up.weight":
lora_module.up = module
elif lora_key == "lora_down.weight":
lora_module.down = module
else:
assert False, f'Bad Lora layer name: {key_diffusers} - must end in lora_up.weight, lora_down.weight or alpha'
if len(keys_failed_to_match) > 0:
print(f"Failed to match keys when loading Lora {filename}: {keys_failed_to_match}")
return lora
def load_loras(names, multipliers=None):
already_loaded = {}
for lora in loaded_loras:
if lora.name in names:
already_loaded[lora.name] = lora
loaded_loras.clear()
loras_on_disk = [available_loras.get(name, None) for name in names]
if any([x is None for x in loras_on_disk]):
list_available_loras()
loras_on_disk = [available_loras.get(name, None) for name in names]
for i, name in enumerate(names):
lora = already_loaded.get(name, None)
lora_on_disk = loras_on_disk[i]
if lora_on_disk is not None:
if lora is None or os.path.getmtime(lora_on_disk.filename) > lora.mtime:
lora = load_lora(name, lora_on_disk.filename)
if lora is None:
print(f"Couldn't find Lora with name {name}")
continue
lora.multiplier = multipliers[i] if multipliers else 1.0
loaded_loras.append(lora)
def lora_calc_updown(lora, module, target):
with torch.no_grad():
up = module.up.weight.to(target.device, dtype=target.dtype)
down = module.down.weight.to(target.device, dtype=target.dtype)
if up.shape[2:] == (1, 1) and down.shape[2:] == (1, 1):
updown = (up.squeeze(2).squeeze(2) @ down.squeeze(2).squeeze(2)).unsqueeze(2).unsqueeze(3)
else:
updown = up @ down
updown = updown * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
return updown
def lora_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention]):
"""
Applies the currently selected set of Loras to the weights of torch layer self.
If weights already have this particular set of loras applied, does nothing.
If not, restores original weights from backup and alters weights according to loras.
"""
lora_layer_name = getattr(self, 'lora_layer_name', None)
if lora_layer_name is None:
return
current_names = getattr(self, "lora_current_names", ())
wanted_names = tuple((x.name, x.multiplier) for x in loaded_loras)
weights_backup = getattr(self, "lora_weights_backup", None)
if weights_backup is None:
if isinstance(self, torch.nn.MultiheadAttention):
weights_backup = (self.in_proj_weight.to(devices.cpu, copy=True), self.out_proj.weight.to(devices.cpu, copy=True))
else:
weights_backup = self.weight.to(devices.cpu, copy=True)
self.lora_weights_backup = weights_backup
if current_names != wanted_names:
if weights_backup is not None:
if isinstance(self, torch.nn.MultiheadAttention):
self.in_proj_weight.copy_(weights_backup[0])
self.out_proj.weight.copy_(weights_backup[1])
else:
self.weight.copy_(weights_backup)
for lora in loaded_loras:
module = lora.modules.get(lora_layer_name, None)
if module is not None and hasattr(self, 'weight'):
self.weight += lora_calc_updown(lora, module, self.weight)
continue
module_q = lora.modules.get(lora_layer_name + "_q_proj", None)
module_k = lora.modules.get(lora_layer_name + "_k_proj", None)
module_v = lora.modules.get(lora_layer_name + "_v_proj", None)
module_out = lora.modules.get(lora_layer_name + "_out_proj", None)
if isinstance(self, torch.nn.MultiheadAttention) and module_q and module_k and module_v and module_out:
updown_q = lora_calc_updown(lora, module_q, self.in_proj_weight)
updown_k = lora_calc_updown(lora, module_k, self.in_proj_weight)
updown_v = lora_calc_updown(lora, module_v, self.in_proj_weight)
updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
self.in_proj_weight += updown_qkv
self.out_proj.weight += lora_calc_updown(lora, module_out, self.out_proj.weight)
continue
if module is None:
continue
print(f'failed to calculate lora weights for layer {lora_layer_name}')
setattr(self, "lora_current_names", wanted_names)
def lora_reset_cached_weight(self: Union[torch.nn.Conv2d, torch.nn.Linear]):
setattr(self, "lora_current_names", ())
setattr(self, "lora_weights_backup", None)
def lora_Linear_forward(self, input):
lora_apply_weights(self)
return torch.nn.Linear_forward_before_lora(self, input)
def lora_Linear_load_state_dict(self, *args, **kwargs):
lora_reset_cached_weight(self)
return torch.nn.Linear_load_state_dict_before_lora(self, *args, **kwargs)
def lora_Conv2d_forward(self, input):
lora_apply_weights(self)
return torch.nn.Conv2d_forward_before_lora(self, input)
def lora_Conv2d_load_state_dict(self, *args, **kwargs):
lora_reset_cached_weight(self)
return torch.nn.Conv2d_load_state_dict_before_lora(self, *args, **kwargs)
def lora_MultiheadAttention_forward(self, *args, **kwargs):
lora_apply_weights(self)
return torch.nn.MultiheadAttention_forward_before_lora(self, *args, **kwargs)
def lora_MultiheadAttention_load_state_dict(self, *args, **kwargs):
lora_reset_cached_weight(self)
return torch.nn.MultiheadAttention_load_state_dict_before_lora(self, *args, **kwargs)
def list_available_loras():
available_loras.clear()
os.makedirs(shared.cmd_opts.lora_dir, exist_ok=True)
candidates = \
glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.pt'), recursive=True) + \
glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.safetensors'), recursive=True) + \
glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.ckpt'), recursive=True)
for filename in sorted(candidates, key=str.lower):
if os.path.isdir(filename):
continue
name = os.path.splitext(os.path.basename(filename))[0]
available_loras[name] = LoraOnDisk(name, filename)
available_loras = {}
loaded_loras = []
list_available_loras()


@ -0,0 +1,21 @@
import torch
def make_weight_cp(t, wa, wb):
temp = torch.einsum('i j k l, j r -> i r k l', t, wb)
return torch.einsum('i j k l, i r -> r j k l', temp, wa)
def rebuild_conventional(up, down, shape, dyn_dim=None):
up = up.reshape(up.size(0), -1)
down = down.reshape(down.size(0), -1)
if dyn_dim is not None:
up = up[:, :dyn_dim]
down = down[:dyn_dim, :]
return (up @ down).reshape(shape)
def rebuild_cp_decomposition(up, down, mid):
up = up.reshape(up.size(0), -1)
down = down.reshape(down.size(0), -1)
return torch.einsum('n m k l, i n, m j -> i j k l', mid, up, down)
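
A quick shape check for the helpers above, with illustrative sizes:

import torch

up = torch.randn(64, 8, 1, 1)    # lora_up weight for a conv layer, rank 8
down = torch.randn(8, 32, 3, 3)  # matching lora_down weight
w = rebuild_conventional(up, down, shape=(64, 32, 3, 3))
print(w.shape)  # torch.Size([64, 32, 3, 3])

# dyn_dim truncates the inner rank at load time, trading fidelity for speed:
w_dyn = rebuild_conventional(up, down, shape=(64, 32, 3, 3), dyn_dim=4)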


@ -0,0 +1,155 @@
from __future__ import annotations
import os
from collections import namedtuple
import enum
from modules import sd_models, cache, errors, hashes, shared
NetworkWeights = namedtuple('NetworkWeights', ['network_key', 'sd_key', 'w', 'sd_module'])
metadata_tags_order = {"ss_sd_model_name": 1, "ss_resolution": 2, "ss_clip_skip": 3, "ss_num_train_images": 10, "ss_tag_frequency": 20}
class SdVersion(enum.Enum):
Unknown = 1
SD1 = 2
SD2 = 3
SDXL = 4
class NetworkOnDisk:
def __init__(self, name, filename):
self.name = name
self.filename = filename
self.metadata = {}
self.is_safetensors = os.path.splitext(filename)[1].lower() == ".safetensors"
def read_metadata():
metadata = sd_models.read_metadata_from_safetensors(filename)
metadata.pop('ssmd_cover_images', None) # those are cover images, and they are too big to display in UI as text
return metadata
if self.is_safetensors:
try:
self.metadata = cache.cached_data_for_file('safetensors-metadata', "lora/" + self.name, filename, read_metadata)
except Exception as e:
errors.display(e, f"reading lora {filename}")
if self.metadata:
m = {}
for k, v in sorted(self.metadata.items(), key=lambda x: metadata_tags_order.get(x[0], 999)):
m[k] = v
self.metadata = m
self.alias = self.metadata.get('ss_output_name', self.name)
self.hash = None
self.shorthash = None
self.set_hash(
self.metadata.get('sshs_model_hash') or
hashes.sha256_from_cache(self.filename, "lora/" + self.name, use_addnet_hash=self.is_safetensors) or
''
)
self.sd_version = self.detect_version()
def detect_version(self):
if str(self.metadata.get('ss_base_model_version', "")).startswith("sdxl_"):
return SdVersion.SDXL
elif str(self.metadata.get('ss_v2', "")) == "True":
return SdVersion.SD2
elif len(self.metadata):
return SdVersion.SD1
return SdVersion.Unknown
def set_hash(self, v):
self.hash = v
self.shorthash = self.hash[0:12]
if self.shorthash:
import networks
networks.available_network_hash_lookup[self.shorthash] = self
def read_hash(self):
if not self.hash:
self.set_hash(hashes.sha256(self.filename, "lora/" + self.name, use_addnet_hash=self.is_safetensors) or '')
def get_alias(self):
import networks
if shared.opts.lora_preferred_name == "Filename" or self.alias.lower() in networks.forbidden_network_aliases:
return self.name
else:
return self.alias
class Network: # LoraModule
def __init__(self, name, network_on_disk: NetworkOnDisk):
self.name = name
self.network_on_disk = network_on_disk
self.te_multiplier = 1.0
self.unet_multiplier = 1.0
self.dyn_dim = None
self.modules = {}
self.mtime = None
self.mentioned_name = None
"""the text that was used to add the network to prompt - can be either name or an alias"""
class ModuleType:
def create_module(self, net: Network, weights: NetworkWeights) -> Network | None:
return None
class NetworkModule:
def __init__(self, net: Network, weights: NetworkWeights):
self.network = net
self.network_key = weights.network_key
self.sd_key = weights.sd_key
self.sd_module = weights.sd_module
if hasattr(self.sd_module, 'weight'):
self.shape = self.sd_module.weight.shape
self.dim = None
self.bias = weights.w.get("bias")
self.alpha = weights.w["alpha"].item() if "alpha" in weights.w else None
self.scale = weights.w["scale"].item() if "scale" in weights.w else None
def multiplier(self):
if 'transformer' in self.sd_key[:20]:
return self.network.te_multiplier
else:
return self.network.unet_multiplier
def calc_scale(self):
if self.scale is not None:
return self.scale
if self.dim is not None and self.alpha is not None:
return self.alpha / self.dim
return 1.0
def finalize_updown(self, updown, orig_weight, output_shape):
if self.bias is not None:
updown = updown.reshape(self.bias.shape)
updown += self.bias.to(orig_weight.device, dtype=orig_weight.dtype)
updown = updown.reshape(output_shape)
if len(output_shape) == 4:
updown = updown.reshape(output_shape)
if orig_weight.size().numel() == updown.size().numel():
updown = updown.reshape(orig_weight.shape)
return updown * self.calc_scale() * self.multiplier()
def calc_updown(self, target):
raise NotImplementedError()
def forward(self, x, y):
raise NotImplementedError()
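
Putting finalize_updown, calc_scale and multiplier together: for a plain LoRA module, the weight patch applied to a layer works out as in this sketch (dim is the network rank stored in lora_down):

import torch

def effective_delta(up, down, alpha, multiplier):
    # W' = W + (up @ down) * (alpha / dim) * multiplier, where multiplier is
    # te_multiplier for text-encoder keys and unet_multiplier otherwise,
    # and the scale falls back to 1.0 when the file stores no alpha
    dim = down.shape[0]
    scale = alpha / dim if alpha is not None else 1.0
    return (up @ down) * scale * multiplier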


@ -0,0 +1,22 @@
import network
class ModuleTypeFull(network.ModuleType):
def create_module(self, net: network.Network, weights: network.NetworkWeights):
if all(x in weights.w for x in ["diff"]):
return NetworkModuleFull(net, weights)
return None
class NetworkModuleFull(network.NetworkModule):
def __init__(self, net: network.Network, weights: network.NetworkWeights):
super().__init__(net, weights)
self.weight = weights.w.get("diff")
def calc_updown(self, orig_weight):
output_shape = self.weight.shape
updown = self.weight.to(orig_weight.device, dtype=orig_weight.dtype)
return self.finalize_updown(updown, orig_weight, output_shape)


@ -0,0 +1,55 @@
import lyco_helpers
import network
class ModuleTypeHada(network.ModuleType):
def create_module(self, net: network.Network, weights: network.NetworkWeights):
if all(x in weights.w for x in ["hada_w1_a", "hada_w1_b", "hada_w2_a", "hada_w2_b"]):
return NetworkModuleHada(net, weights)
return None
class NetworkModuleHada(network.NetworkModule):
def __init__(self, net: network.Network, weights: network.NetworkWeights):
super().__init__(net, weights)
if hasattr(self.sd_module, 'weight'):
self.shape = self.sd_module.weight.shape
self.w1a = weights.w["hada_w1_a"]
self.w1b = weights.w["hada_w1_b"]
self.dim = self.w1b.shape[0]
self.w2a = weights.w["hada_w2_a"]
self.w2b = weights.w["hada_w2_b"]
self.t1 = weights.w.get("hada_t1")
self.t2 = weights.w.get("hada_t2")
def calc_updown(self, orig_weight):
w1a = self.w1a.to(orig_weight.device, dtype=orig_weight.dtype)
w1b = self.w1b.to(orig_weight.device, dtype=orig_weight.dtype)
w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)
output_shape = [w1a.size(0), w1b.size(1)]
if self.t1 is not None:
output_shape = [w1a.size(1), w1b.size(1)]
t1 = self.t1.to(orig_weight.device, dtype=orig_weight.dtype)
updown1 = lyco_helpers.make_weight_cp(t1, w1a, w1b)
output_shape += t1.shape[2:]
else:
if len(w1b.shape) == 4:
output_shape += w1b.shape[2:]
updown1 = lyco_helpers.rebuild_conventional(w1a, w1b, output_shape)
if self.t2 is not None:
t2 = self.t2.to(orig_weight.device, dtype=orig_weight.dtype)
updown2 = lyco_helpers.make_weight_cp(t2, w2a, w2b)
else:
updown2 = lyco_helpers.rebuild_conventional(w2a, w2b, output_shape)
updown = updown1 * updown2
return self.finalize_updown(updown, orig_weight, output_shape)
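
LoHa ("hada") composes two low-rank factorizations with an element-wise (Hadamard) product instead of a single matrix product; a minimal numeric sketch of the non-CP branch above:

import torch

w1a, w1b = torch.randn(64, 8), torch.randn(8, 32)
w2a, w2b = torch.randn(64, 8), torch.randn(8, 32)
updown = (w1a @ w1b) * (w2a @ w2b)  # element-wise product of two rank-8 deltas
print(updown.shape)  # torch.Size([64, 32])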


@ -0,0 +1,30 @@
import network
class ModuleTypeIa3(network.ModuleType):
def create_module(self, net: network.Network, weights: network.NetworkWeights):
if all(x in weights.w for x in ["weight"]):
return NetworkModuleIa3(net, weights)
return None
class NetworkModuleIa3(network.NetworkModule):
def __init__(self, net: network.Network, weights: network.NetworkWeights):
super().__init__(net, weights)
self.w = weights.w["weight"]
self.on_input = weights.w["on_input"].item()
def calc_updown(self, orig_weight):
w = self.w.to(orig_weight.device, dtype=orig_weight.dtype)
output_shape = [w.size(0), orig_weight.size(1)]
if self.on_input:
output_shape.reverse()
else:
w = w.reshape(-1, 1)
updown = orig_weight * w
return self.finalize_updown(updown, orig_weight, output_shape)
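
IA^3 stores a single scaling vector rather than low-rank factors; the delta is the original weight scaled element-wise, with on_input choosing whether the vector runs along inputs or outputs. A sketch of the on_input == False case, with illustrative shapes:

import torch

orig = torch.randn(64, 32)        # original layer weight
w = torch.randn(64)               # one learned scale per output unit
updown = orig * w.reshape(-1, 1)  # broadcasts across each row
# with on_input == True, w would have 32 entries and broadcast across columns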


@ -0,0 +1,64 @@
import torch
import lyco_helpers
import network
class ModuleTypeLokr(network.ModuleType):
def create_module(self, net: network.Network, weights: network.NetworkWeights):
has_1 = "lokr_w1" in weights.w or ("lokr_w1_a" in weights.w and "lokr_w1_b" in weights.w)
has_2 = "lokr_w2" in weights.w or ("lokr_w2_a" in weights.w and "lokr_w2_b" in weights.w)
if has_1 and has_2:
return NetworkModuleLokr(net, weights)
return None
def make_kron(orig_shape, w1, w2):
if len(w2.shape) == 4:
w1 = w1.unsqueeze(2).unsqueeze(2)
w2 = w2.contiguous()
return torch.kron(w1, w2).reshape(orig_shape)
class NetworkModuleLokr(network.NetworkModule):
def __init__(self, net: network.Network, weights: network.NetworkWeights):
super().__init__(net, weights)
self.w1 = weights.w.get("lokr_w1")
self.w1a = weights.w.get("lokr_w1_a")
self.w1b = weights.w.get("lokr_w1_b")
self.dim = self.w1b.shape[0] if self.w1b is not None else self.dim
self.w2 = weights.w.get("lokr_w2")
self.w2a = weights.w.get("lokr_w2_a")
self.w2b = weights.w.get("lokr_w2_b")
self.dim = self.w2b.shape[0] if self.w2b is not None else self.dim
self.t2 = weights.w.get("lokr_t2")
def calc_updown(self, orig_weight):
if self.w1 is not None:
w1 = self.w1.to(orig_weight.device, dtype=orig_weight.dtype)
else:
w1a = self.w1a.to(orig_weight.device, dtype=orig_weight.dtype)
w1b = self.w1b.to(orig_weight.device, dtype=orig_weight.dtype)
w1 = w1a @ w1b
if self.w2 is not None:
w2 = self.w2.to(orig_weight.device, dtype=orig_weight.dtype)
elif self.t2 is None:
w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)
w2 = w2a @ w2b
else:
t2 = self.t2.to(orig_weight.device, dtype=orig_weight.dtype)
w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)
w2 = lyco_helpers.make_weight_cp(t2, w2a, w2b)
output_shape = [w1.size(0) * w2.size(0), w1.size(1) * w2.size(1)]
if len(orig_weight.shape) == 4:
output_shape = orig_weight.shape
updown = make_kron(output_shape, w1, w2)
return self.finalize_updown(updown, orig_weight, output_shape)
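
LoKr factors the delta as a Kronecker product, so two small matrices expand into a much larger one; the output shape is the element-wise product of the factor shapes:

import torch

w1 = torch.randn(4, 4)
w2 = torch.randn(16, 8)
updown = torch.kron(w1, w2)
print(updown.shape)  # torch.Size([64, 32]), i.e. (4*16, 4*8)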


@ -0,0 +1,86 @@
import torch
import lyco_helpers
import network
from modules import devices
class ModuleTypeLora(network.ModuleType):
def create_module(self, net: network.Network, weights: network.NetworkWeights):
if all(x in weights.w for x in ["lora_up.weight", "lora_down.weight"]):
return NetworkModuleLora(net, weights)
return None
class NetworkModuleLora(network.NetworkModule):
def __init__(self, net: network.Network, weights: network.NetworkWeights):
super().__init__(net, weights)
self.up_model = self.create_module(weights.w, "lora_up.weight")
self.down_model = self.create_module(weights.w, "lora_down.weight")
self.mid_model = self.create_module(weights.w, "lora_mid.weight", none_ok=True)
self.dim = weights.w["lora_down.weight"].shape[0]
def create_module(self, weights, key, none_ok=False):
weight = weights.get(key)
if weight is None and none_ok:
return None
is_linear = type(self.sd_module) in [torch.nn.Linear, torch.nn.modules.linear.NonDynamicallyQuantizableLinear, torch.nn.MultiheadAttention]
is_conv = type(self.sd_module) in [torch.nn.Conv2d]
if is_linear:
weight = weight.reshape(weight.shape[0], -1)
module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
elif is_conv and key == "lora_down.weight" or key == "dyn_up":
if len(weight.shape) == 2:
weight = weight.reshape(weight.shape[0], -1, 1, 1)
if weight.shape[2] != 1 or weight.shape[3] != 1:
module = torch.nn.Conv2d(weight.shape[1], weight.shape[0], self.sd_module.kernel_size, self.sd_module.stride, self.sd_module.padding, bias=False)
else:
module = torch.nn.Conv2d(weight.shape[1], weight.shape[0], (1, 1), bias=False)
elif is_conv and key == "lora_mid.weight":
module = torch.nn.Conv2d(weight.shape[1], weight.shape[0], self.sd_module.kernel_size, self.sd_module.stride, self.sd_module.padding, bias=False)
elif is_conv and key == "lora_up.weight" or key == "dyn_down":
module = torch.nn.Conv2d(weight.shape[1], weight.shape[0], (1, 1), bias=False)
else:
raise AssertionError(f'Lora layer {self.network_key} matched a layer with unsupported type: {type(self.sd_module).__name__}')
with torch.no_grad():
if weight.shape != module.weight.shape:
weight = weight.reshape(module.weight.shape)
module.weight.copy_(weight)
module.to(device=devices.cpu, dtype=devices.dtype)
module.weight.requires_grad_(False)
return module
def calc_updown(self, orig_weight):
up = self.up_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
down = self.down_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
output_shape = [up.size(0), down.size(1)]
if self.mid_model is not None:
# cp-decomposition
mid = self.mid_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
updown = lyco_helpers.rebuild_cp_decomposition(up, down, mid)
output_shape += mid.shape[2:]
else:
if len(down.shape) == 4:
output_shape += down.shape[2:]
updown = lyco_helpers.rebuild_conventional(up, down, output_shape, self.network.dyn_dim)
return self.finalize_updown(updown, orig_weight, output_shape)
def forward(self, x, y):
self.up_model.to(device=devices.device)
self.down_model.to(device=devices.device)
return y + self.up_model(self.down_model(x)) * self.multiplier() * self.calc_scale()
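
The forward() path above applies the same delta functionally instead of patching weights: for a linear layer, up(down(x)) equals x @ (up.weight @ down.weight).T, so both routes agree. A quick equivalence check with illustrative sizes:

import torch

x = torch.randn(1, 32)
down = torch.nn.Linear(32, 4, bias=False)
up = torch.nn.Linear(4, 64, bias=False)
functional = up(down(x))
patched = x @ (up.weight @ down.weight).T
print(torch.allclose(functional, patched, atol=1e-5))  # True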


@ -0,0 +1,468 @@
import os
import re
import network
import network_lora
import network_hada
import network_ia3
import network_lokr
import network_full
import torch
from typing import Union
from modules import shared, devices, sd_models, errors, scripts, sd_hijack
module_types = [
network_lora.ModuleTypeLora(),
network_hada.ModuleTypeHada(),
network_ia3.ModuleTypeIa3(),
network_lokr.ModuleTypeLokr(),
network_full.ModuleTypeFull(),
]
re_digits = re.compile(r"\d+")
re_x_proj = re.compile(r"(.*)_([qkv]_proj)$")
re_compiled = {}
suffix_conversion = {
"attentions": {},
"resnets": {
"conv1": "in_layers_2",
"conv2": "out_layers_3",
"time_emb_proj": "emb_layers_1",
"conv_shortcut": "skip_connection",
}
}
def convert_diffusers_name_to_compvis(key, is_sd2):
def match(match_list, regex_text):
regex = re_compiled.get(regex_text)
if regex is None:
regex = re.compile(regex_text)
re_compiled[regex_text] = regex
r = re.match(regex, key)
if not r:
return False
match_list.clear()
match_list.extend([int(x) if re.match(re_digits, x) else x for x in r.groups()])
return True
m = []
if match(m, r"lora_unet_conv_in(.*)"):
return f'diffusion_model_input_blocks_0_0{m[0]}'
if match(m, r"lora_unet_conv_out(.*)"):
return f'diffusion_model_out_2{m[0]}'
if match(m, r"lora_unet_time_embedding_linear_(\d+)(.*)"):
return f"diffusion_model_time_embed_{m[0] * 2 - 2}{m[1]}"
if match(m, r"lora_unet_down_blocks_(\d+)_(attentions|resnets)_(\d+)_(.+)"):
suffix = suffix_conversion.get(m[1], {}).get(m[3], m[3])
return f"diffusion_model_input_blocks_{1 + m[0] * 3 + m[2]}_{1 if m[1] == 'attentions' else 0}_{suffix}"
if match(m, r"lora_unet_mid_block_(attentions|resnets)_(\d+)_(.+)"):
suffix = suffix_conversion.get(m[0], {}).get(m[2], m[2])
return f"diffusion_model_middle_block_{1 if m[0] == 'attentions' else m[1] * 2}_{suffix}"
if match(m, r"lora_unet_up_blocks_(\d+)_(attentions|resnets)_(\d+)_(.+)"):
suffix = suffix_conversion.get(m[1], {}).get(m[3], m[3])
return f"diffusion_model_output_blocks_{m[0] * 3 + m[2]}_{1 if m[1] == 'attentions' else 0}_{suffix}"
if match(m, r"lora_unet_down_blocks_(\d+)_downsamplers_0_conv"):
return f"diffusion_model_input_blocks_{3 + m[0] * 3}_0_op"
if match(m, r"lora_unet_up_blocks_(\d+)_upsamplers_0_conv"):
return f"diffusion_model_output_blocks_{2 + m[0] * 3}_{2 if m[0]>0 else 1}_conv"
if match(m, r"lora_te_text_model_encoder_layers_(\d+)_(.+)"):
if is_sd2:
if 'mlp_fc1' in m[1]:
return f"model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc1', 'mlp_c_fc')}"
elif 'mlp_fc2' in m[1]:
return f"model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc2', 'mlp_c_proj')}"
else:
return f"model_transformer_resblocks_{m[0]}_{m[1].replace('self_attn', 'attn')}"
return f"transformer_text_model_encoder_layers_{m[0]}_{m[1]}"
if match(m, r"lora_te2_text_model_encoder_layers_(\d+)_(.+)"):
if 'mlp_fc1' in m[1]:
return f"1_model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc1', 'mlp_c_fc')}"
elif 'mlp_fc2' in m[1]:
return f"1_model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc2', 'mlp_c_proj')}"
else:
return f"1_model_transformer_resblocks_{m[0]}_{m[1].replace('self_attn', 'attn')}"
return key
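
As a worked example of the down-block rule above (assuming an SD1 model):

# key:     "lora_unet_down_blocks_0_attentions_1_proj_in"
# matches: r"lora_unet_down_blocks_(\d+)_(attentions|resnets)_(\d+)_(.+)"
# m = [0, 'attentions', 1, 'proj_in']
# block index: 1 + 0*3 + 1 = 2; 'attentions' selects sub-index 1; the suffix
# passes through unchanged because suffix_conversion["attentions"] is empty
# result:  "diffusion_model_input_blocks_2_1_proj_in"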
def assign_network_names_to_compvis_modules(sd_model):
network_layer_mapping = {}
if shared.sd_model.is_sdxl:
for i, embedder in enumerate(shared.sd_model.conditioner.embedders):
if not hasattr(embedder, 'wrapped'):
continue
for name, module in embedder.wrapped.named_modules():
network_name = f'{i}_{name.replace(".", "_")}'
network_layer_mapping[network_name] = module
module.network_layer_name = network_name
else:
for name, module in shared.sd_model.cond_stage_model.wrapped.named_modules():
network_name = name.replace(".", "_")
network_layer_mapping[network_name] = module
module.network_layer_name = network_name
for name, module in shared.sd_model.model.named_modules():
network_name = name.replace(".", "_")
network_layer_mapping[network_name] = module
module.network_layer_name = network_name
sd_model.network_layer_mapping = network_layer_mapping
def load_network(name, network_on_disk):
net = network.Network(name, network_on_disk)
net.mtime = os.path.getmtime(network_on_disk.filename)
sd = sd_models.read_state_dict(network_on_disk.filename)
# this should not be needed but is here as an emergency fix for an unknown error people are experiencing in 1.2.0
if not hasattr(shared.sd_model, 'network_layer_mapping'):
assign_network_names_to_compvis_modules(shared.sd_model)
keys_failed_to_match = {}
is_sd2 = 'model_transformer_resblocks' in shared.sd_model.network_layer_mapping
matched_networks = {}
for key_network, weight in sd.items():
key_network_without_network_parts, network_part = key_network.split(".", 1)
key = convert_diffusers_name_to_compvis(key_network_without_network_parts, is_sd2)
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
if sd_module is None:
m = re_x_proj.match(key)
if m:
sd_module = shared.sd_model.network_layer_mapping.get(m.group(1), None)
# SDXL loras seem to already have correct compvis keys, so only need to replace "lora_unet" with "diffusion_model"
if sd_module is None and "lora_unet" in key_network_without_network_parts:
key = key_network_without_network_parts.replace("lora_unet", "diffusion_model")
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
elif sd_module is None and "lora_te1_text_model" in key_network_without_network_parts:
key = key_network_without_network_parts.replace("lora_te1_text_model", "0_transformer_text_model")
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
# some SD1 Loras also have correct compvis keys
if sd_module is None:
key = key_network_without_network_parts.replace("lora_te1_text_model", "transformer_text_model")
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
if sd_module is None:
keys_failed_to_match[key_network] = key
continue
if key not in matched_networks:
matched_networks[key] = network.NetworkWeights(network_key=key_network, sd_key=key, w={}, sd_module=sd_module)
matched_networks[key].w[network_part] = weight
for key, weights in matched_networks.items():
net_module = None
for nettype in module_types:
net_module = nettype.create_module(net, weights)
if net_module is not None:
break
if net_module is None:
raise AssertionError(f"Could not find a module type (out of {', '.join([x.__class__.__name__ for x in module_types])}) that would accept those keys: {', '.join(weights.w)}")
net.modules[key] = net_module
if keys_failed_to_match:
print(f"Failed to match keys when loading network {network_on_disk.filename}: {keys_failed_to_match}")
return net
def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=None):
already_loaded = {}
for net in loaded_networks:
if net.name in names:
already_loaded[net.name] = net
loaded_networks.clear()
networks_on_disk = [available_network_aliases.get(name, None) for name in names]
if any(x is None for x in networks_on_disk):
list_available_networks()
networks_on_disk = [available_network_aliases.get(name, None) for name in names]
failed_to_load_networks = []
for i, name in enumerate(names):
net = already_loaded.get(name, None)
network_on_disk = networks_on_disk[i]
if network_on_disk is not None:
if net is None or os.path.getmtime(network_on_disk.filename) > net.mtime:
try:
net = load_network(name, network_on_disk)
except Exception as e:
errors.display(e, f"loading network {network_on_disk.filename}")
continue
net.mentioned_name = name
network_on_disk.read_hash()
if net is None:
failed_to_load_networks.append(name)
print(f"Couldn't find network with name {name}")
continue
net.te_multiplier = te_multipliers[i] if te_multipliers else 1.0
net.unet_multiplier = unet_multipliers[i] if unet_multipliers else 1.0
net.dyn_dim = dyn_dims[i] if dyn_dims else 1.0
loaded_networks.append(net)
if failed_to_load_networks:
sd_hijack.model_hijack.comments.append("Failed to find networks: " + ", ".join(failed_to_load_networks))
def network_restore_weights_from_backup(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention]):
weights_backup = getattr(self, "network_weights_backup", None)
if weights_backup is None:
return
if isinstance(self, torch.nn.MultiheadAttention):
self.in_proj_weight.copy_(weights_backup[0])
self.out_proj.weight.copy_(weights_backup[1])
else:
self.weight.copy_(weights_backup)
def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention]):
"""
Applies the currently selected set of networks to the weights of torch layer self.
If weights already have this particular set of networks applied, does nothing.
If not, restores original weights from backup and alters weights according to networks.
"""
network_layer_name = getattr(self, 'network_layer_name', None)
if network_layer_name is None:
return
current_names = getattr(self, "network_current_names", ())
wanted_names = tuple((x.name, x.te_multiplier, x.unet_multiplier, x.dyn_dim) for x in loaded_networks)
weights_backup = getattr(self, "network_weights_backup", None)
if weights_backup is None:
if isinstance(self, torch.nn.MultiheadAttention):
weights_backup = (self.in_proj_weight.to(devices.cpu, copy=True), self.out_proj.weight.to(devices.cpu, copy=True))
else:
weights_backup = self.weight.to(devices.cpu, copy=True)
self.network_weights_backup = weights_backup
if current_names != wanted_names:
network_restore_weights_from_backup(self)
for net in loaded_networks:
module = net.modules.get(network_layer_name, None)
if module is not None and hasattr(self, 'weight'):
with torch.no_grad():
updown = module.calc_updown(self.weight)
if len(self.weight.shape) == 4 and self.weight.shape[1] == 9:
# inpainting model. zero pad updown to make channel[1] 4 to 9
updown = torch.nn.functional.pad(updown, (0, 0, 0, 0, 0, 5))
self.weight += updown
continue
module_q = net.modules.get(network_layer_name + "_q_proj", None)
module_k = net.modules.get(network_layer_name + "_k_proj", None)
module_v = net.modules.get(network_layer_name + "_v_proj", None)
module_out = net.modules.get(network_layer_name + "_out_proj", None)
if isinstance(self, torch.nn.MultiheadAttention) and module_q and module_k and module_v and module_out:
with torch.no_grad():
updown_q = module_q.calc_updown(self.in_proj_weight)
updown_k = module_k.calc_updown(self.in_proj_weight)
updown_v = module_v.calc_updown(self.in_proj_weight)
updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
updown_out = module_out.calc_updown(self.out_proj.weight)
self.in_proj_weight += updown_qkv
self.out_proj.weight += updown_out
continue
if module is None:
continue
print(f'failed to calculate network weights for layer {network_layer_name}')
self.network_current_names = wanted_names
def network_forward(module, input, original_forward):
"""
Old way of applying Lora by executing operations during layer's forward.
Stacking many loras this way results in big performance degradation.
"""
if len(loaded_networks) == 0:
return original_forward(module, input)
input = devices.cond_cast_unet(input)
network_restore_weights_from_backup(module)
network_reset_cached_weight(module)
y = original_forward(module, input)
network_layer_name = getattr(module, 'network_layer_name', None)
for lora in loaded_networks:
module = lora.modules.get(network_layer_name, None)
if module is None:
continue
y = module.forward(y, input)
return y
def network_reset_cached_weight(self: Union[torch.nn.Conv2d, torch.nn.Linear]):
self.network_current_names = ()
self.network_weights_backup = None
def network_Linear_forward(self, input):
if shared.opts.lora_functional:
return network_forward(self, input, torch.nn.Linear_forward_before_network)
network_apply_weights(self)
return torch.nn.Linear_forward_before_network(self, input)
def network_Linear_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return torch.nn.Linear_load_state_dict_before_network(self, *args, **kwargs)
def network_Conv2d_forward(self, input):
if shared.opts.lora_functional:
return network_forward(self, input, torch.nn.Conv2d_forward_before_network)
network_apply_weights(self)
return torch.nn.Conv2d_forward_before_network(self, input)
def network_Conv2d_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return torch.nn.Conv2d_load_state_dict_before_network(self, *args, **kwargs)
def network_MultiheadAttention_forward(self, *args, **kwargs):
network_apply_weights(self)
return torch.nn.MultiheadAttention_forward_before_network(self, *args, **kwargs)
def network_MultiheadAttention_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return torch.nn.MultiheadAttention_load_state_dict_before_network(self, *args, **kwargs)
def list_available_networks():
available_networks.clear()
available_network_aliases.clear()
forbidden_network_aliases.clear()
available_network_hash_lookup.clear()
forbidden_network_aliases.update({"none": 1, "Addams": 1})
os.makedirs(shared.cmd_opts.lora_dir, exist_ok=True)
candidates = list(shared.walk_files(shared.cmd_opts.lora_dir, allowed_extensions=[".pt", ".ckpt", ".safetensors"]))
candidates += list(shared.walk_files(shared.cmd_opts.lyco_dir_backcompat, allowed_extensions=[".pt", ".ckpt", ".safetensors"]))
for filename in candidates:
if os.path.isdir(filename):
continue
name = os.path.splitext(os.path.basename(filename))[0]
try:
entry = network.NetworkOnDisk(name, filename)
except OSError: # should catch FileNotFoundError and PermissionError etc.
errors.report(f"Failed to load network {name} from {filename}", exc_info=True)
continue
available_networks[name] = entry
if entry.alias in available_network_aliases:
forbidden_network_aliases[entry.alias.lower()] = 1
available_network_aliases[name] = entry
available_network_aliases[entry.alias] = entry
re_network_name = re.compile(r"(.*)\s*\([0-9a-fA-F]+\)")
def infotext_pasted(infotext, params):
if "AddNet Module 1" in [x[1] for x in scripts.scripts_txt2img.infotext_fields]:
return # if the other extension is active, it will handle those fields, no need to do anything
added = []
for k in params:
if not k.startswith("AddNet Model "):
continue
num = k[13:]
if params.get("AddNet Module " + num) != "LoRA":
continue
name = params.get("AddNet Model " + num)
if name is None:
continue
m = re_network_name.match(name)
if m:
name = m.group(1)
multiplier = params.get("AddNet Weight A " + num, "1.0")
added.append(f"<lora:{name}:{multiplier}>")
if added:
params["Prompt"] += "\n" + "".join(added)
available_networks = {}
available_network_aliases = {}
loaded_networks = []
available_network_hash_lookup = {}
forbidden_network_aliases = {}
list_available_networks()


@ -4,3 +4,4 @@ from modules import paths
def preload(parser):
    parser.add_argument("--lora-dir", type=str, help="Path to directory with Lora networks.", default=os.path.join(paths.models_path, 'Lora'))
+    parser.add_argument("--lyco-dir-backcompat", type=str, help="Path to directory with LyCORIS networks (for backwards compatibility; can also use --lyco-dir).", default=os.path.join(paths.models_path, 'LyCORIS'))


@ -1,56 +1,123 @@
+import re
import torch
import gradio as gr
+from fastapi import FastAPI
-import lora
+import network
+import networks
+import lora # noqa:F401
import extra_networks_lora
import ui_extra_networks_lora
from modules import script_callbacks, ui_extra_networks, extra_networks, shared

def unload():
-    torch.nn.Linear.forward = torch.nn.Linear_forward_before_lora
-    torch.nn.Linear._load_from_state_dict = torch.nn.Linear_load_state_dict_before_lora
-    torch.nn.Conv2d.forward = torch.nn.Conv2d_forward_before_lora
-    torch.nn.Conv2d._load_from_state_dict = torch.nn.Conv2d_load_state_dict_before_lora
-    torch.nn.MultiheadAttention.forward = torch.nn.MultiheadAttention_forward_before_lora
-    torch.nn.MultiheadAttention._load_from_state_dict = torch.nn.MultiheadAttention_load_state_dict_before_lora
+    torch.nn.Linear.forward = torch.nn.Linear_forward_before_network
+    torch.nn.Linear._load_from_state_dict = torch.nn.Linear_load_state_dict_before_network
+    torch.nn.Conv2d.forward = torch.nn.Conv2d_forward_before_network
+    torch.nn.Conv2d._load_from_state_dict = torch.nn.Conv2d_load_state_dict_before_network
+    torch.nn.MultiheadAttention.forward = torch.nn.MultiheadAttention_forward_before_network
+    torch.nn.MultiheadAttention._load_from_state_dict = torch.nn.MultiheadAttention_load_state_dict_before_network

def before_ui():
    ui_extra_networks.register_page(ui_extra_networks_lora.ExtraNetworksPageLora())
-    extra_networks.register_extra_network(extra_networks_lora.ExtraNetworkLora())
+    extra_network = extra_networks_lora.ExtraNetworkLora()
+    extra_networks.register_extra_network(extra_network)
+    extra_networks.register_extra_network_alias(extra_network, "lyco")

-if not hasattr(torch.nn, 'Linear_forward_before_lora'):
-    torch.nn.Linear_forward_before_lora = torch.nn.Linear.forward
-if not hasattr(torch.nn, 'Linear_load_state_dict_before_lora'):
-    torch.nn.Linear_load_state_dict_before_lora = torch.nn.Linear._load_from_state_dict
-if not hasattr(torch.nn, 'Conv2d_forward_before_lora'):
-    torch.nn.Conv2d_forward_before_lora = torch.nn.Conv2d.forward
-if not hasattr(torch.nn, 'Conv2d_load_state_dict_before_lora'):
-    torch.nn.Conv2d_load_state_dict_before_lora = torch.nn.Conv2d._load_from_state_dict
-if not hasattr(torch.nn, 'MultiheadAttention_forward_before_lora'):
-    torch.nn.MultiheadAttention_forward_before_lora = torch.nn.MultiheadAttention.forward
-if not hasattr(torch.nn, 'MultiheadAttention_load_state_dict_before_lora'):
-    torch.nn.MultiheadAttention_load_state_dict_before_lora = torch.nn.MultiheadAttention._load_from_state_dict
+if not hasattr(torch.nn, 'Linear_forward_before_network'):
+    torch.nn.Linear_forward_before_network = torch.nn.Linear.forward
+if not hasattr(torch.nn, 'Linear_load_state_dict_before_network'):
+    torch.nn.Linear_load_state_dict_before_network = torch.nn.Linear._load_from_state_dict
+if not hasattr(torch.nn, 'Conv2d_forward_before_network'):
+    torch.nn.Conv2d_forward_before_network = torch.nn.Conv2d.forward
+if not hasattr(torch.nn, 'Conv2d_load_state_dict_before_network'):
+    torch.nn.Conv2d_load_state_dict_before_network = torch.nn.Conv2d._load_from_state_dict
+if not hasattr(torch.nn, 'MultiheadAttention_forward_before_network'):
+    torch.nn.MultiheadAttention_forward_before_network = torch.nn.MultiheadAttention.forward
+if not hasattr(torch.nn, 'MultiheadAttention_load_state_dict_before_network'):
+    torch.nn.MultiheadAttention_load_state_dict_before_network = torch.nn.MultiheadAttention._load_from_state_dict

-torch.nn.Linear.forward = lora.lora_Linear_forward
-torch.nn.Linear._load_from_state_dict = lora.lora_Linear_load_state_dict
-torch.nn.Conv2d.forward = lora.lora_Conv2d_forward
-torch.nn.Conv2d._load_from_state_dict = lora.lora_Conv2d_load_state_dict
-torch.nn.MultiheadAttention.forward = lora.lora_MultiheadAttention_forward
-torch.nn.MultiheadAttention._load_from_state_dict = lora.lora_MultiheadAttention_load_state_dict
+torch.nn.Linear.forward = networks.network_Linear_forward
+torch.nn.Linear._load_from_state_dict = networks.network_Linear_load_state_dict
+torch.nn.Conv2d.forward = networks.network_Conv2d_forward
+torch.nn.Conv2d._load_from_state_dict = networks.network_Conv2d_load_state_dict
+torch.nn.MultiheadAttention.forward = networks.network_MultiheadAttention_forward
+torch.nn.MultiheadAttention._load_from_state_dict = networks.network_MultiheadAttention_load_state_dict

-script_callbacks.on_model_loaded(lora.assign_lora_names_to_compvis_modules)
+script_callbacks.on_model_loaded(networks.assign_network_names_to_compvis_modules)
script_callbacks.on_script_unloaded(unload)
script_callbacks.on_before_ui(before_ui)
+script_callbacks.on_infotext_pasted(networks.infotext_pasted)

shared.options_templates.update(shared.options_section(('extra_networks', "Extra Networks"), {
-    "sd_lora": shared.OptionInfo("None", "Add Lora to prompt", gr.Dropdown, lambda: {"choices": [""] + [x for x in lora.available_loras]}, refresh=lora.list_available_loras),
+    "sd_lora": shared.OptionInfo("None", "Add network to prompt", gr.Dropdown, lambda: {"choices": ["None", *networks.available_networks]}, refresh=networks.list_available_networks),
+    "lora_preferred_name": shared.OptionInfo("Alias from file", "When adding to prompt, refer to Lora by", gr.Radio, {"choices": ["Alias from file", "Filename"]}),
+    "lora_add_hashes_to_infotext": shared.OptionInfo(True, "Add Lora hashes to infotext"),
+    "lora_show_all": shared.OptionInfo(False, "Always show all networks on the Lora page").info("otherwise, those detected as for incompatible version of Stable Diffusion will be hidden"),
+    "lora_hide_unknown_for_versions": shared.OptionInfo([], "Hide networks of unknown versions for model versions", gr.CheckboxGroup, {"choices": ["SD1", "SD2", "SDXL"]}),
}))
shared.options_templates.update(shared.options_section(('compatibility', "Compatibility"), {
"lora_functional": shared.OptionInfo(False, "Lora/Networks: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension"),
}))
def create_lora_json(obj: network.NetworkOnDisk):
return {
"name": obj.name,
"alias": obj.alias,
"path": obj.filename,
"metadata": obj.metadata,
}
def api_networks(_: gr.Blocks, app: FastAPI):
@app.get("/sdapi/v1/loras")
async def get_loras():
return [create_lora_json(obj) for obj in networks.available_networks.values()]
@app.post("/sdapi/v1/refresh-loras")
async def refresh_loras():
return networks.list_available_networks()
script_callbacks.on_app_started(api_networks)
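
A minimal client-side sketch for the two endpoints registered above; the host and port assume a default local webui launched with the API enabled:

import requests

resp = requests.get("http://127.0.0.1:7860/sdapi/v1/loras")
for entry in resp.json():
    print(entry["name"], entry["path"])  # fields set by create_lora_json above

requests.post("http://127.0.0.1:7860/sdapi/v1/refresh-loras")  # rescan the Lora dir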
re_lora = re.compile("<lora:([^:]+):")
def infotext_pasted(infotext, d):
hashes = d.get("Lora hashes")
if not hashes:
return
hashes = [x.strip().split(':', 1) for x in hashes.split(",")]
hashes = {x[0].strip().replace(",", ""): x[1].strip() for x in hashes}
def network_replacement(m):
alias = m.group(1)
shorthash = hashes.get(alias)
if shorthash is None:
return m.group(0)
network_on_disk = networks.available_network_hash_lookup.get(shorthash)
if network_on_disk is None:
return m.group(0)
return f'<lora:{network_on_disk.get_alias()}:'
d["Prompt"] = re.sub(re_lora, network_replacement, d["Prompt"])
script_callbacks.on_infotext_pasted(infotext_pasted)
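
The split logic above turns the "Lora hashes" infotext field back into an alias-to-shorthash mapping; a worked example with made-up hashes:

hashes = "myLora: 1a2b3c4d5e6f, other: aabbccddeeff"
pairs = [x.strip().split(':', 1) for x in hashes.split(",")]
lookup = {x[0].strip().replace(",", ""): x[1].strip() for x in pairs}
print(lookup)  # {'myLora': '1a2b3c4d5e6f', 'other': 'aabbccddeeff'}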


@ -0,0 +1,216 @@
import datetime
import html
import random
import gradio as gr
import re
from modules import ui_extra_networks_user_metadata
def is_non_comma_tagset(tags):
average_tag_length = sum(len(x) for x in tags.keys()) / len(tags)
return average_tag_length >= 16
re_word = re.compile(r"[-_\w']+")
re_comma = re.compile(r" *, *")
def build_tags(metadata):
tags = {}
for _, tags_dict in metadata.get("ss_tag_frequency", {}).items():
for tag, tag_count in tags_dict.items():
tag = tag.strip()
tags[tag] = tags.get(tag, 0) + int(tag_count)
if tags and is_non_comma_tagset(tags):
new_tags = {}
for text, text_count in tags.items():
for word in re.findall(re_word, text):
if len(word) < 3:
continue
new_tags[word] = new_tags.get(word, 0) + text_count
tags = new_tags
ordered_tags = sorted(tags.keys(), key=tags.get, reverse=True)
return [(tag, tags[tag]) for tag in ordered_tags]
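
A small worked example for the aggregation above, with a fabricated ss_tag_frequency payload shaped like kohya training metadata:

metadata = {"ss_tag_frequency": {"set1": {"1girl": 10, "smile": 4, " 1girl": 2}}}
print(build_tags(metadata))  # [('1girl', 12), ('smile', 4)] -- stripped duplicates merge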
class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor):
    def __init__(self, ui, tabname, page):
        super().__init__(ui, tabname, page)

        self.select_sd_version = None

        self.taginfo = None
        self.edit_activation_text = None
        self.slider_preferred_weight = None
        self.edit_notes = None

    def save_lora_user_metadata(self, name, desc, sd_version, activation_text, preferred_weight, notes):
        user_metadata = self.get_user_metadata(name)
        user_metadata["description"] = desc
        user_metadata["sd version"] = sd_version
        user_metadata["activation text"] = activation_text
        user_metadata["preferred weight"] = preferred_weight
        user_metadata["notes"] = notes

        self.write_user_metadata(name, user_metadata)

    def get_metadata_table(self, name):
        table = super().get_metadata_table(name)
        item = self.page.items.get(name, {})
        metadata = item.get("metadata") or {}

        keys = {
            'ss_sd_model_name': "Model:",
            'ss_clip_skip': "Clip skip:",
            'ss_network_module': "Kohya module:",
        }

        for key, label in keys.items():
            value = metadata.get(key, None)
            if value is not None and str(value) != "None":
                table.append((label, html.escape(value)))

        ss_training_started_at = metadata.get('ss_training_started_at')
        if ss_training_started_at:
            table.append(("Date trained:", datetime.datetime.utcfromtimestamp(float(ss_training_started_at)).strftime('%Y-%m-%d %H:%M')))

        ss_bucket_info = metadata.get("ss_bucket_info")
        if ss_bucket_info and "buckets" in ss_bucket_info:
            resolutions = {}
            for _, bucket in ss_bucket_info["buckets"].items():
                resolution = bucket["resolution"]
                resolution = f'{resolution[1]}x{resolution[0]}'

                resolutions[resolution] = resolutions.get(resolution, 0) + int(bucket["count"])

            resolutions_list = sorted(resolutions.keys(), key=resolutions.get, reverse=True)
            resolutions_text = html.escape(", ".join(resolutions_list[0:4]))
            if len(resolutions) > 4:
                resolutions_text += ", ..."

            resolutions_text = f"<span title='{html.escape(', '.join(resolutions_list))}'>{resolutions_text}</span>"

            table.append(('Resolutions:' if len(resolutions_list) > 1 else 'Resolution:', resolutions_text))

        image_count = 0
        for _, params in metadata.get("ss_dataset_dirs", {}).items():
            image_count += int(params.get("img_count", 0))

        if image_count:
            table.append(("Dataset size:", image_count))

        return table

    def put_values_into_components(self, name):
        user_metadata = self.get_user_metadata(name)
        values = super().put_values_into_components(name)

        item = self.page.items.get(name, {})
        metadata = item.get("metadata") or {}

        tags = build_tags(metadata)
        gradio_tags = [(tag, str(count)) for tag, count in tags[0:24]]

        return [
            *values[0:5],
            item.get("sd_version", "Unknown"),
            gr.HighlightedText.update(value=gradio_tags, visible=True if tags else False),
            user_metadata.get('activation text', ''),
            float(user_metadata.get('preferred weight', 0.0)),
            gr.update(visible=True if tags else False),
            gr.update(value=self.generate_random_prompt_from_tags(tags), visible=True if tags else False),
        ]

    def generate_random_prompt(self, name):
        item = self.page.items.get(name, {})
        metadata = item.get("metadata") or {}
        tags = build_tags(metadata)

        return self.generate_random_prompt_from_tags(tags)

    def generate_random_prompt_from_tags(self, tags):
        max_count = None
        res = []
        for tag, count in tags:
            if not max_count:
                max_count = count

            v = random.random() * max_count
            if count > v:
                res.append(tag)

        return ", ".join(sorted(res))
    def create_extra_default_items_in_left_column(self):
        # this would be a lot better as gr.Radio but I can't make it work
        self.select_sd_version = gr.Dropdown(['SD1', 'SD2', 'SDXL', 'Unknown'], value='Unknown', label='Stable Diffusion version', interactive=True)

    def create_editor(self):
        self.create_default_editor_elems()

        self.taginfo = gr.HighlightedText(label="Training dataset tags")
        self.edit_activation_text = gr.Text(label='Activation text', info="Will be added to prompt along with Lora")
        self.slider_preferred_weight = gr.Slider(label='Preferred weight', info="Set to 0 to disable", minimum=0.0, maximum=2.0, step=0.01)

        with gr.Row() as row_random_prompt:
            with gr.Column(scale=8):
                random_prompt = gr.Textbox(label='Random prompt', lines=4, max_lines=4, interactive=False)

            with gr.Column(scale=1, min_width=120):
                generate_random_prompt = gr.Button('Generate').style(full_width=True, size="lg")

        self.edit_notes = gr.TextArea(label='Notes', lines=4)

        generate_random_prompt.click(fn=self.generate_random_prompt, inputs=[self.edit_name_input], outputs=[random_prompt], show_progress=False)

        def select_tag(activation_text, evt: gr.SelectData):
            tag = evt.value[0]

            words = re.split(re_comma, activation_text)
            if tag in words:
                words = [x for x in words if x != tag and x.strip()]
                return ", ".join(words)

            return activation_text + ", " + tag if activation_text else tag

        self.taginfo.select(fn=select_tag, inputs=[self.edit_activation_text], outputs=[self.edit_activation_text], show_progress=False)

        self.create_default_buttons()

        viewed_components = [
            self.edit_name,
            self.edit_description,
            self.html_filedata,
            self.html_preview,
            self.edit_notes,
            self.select_sd_version,
            self.taginfo,
            self.edit_activation_text,
            self.slider_preferred_weight,
            row_random_prompt,
            random_prompt,
        ]

        self.button_edit\
            .click(fn=self.put_values_into_components, inputs=[self.edit_name_input], outputs=viewed_components)\
            .then(fn=lambda: gr.update(visible=True), inputs=[], outputs=[self.box])

        edited_components = [
            self.edit_description,
            self.select_sd_version,
            self.edit_activation_text,
            self.slider_preferred_weight,
            self.edit_notes,
        ]

        self.setup_save_handler(self.button_save, self.save_lora_user_metadata, edited_components)

View File

@@ -1,8 +1,11 @@
-import json
 import os
-import lora
+
+import network
+import networks
 
 from modules import shared, ui_extra_networks
+from modules.ui_extra_networks import quote_js
+from ui_edit_user_metadata import LoraUserMetadataEditor
 
 
 class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
@@ -10,22 +13,66 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
         super().__init__('Lora')
 
     def refresh(self):
-        lora.list_available_loras()
+        networks.list_available_networks()
+
+    def create_item(self, name, index=None, enable_filter=True):
+        lora_on_disk = networks.available_networks.get(name)
 
-    def list_items(self):
-        for name, lora_on_disk in lora.available_loras.items():
-            path, ext = os.path.splitext(lora_on_disk.filename)
+        path, ext = os.path.splitext(lora_on_disk.filename)
 
-            yield {
-                "name": name,
-                "filename": path,
-                "preview": self.find_preview(path),
-                "description": self.find_description(path),
-                "search_term": self.search_terms_from_path(lora_on_disk.filename),
-                "prompt": json.dumps(f"<lora:{name}:") + " + opts.extra_networks_default_multiplier + " + json.dumps(">"),
-                "local_preview": f"{path}.{shared.opts.samples_format}",
-                "metadata": json.dumps(lora_on_disk.metadata, indent=4) if lora_on_disk.metadata else None,
-            }
+        alias = lora_on_disk.get_alias()
+
+        item = {
+            "name": name,
+            "filename": lora_on_disk.filename,
+            "preview": self.find_preview(path),
+            "description": self.find_description(path),
+            "search_term": self.search_terms_from_path(lora_on_disk.filename),
+            "local_preview": f"{path}.{shared.opts.samples_format}",
+            "metadata": lora_on_disk.metadata,
+            "sort_keys": {'default': index, **self.get_sort_keys(lora_on_disk.filename)},
+            "sd_version": lora_on_disk.sd_version.name,
+        }
+
+        self.read_user_metadata(item)
+        activation_text = item["user_metadata"].get("activation text")
+        preferred_weight = item["user_metadata"].get("preferred weight", 0.0)
+        item["prompt"] = quote_js(f"<lora:{alias}:") + " + " + (str(preferred_weight) if preferred_weight else "opts.extra_networks_default_multiplier") + " + " + quote_js(">")
+
+        if activation_text:
+            item["prompt"] += " + " + quote_js(" " + activation_text)
+
+        sd_version = item["user_metadata"].get("sd version")
+        if sd_version in network.SdVersion.__members__:
+            item["sd_version"] = sd_version
+            sd_version = network.SdVersion[sd_version]
+        else:
+            sd_version = lora_on_disk.sd_version
+
+        if shared.opts.lora_show_all or not enable_filter:
+            pass
+        elif sd_version == network.SdVersion.Unknown:
+            model_version = network.SdVersion.SDXL if shared.sd_model.is_sdxl else network.SdVersion.SD2 if shared.sd_model.is_sd2 else network.SdVersion.SD1
+            if model_version.name in shared.opts.lora_hide_unknown_for_versions:
+                return None
+        elif shared.sd_model.is_sdxl and sd_version != network.SdVersion.SDXL:
+            return None
+        elif shared.sd_model.is_sd2 and sd_version != network.SdVersion.SD2:
+            return None
+        elif shared.sd_model.is_sd1 and sd_version != network.SdVersion.SD1:
+            return None
+
+        return item
+
+    def list_items(self):
+        for index, name in enumerate(networks.available_networks):
+            item = self.create_item(name, index)
+
+            if item is not None:
+                yield item
 
     def allowed_directories_for_previews(self):
-        return [shared.cmd_opts.lora_dir]
+        return [shared.cmd_opts.lora_dir, shared.cmd_opts.lyco_dir_backcompat]
+
+    def create_user_metadata_editor(self, ui, tabname):
+        return LoraUserMetadataEditor(ui, tabname, self)
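
The "prompt" value assembled above is not plain text: it is a JavaScript expression that the browser evaluates when a card is clicked. A minimal sketch of the result, assuming quote_js behaves roughly like json.dumps for simple strings (the quote_js below is a stand-in, and the alias/weight values are hypothetical):

import json

def quote_js(s):  # stand-in for modules.ui_extra_networks.quote_js
    return json.dumps(s)

alias, preferred_weight = "my_lora", 0.8
expr = quote_js(f"<lora:{alias}:") + " + " + str(preferred_weight) + " + " + quote_js(">")
print(expr)  # "<lora:my_lora:" + 0.8 + ">"  -- evaluates to <lora:my_lora:0.8> in the browser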

View File

@@ -1,15 +1,16 @@
-import os.path
 import sys
-import traceback
 
 import PIL.Image
 import numpy as np
 import torch
-from basicsr.utils.download_util import load_file_from_url
+from tqdm import tqdm
 
 import modules.upscaler
-from modules import devices, modelloader
-from scunet_model_arch import SCUNet as net
+from modules import devices, modelloader, script_callbacks, errors
+from scunet_model_arch import SCUNet
+
+from modules.modelloader import load_file_from_url
+from modules.shared import opts
 
 
 class UpscalerScuNET(modules.upscaler.Upscaler):
@@ -17,15 +18,15 @@ class UpscalerScuNET(modules.upscaler.Upscaler):
         self.name = "ScuNET"
         self.model_name = "ScuNET GAN"
         self.model_name2 = "ScuNET PSNR"
-        self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth"
-        self.model_url2 = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_psnr.pth"
+        self.model_url = "https://ghproxy.com/https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth"
+        self.model_url2 = "https://ghproxy.com/https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_psnr.pth"
         self.user_path = dirname
         super().__init__()
         model_paths = self.find_models(ext_filter=[".pth"])
         scalers = []
         add_model2 = True
         for file in model_paths:
-            if "http" in file:
+            if file.startswith("http"):
                 name = self.model_name
             else:
                 name = modelloader.friendly_name(file)
@@ -35,53 +36,109 @@ class UpscalerScuNET(modules.upscaler.Upscaler):
                 scaler_data = modules.upscaler.UpscalerData(name, file, self, 4)
                 scalers.append(scaler_data)
             except Exception:
-                print(f"Error loading ScuNET model: {file}", file=sys.stderr)
-                print(traceback.format_exc(), file=sys.stderr)
+                errors.report(f"Error loading ScuNET model: {file}", exc_info=True)
         if add_model2:
             scaler_data2 = modules.upscaler.UpscalerData(self.model_name2, self.model_url2, self)
             scalers.append(scaler_data2)
         self.scalers = scalers
 
-    def do_upscale(self, img: PIL.Image, selected_file):
-        torch.cuda.empty_cache()
+    @staticmethod
+    @torch.no_grad()
+    def tiled_inference(img, model):
+        # test the image tile by tile
+        h, w = img.shape[2:]
+        tile = opts.SCUNET_tile
+        tile_overlap = opts.SCUNET_tile_overlap
+        if tile == 0:
+            return model(img)
+
+        device = devices.get_device_for('scunet')
+        assert tile % 8 == 0, "tile size should be a multiple of window_size"
+        sf = 1
+
+        stride = tile - tile_overlap
+        h_idx_list = list(range(0, h - tile, stride)) + [h - tile]
+        w_idx_list = list(range(0, w - tile, stride)) + [w - tile]
+        E = torch.zeros(1, 3, h * sf, w * sf, dtype=img.dtype, device=device)
+        W = torch.zeros_like(E, dtype=devices.dtype, device=device)
+
+        with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="ScuNET tiles") as pbar:
+            for h_idx in h_idx_list:
+                for w_idx in w_idx_list:
+                    in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile]
+                    out_patch = model(in_patch)
+                    out_patch_mask = torch.ones_like(out_patch)
+                    E[
+                        ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
+                    ].add_(out_patch)
+                    W[
+                        ..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
+                    ].add_(out_patch_mask)
+                    pbar.update(1)
+        output = E.div_(W)
+        return output
 
+    def do_upscale(self, img: PIL.Image.Image, selected_file):
+        devices.torch_gc()
+
+        try:
             model = self.load_model(selected_file)
-        if model is None:
+        except Exception as e:
+            print(f"ScuNET: Unable to load model from {selected_file}: {e}", file=sys.stderr)
             return img
 
         device = devices.get_device_for('scunet')
-        img = np.array(img)
-        img = img[:, :, ::-1]
-        img = np.moveaxis(img, 2, 0) / 255
-        img = torch.from_numpy(img).float()
-        img = img.unsqueeze(0).to(device)
-
-        with torch.no_grad():
-            output = model(img)
-        output = output.squeeze().float().cpu().clamp_(0, 1).numpy()
-        output = 255. * np.moveaxis(output, 0, 2)
-        output = output.astype(np.uint8)
-        output = output[:, :, ::-1]
-        torch.cuda.empty_cache()
-        return PIL.Image.fromarray(output, 'RGB')
+        tile = opts.SCUNET_tile
+        h, w = img.height, img.width
+        np_img = np.array(img)
+        np_img = np_img[:, :, ::-1]  # RGB to BGR
+        np_img = np_img.transpose((2, 0, 1)) / 255  # HWC to CHW
+        torch_img = torch.from_numpy(np_img).float().unsqueeze(0).to(device)  # type: ignore
+
+        if tile > h or tile > w:
+            _img = torch.zeros(1, 3, max(h, tile), max(w, tile), dtype=torch_img.dtype, device=torch_img.device)
+            _img[:, :, :h, :w] = torch_img  # pad image
+            torch_img = _img
+
+        torch_output = self.tiled_inference(torch_img, model).squeeze(0)
+        torch_output = torch_output[:, :h * 1, :w * 1]  # remove padding, if any
+        np_output: np.ndarray = torch_output.float().cpu().clamp_(0, 1).numpy()
+        del torch_img, torch_output
+        devices.torch_gc()
+
+        output = np_output.transpose((1, 2, 0))  # CHW to HWC
+        output = output[:, :, ::-1]  # BGR to RGB
+        return PIL.Image.fromarray((output * 255).astype(np.uint8))
 
     def load_model(self, path: str):
         device = devices.get_device_for('scunet')
-        if "http" in path:
-            filename = load_file_from_url(url=self.model_url, model_dir=self.model_path, file_name="%s.pth" % self.name,
-                                          progress=True)
+        if path.startswith("http"):
+            # TODO: this doesn't use `path` at all?
+            filename = load_file_from_url(self.model_url, model_dir=self.model_download_path, file_name=f"{self.name}.pth")
         else:
            filename = path
-        if not os.path.exists(os.path.join(self.model_path, filename)) or filename is None:
-            print(f"ScuNET: Unable to load model from {filename}", file=sys.stderr)
-            return None
-
-        model = net(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64)
+        model = SCUNet(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64)
         model.load_state_dict(torch.load(filename), strict=True)
         model.eval()
-        for k, v in model.named_parameters():
+        for _, v in model.named_parameters():
             v.requires_grad = False
         model = model.to(device)
-
         return model
+
+
+def on_ui_settings():
+    import gradio as gr
+    from modules import shared
+
+    shared.opts.add_option("SCUNET_tile", shared.OptionInfo(256, "Tile size for SCUNET upscalers.", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling")).info("0 = no tiling"))
+    shared.opts.add_option("SCUNET_tile_overlap", shared.OptionInfo(8, "Tile overlap for SCUNET upscalers.", gr.Slider, {"minimum": 0, "maximum": 64, "step": 1}, section=('upscaling', "Upscaling")).info("Low values = visible seam"))
+
+
+script_callbacks.on_ui_settings(on_ui_settings)
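
The E/W bookkeeping in tiled_inference above is a standard overlap-averaging scheme: each tile's output is accumulated into E, a mask of ones into W, and E/W averages every pixel that more than one tile covers. A minimal sketch of the same idea with an identity "model" (all names below are illustrative):

import numpy as np

def tiled_identity(img, tile=4, overlap=2):
    h, w = img.shape
    stride = tile - overlap
    E = np.zeros_like(img, dtype=float)  # accumulated outputs
    W = np.zeros_like(img, dtype=float)  # how many tiles covered each pixel
    for y in list(range(0, h - tile, stride)) + [h - tile]:
        for x in list(range(0, w - tile, stride)) + [w - tile]:
            E[y:y + tile, x:x + tile] += img[y:y + tile, x:x + tile]  # "model" = identity
            W[y:y + tile, x:x + tile] += 1
    return E / W  # overlaps are averaged, so this reproduces img exactly

img = np.random.rand(10, 10)
assert np.allclose(tiled_identity(img), img)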

View File

@@ -61,7 +61,9 @@ class WMSA(nn.Module):
         Returns:
             output: tensor shape [b h w c]
         """
-        if self.type != 'W': x = torch.roll(x, shifts=(-(self.window_size // 2), -(self.window_size // 2)), dims=(1, 2))
+        if self.type != 'W':
+            x = torch.roll(x, shifts=(-(self.window_size // 2), -(self.window_size // 2)), dims=(1, 2))
+
         x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size)
         h_windows = x.size(1)
         w_windows = x.size(2)
@@ -85,8 +87,9 @@ class WMSA(nn.Module):
         output = self.linear(output)
         output = rearrange(output, 'b (w1 w2) (p1 p2) c -> b (w1 p1) (w2 p2) c', w1=h_windows, p1=self.window_size)
 
-        if self.type != 'W': output = torch.roll(output, shifts=(self.window_size // 2, self.window_size // 2),
-                                                 dims=(1, 2))
+        if self.type != 'W':
+            output = torch.roll(output, shifts=(self.window_size // 2, self.window_size // 2), dims=(1, 2))
+
         return output
 
     def relative_embedding(self):

View File

@@ -1,35 +1,35 @@
-import contextlib
-import os
+import sys
+import platform
 
 import numpy as np
 import torch
 from PIL import Image
-from basicsr.utils.download_util import load_file_from_url
 from tqdm import tqdm
 
 from modules import modelloader, devices, script_callbacks, shared
-from modules.shared import cmd_opts, opts, state
-from swinir_model_arch import SwinIR as net
-from swinir_model_arch_v2 import Swin2SR as net2
+from modules.shared import opts, state
+from swinir_model_arch import SwinIR
+from swinir_model_arch_v2 import Swin2SR
 from modules.upscaler import Upscaler, UpscalerData
 
+SWINIR_MODEL_URL = "https://ghproxy.com/https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth"
+
 device_swinir = devices.get_device_for('swinir')
 
 
 class UpscalerSwinIR(Upscaler):
     def __init__(self, dirname):
+        self._cached_model = None  # keep the model when SWIN_torch_compile is on to prevent re-compile every runs
+        self._cached_model_config = None  # to clear '_cached_model' when changing model (v1/v2) or settings
         self.name = "SwinIR"
-        self.model_url = "https://github.com/JingyunLiang/SwinIR/releases/download/v0.0" \
-                         "/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \
-                         "-L_x4_GAN.pth "
+        self.model_url = SWINIR_MODEL_URL
         self.model_name = "SwinIR 4x"
         self.user_path = dirname
         super().__init__()
         scalers = []
         model_files = self.find_models(ext_filter=[".pt", ".pth"])
         for model in model_files:
-            if "http" in model:
+            if model.startswith("http"):
                 name = self.model_name
             else:
                 name = modelloader.friendly_name(model)
@@ -38,27 +38,39 @@ class UpscalerSwinIR(Upscaler):
         self.scalers = scalers
 
     def do_upscale(self, img, model_file):
+        use_compile = hasattr(opts, 'SWIN_torch_compile') and opts.SWIN_torch_compile \
+            and int(torch.__version__.split('.')[0]) >= 2 and platform.system() != "Windows"
+        current_config = (model_file, opts.SWIN_tile)
+
+        if use_compile and self._cached_model_config == current_config:
+            model = self._cached_model
+        else:
+            self._cached_model = None
+            try:
                 model = self.load_model(model_file)
-        if model is None:
+            except Exception as e:
+                print(f"Failed loading SwinIR model {model_file}: {e}", file=sys.stderr)
                 return img
-        model = model.to(device_swinir, dtype=devices.dtype)
+            model = model.to(device_swinir, dtype=devices.dtype)
+            if use_compile:
+                model = torch.compile(model)
+                self._cached_model = model
+                self._cached_model_config = current_config
         img = upscale(img, model)
-        try:
-            torch.cuda.empty_cache()
-        except:
-            pass
+        devices.torch_gc()
         return img
 
     def load_model(self, path, scale=4):
-        if "http" in path:
-            dl_name = "%s%s" % (self.model_name.replace(" ", "_"), ".pth")
-            filename = load_file_from_url(url=path, model_dir=self.model_path, file_name=dl_name, progress=True)
+        if path.startswith("http"):
+            filename = modelloader.load_file_from_url(
+                url=path,
+                model_dir=self.model_download_path,
+                file_name=f"{self.model_name.replace(' ', '_')}.pth",
+            )
         else:
             filename = path
-        if filename is None or not os.path.exists(filename):
-            return None
         if filename.endswith(".v2.pth"):
-            model = net2(
+            model = Swin2SR(
                 upscale=scale,
                 in_chans=3,
                 img_size=64,
@@ -73,7 +85,7 @@ class UpscalerSwinIR(Upscaler):
             )
             params = None
         else:
-            model = net(
+            model = SwinIR(
                 upscale=scale,
                 in_chans=3,
                 img_size=64,
@@ -173,6 +185,8 @@ def on_ui_settings():
     shared.opts.add_option("SWIN_tile", shared.OptionInfo(192, "Tile size for all SwinIR.", gr.Slider, {"minimum": 16, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling")))
     shared.opts.add_option("SWIN_tile_overlap", shared.OptionInfo(8, "Tile overlap, in pixels for SwinIR. Low values = visible seam.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}, section=('upscaling', "Upscaling")))
+    if int(torch.__version__.split('.')[0]) >= 2 and platform.system() != "Windows":    # torch.compile() requires PyTorch 2.0 or above, and not on Windows
+        shared.opts.add_option("SWIN_torch_compile", shared.OptionInfo(False, "Use torch.compile to accelerate SwinIR.", gr.Checkbox, {"interactive": True}, section=('upscaling', "Upscaling")).info("Takes longer on first run"))
 
 script_callbacks.on_ui_settings(on_ui_settings)

View File

@@ -644,7 +644,7 @@ class SwinIR(nn.Module):
     """
 
     def __init__(self, img_size=64, patch_size=1, in_chans=3,
-                 embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6],
+                 embed_dim=96, depths=(6, 6, 6, 6), num_heads=(6, 6, 6, 6),
                  window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
                  drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
                  norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
@@ -844,7 +844,7 @@ class SwinIR(nn.Module):
         H, W = self.patches_resolution
         flops += H * W * 3 * self.embed_dim * 9
         flops += self.patch_embed.flops()
-        for i, layer in enumerate(self.layers):
+        for layer in self.layers:
             flops += layer.flops()
         flops += H * W * 3 * self.embed_dim * self.embed_dim
         flops += self.upsample.flops()

View File

@@ -74,7 +74,7 @@ class WindowAttention(nn.Module):
     """
 
     def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0.,
-                 pretrained_window_size=[0, 0]):
+                 pretrained_window_size=(0, 0)):
 
         super().__init__()
         self.dim = dim
@@ -698,7 +698,7 @@ class Swin2SR(nn.Module):
     """
 
     def __init__(self, img_size=64, patch_size=1, in_chans=3,
-                 embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6],
+                 embed_dim=96, depths=(6, 6, 6, 6), num_heads=(6, 6, 6, 6),
                  window_size=7, mlp_ratio=4., qkv_bias=True,
                  drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
                  norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
@@ -994,7 +994,7 @@ class Swin2SR(nn.Module):
         H, W = self.patches_resolution
         flops += H * W * 3 * self.embed_dim * 9
         flops += self.patch_embed.flops()
-        for i, layer in enumerate(self.layers):
+        for layer in self.layers:
             flops += layer.flops()
         flops += H * W * 3 * self.embed_dim * self.embed_dim
         flops += self.upsample.flops()

View File

@@ -0,0 +1,776 @@
onUiLoaded(async() => {
const elementIDs = {
img2imgTabs: "#mode_img2img .tab-nav",
inpaint: "#img2maskimg",
inpaintSketch: "#inpaint_sketch",
rangeGroup: "#img2img_column_size",
sketch: "#img2img_sketch"
};
const tabNameToElementId = {
"Inpaint sketch": elementIDs.inpaintSketch,
"Inpaint": elementIDs.inpaint,
"Sketch": elementIDs.sketch
};
// Helper functions
// Get active tab
function getActiveTab(elements, all = false) {
const tabs = elements.img2imgTabs.querySelectorAll("button");
if (all) return tabs;
for (let tab of tabs) {
if (tab.classList.contains("selected")) {
return tab;
}
}
}
// Get tab ID
function getTabId(elements) {
const activeTab = getActiveTab(elements);
return tabNameToElementId[activeTab.innerText];
}
// Wait until opts loaded
async function waitForOpts() {
for (;;) {
if (window.opts && Object.keys(window.opts).length) {
return window.opts;
}
await new Promise(resolve => setTimeout(resolve, 100));
}
}
// Function for defining the "Ctrl", "Shift" and "Alt" keys
function isModifierKey(event, key) {
switch (key) {
case "Ctrl":
return event.ctrlKey;
case "Shift":
return event.shiftKey;
case "Alt":
return event.altKey;
default:
return false;
}
}
// Check if hotkey is valid
function isValidHotkey(value) {
const specialKeys = ["Ctrl", "Alt", "Shift", "Disable"];
return (
(typeof value === "string" &&
value.length === 1 &&
/[a-z]/i.test(value)) ||
specialKeys.includes(value)
);
}
// Normalize hotkey
function normalizeHotkey(hotkey) {
return hotkey.length === 1 ? "Key" + hotkey.toUpperCase() : hotkey;
}
// Format hotkey for display
function formatHotkeyForDisplay(hotkey) {
return hotkey.startsWith("Key") ? hotkey.slice(3) : hotkey;
}
// Create hotkey configuration with the provided options
function createHotkeyConfig(defaultHotkeysConfig, hotkeysConfigOpts) {
const result = {}; // Resulting hotkey configuration
const usedKeys = new Set(); // Set of used hotkeys
// Iterate through defaultHotkeysConfig keys
for (const key in defaultHotkeysConfig) {
const userValue = hotkeysConfigOpts[key]; // User-provided hotkey value
const defaultValue = defaultHotkeysConfig[key]; // Default hotkey value
// Apply appropriate value for undefined, boolean, or object userValue
if (
userValue === undefined ||
typeof userValue === "boolean" ||
typeof userValue === "object" ||
userValue === "disable"
) {
result[key] =
userValue === undefined ? defaultValue : userValue;
} else if (isValidHotkey(userValue)) {
const normalizedUserValue = normalizeHotkey(userValue);
// Check for conflicting hotkeys
if (!usedKeys.has(normalizedUserValue)) {
usedKeys.add(normalizedUserValue);
result[key] = normalizedUserValue;
} else {
console.error(
`Hotkey: ${formatHotkeyForDisplay(
userValue
)} for ${key} is repeated and conflicts with another hotkey. The default hotkey is used: ${formatHotkeyForDisplay(
defaultValue
)}`
);
result[key] = defaultValue;
}
} else {
console.error(
`Hotkey: ${formatHotkeyForDisplay(
userValue
)} for ${key} is not valid. The default hotkey is used: ${formatHotkeyForDisplay(
defaultValue
)}`
);
result[key] = defaultValue;
}
}
return result;
}
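
    // Illustrative outcomes of the resolution above (assumed, not exhaustive):
    //   defaults {canvas_hotkey_zoom: "Alt"}, user {canvas_hotkey_zoom: "k"} -> {canvas_hotkey_zoom: "KeyK"}
    //   two options both set to "k" -> the second falls back to its default and an error is logged
    //   a user value of "disable" is passed through unchanged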
// Disables functions in the config object based on the provided list of function names
function disableFunctions(config, disabledFunctions) {
// Bind the hasOwnProperty method to the functionMap object to avoid errors
const hasOwnProperty =
Object.prototype.hasOwnProperty.bind(functionMap);
// Loop through the disabledFunctions array and disable the corresponding functions in the config object
disabledFunctions.forEach(funcName => {
if (hasOwnProperty(funcName)) {
const key = functionMap[funcName];
config[key] = "disable";
}
});
// Return the updated config object
return config;
}
/**
* The restoreImgRedMask function displays a red mask around an image to indicate the aspect ratio.
* If the image display property is set to 'none', the mask breaks. To fix this, the function
* temporarily sets the display property to 'block' and then hides the mask again after 300 milliseconds
* to avoid breaking the canvas. Additionally, the function adjusts the mask to work correctly on
* very long images.
*/
function restoreImgRedMask(elements) {
const mainTabId = getTabId(elements);
if (!mainTabId) return;
const mainTab = gradioApp().querySelector(mainTabId);
const img = mainTab.querySelector("img");
const imageARPreview = gradioApp().querySelector("#imageARPreview");
if (!img || !imageARPreview) return;
imageARPreview.style.transform = "";
if (parseFloat(mainTab.style.width) > 865) {
const transformString = mainTab.style.transform;
const scaleMatch = transformString.match(
/scale\(([-+]?[0-9]*\.?[0-9]+)\)/
);
let zoom = 1; // default zoom
if (scaleMatch && scaleMatch[1]) {
zoom = Number(scaleMatch[1]);
}
imageARPreview.style.transformOrigin = "0 0";
imageARPreview.style.transform = `scale(${zoom})`;
}
if (img.style.display !== "none") return;
img.style.display = "block";
setTimeout(() => {
img.style.display = "none";
}, 400);
}
const hotkeysConfigOpts = await waitForOpts();
// Default config
const defaultHotkeysConfig = {
canvas_hotkey_zoom: "Alt",
canvas_hotkey_adjust: "Ctrl",
canvas_hotkey_reset: "KeyR",
canvas_hotkey_fullscreen: "KeyS",
canvas_hotkey_move: "KeyF",
canvas_hotkey_overlap: "KeyO",
canvas_disabled_functions: [],
canvas_show_tooltip: true,
canvas_blur_prompt: false
};
const functionMap = {
"Zoom": "canvas_hotkey_zoom",
"Adjust brush size": "canvas_hotkey_adjust",
"Moving canvas": "canvas_hotkey_move",
"Fullscreen": "canvas_hotkey_fullscreen",
"Reset Zoom": "canvas_hotkey_reset",
"Overlap": "canvas_hotkey_overlap"
};
// Loading the configuration from opts
const preHotkeysConfig = createHotkeyConfig(
defaultHotkeysConfig,
hotkeysConfigOpts
);
// Disable functions that are not needed by the user
const hotkeysConfig = disableFunctions(
preHotkeysConfig,
preHotkeysConfig.canvas_disabled_functions
);
let isMoving = false;
let mouseX, mouseY;
let activeElement;
const elements = Object.fromEntries(
Object.keys(elementIDs).map(id => [
id,
gradioApp().querySelector(elementIDs[id])
])
);
const elemData = {};
// Apply functionality to the range inputs. Restore redmask and correct for long images.
const rangeInputs = elements.rangeGroup ?
Array.from(elements.rangeGroup.querySelectorAll("input")) :
[
gradioApp().querySelector("#img2img_width input[type='range']"),
gradioApp().querySelector("#img2img_height input[type='range']")
];
for (const input of rangeInputs) {
input?.addEventListener("input", () => restoreImgRedMask(elements));
}
function applyZoomAndPan(elemId) {
const targetElement = gradioApp().querySelector(elemId);
if (!targetElement) {
console.log("Element not found");
return;
}
targetElement.style.transformOrigin = "0 0";
elemData[elemId] = {
zoom: 1,
panX: 0,
panY: 0
};
let fullScreenMode = false;
// Create tooltip
function createTooltip() {
const toolTipElement =
targetElement.querySelector(".image-container");
const tooltip = document.createElement("div");
tooltip.className = "canvas-tooltip";
// Creating an item of information
const info = document.createElement("i");
info.className = "canvas-tooltip-info";
info.textContent = "";
// Create a container for the contents of the tooltip
const tooltipContent = document.createElement("div");
tooltipContent.className = "canvas-tooltip-content";
// Define an array with hotkey information and their actions
const hotkeysInfo = [
{
configKey: "canvas_hotkey_zoom",
action: "Zoom canvas",
keySuffix: " + wheel"
},
{
configKey: "canvas_hotkey_adjust",
action: "Adjust brush size",
keySuffix: " + wheel"
},
{configKey: "canvas_hotkey_reset", action: "Reset zoom"},
{
configKey: "canvas_hotkey_fullscreen",
action: "Fullscreen mode"
},
{configKey: "canvas_hotkey_move", action: "Move canvas"},
{configKey: "canvas_hotkey_overlap", action: "Overlap"}
];
// Create hotkeys array with disabled property based on the config values
const hotkeys = hotkeysInfo.map(info => {
const configValue = hotkeysConfig[info.configKey];
const key = info.keySuffix ?
`${configValue}${info.keySuffix}` :
configValue.charAt(configValue.length - 1);
return {
key,
action: info.action,
disabled: configValue === "disable"
};
});
for (const hotkey of hotkeys) {
if (hotkey.disabled) {
continue;
}
const p = document.createElement("p");
p.innerHTML = `<b>${hotkey.key}</b> - ${hotkey.action}`;
tooltipContent.appendChild(p);
}
// Add information and content elements to the tooltip element
tooltip.appendChild(info);
tooltip.appendChild(tooltipContent);
// Add a hint element to the target element
toolTipElement.appendChild(tooltip);
}
//Show tool tip if setting enable
if (hotkeysConfig.canvas_show_tooltip) {
createTooltip();
}
// During testing, the img tag turned out to interfere badly with zooming, producing white canvases. Hiding it works around the problem almost entirely and has no effect on the webui.
function fixCanvas() {
const activeTab = getActiveTab(elements).textContent.trim();
if (activeTab !== "img2img") {
const img = targetElement.querySelector(`${elemId} img`);
if (img && img.style.display !== "none") {
img.style.display = "none";
img.style.visibility = "hidden";
}
}
}
// Reset the zoom level and pan position of the target element to their initial values
function resetZoom() {
elemData[elemId] = {
zoomLevel: 1,
panX: 0,
panY: 0
};
fixCanvas();
targetElement.style.transform = `scale(${elemData[elemId].zoomLevel}) translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px)`;
const canvas = gradioApp().querySelector(
`${elemId} canvas[key="interface"]`
);
toggleOverlap("off");
fullScreenMode = false;
if (
canvas &&
parseFloat(canvas.style.width) > 865 &&
parseFloat(targetElement.style.width) > 865
) {
fitToElement();
return;
}
targetElement.style.width = "";
if (canvas) {
targetElement.style.height = canvas.style.height;
}
}
// Toggle the zIndex of the target element between two values, allowing it to overlap or be overlapped by other elements
function toggleOverlap(forced = "") {
const zIndex1 = "0";
const zIndex2 = "998";
targetElement.style.zIndex =
targetElement.style.zIndex !== zIndex2 ? zIndex2 : zIndex1;
if (forced === "off") {
targetElement.style.zIndex = zIndex1;
} else if (forced === "on") {
targetElement.style.zIndex = zIndex2;
}
}
// Adjust the brush size based on the deltaY value from a mouse wheel event
function adjustBrushSize(
elemId,
deltaY,
withoutValue = false,
percentage = 5
) {
const input =
gradioApp().querySelector(
`${elemId} input[aria-label='Brush radius']`
) ||
gradioApp().querySelector(
`${elemId} button[aria-label="Use brush"]`
);
if (input) {
input.click();
if (!withoutValue) {
const maxValue =
parseFloat(input.getAttribute("max")) || 100;
const changeAmount = maxValue * (percentage / 100);
const newValue =
parseFloat(input.value) +
(deltaY > 0 ? -changeAmount : changeAmount);
input.value = Math.min(Math.max(newValue, 0), maxValue);
input.dispatchEvent(new Event("change"));
}
}
}
// Reset zoom when uploading a new image
const fileInput = gradioApp().querySelector(
`${elemId} input[type="file"][accept="image/*"].svelte-116rqfv`
);
fileInput.addEventListener("click", resetZoom);
// Update the zoom level and pan position of the target element based on the values of the zoomLevel, panX and panY variables
function updateZoom(newZoomLevel, mouseX, mouseY) {
newZoomLevel = Math.max(0.5, Math.min(newZoomLevel, 15));
elemData[elemId].panX +=
mouseX - (mouseX * newZoomLevel) / elemData[elemId].zoomLevel;
elemData[elemId].panY +=
mouseY - (mouseY * newZoomLevel) / elemData[elemId].zoomLevel;
targetElement.style.transformOrigin = "0 0";
targetElement.style.transform = `translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px) scale(${newZoomLevel})`;
toggleOverlap("on");
return newZoomLevel;
}
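
// The pan adjustment above keeps the point under the cursor fixed while zooming:
// panX' = panX + mouseX - mouseX * newZoom / oldZoom. For example, zooming from
// 1.0 to 1.2 at mouseX = 100 shifts panX by 100 - 100 * 1.2 / 1.0 = -20px.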
// Change the zoom level based on user interaction
function changeZoomLevel(operation, e) {
if (isModifierKey(e, hotkeysConfig.canvas_hotkey_zoom)) {
e.preventDefault();
let zoomPosX, zoomPosY;
let delta = 0.2;
if (elemData[elemId].zoomLevel > 7) {
delta = 0.9;
} else if (elemData[elemId].zoomLevel > 2) {
delta = 0.6;
}
zoomPosX = e.clientX;
zoomPosY = e.clientY;
fullScreenMode = false;
elemData[elemId].zoomLevel = updateZoom(
elemData[elemId].zoomLevel +
(operation === "+" ? delta : -delta),
zoomPosX - targetElement.getBoundingClientRect().left,
zoomPosY - targetElement.getBoundingClientRect().top
);
}
}
/**
* This function fits the target element to the screen by calculating
* the required scale and offsets. It also updates the global variables
* zoomLevel, panX, and panY to reflect the new state.
*/
function fitToElement() {
//Reset Zoom
targetElement.style.transform = `translate(${0}px, ${0}px) scale(${1})`;
// Get element and screen dimensions
const elementWidth = targetElement.offsetWidth;
const elementHeight = targetElement.offsetHeight;
const parentElement = targetElement.parentElement;
const screenWidth = parentElement.clientWidth;
const screenHeight = parentElement.clientHeight;
// Get element's coordinates relative to the parent element
const elementRect = targetElement.getBoundingClientRect();
const parentRect = parentElement.getBoundingClientRect();
const elementX = elementRect.x - parentRect.x;
// Calculate scale and offsets
const scaleX = screenWidth / elementWidth;
const scaleY = screenHeight / elementHeight;
const scale = Math.min(scaleX, scaleY);
const transformOrigin =
window.getComputedStyle(targetElement).transformOrigin;
const [originX, originY] = transformOrigin.split(" ");
const originXValue = parseFloat(originX);
const originYValue = parseFloat(originY);
const offsetX =
(screenWidth - elementWidth * scale) / 2 -
originXValue * (1 - scale);
const offsetY =
(screenHeight - elementHeight * scale) / 2.5 -
originYValue * (1 - scale);
// Apply scale and offsets to the element
targetElement.style.transform = `translate(${offsetX}px, ${offsetY}px) scale(${scale})`;
// Update global variables
elemData[elemId].zoomLevel = scale;
elemData[elemId].panX = offsetX;
elemData[elemId].panY = offsetY;
fullScreenMode = false;
toggleOverlap("off");
}
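
// Illustrative: an 800x600 element inside a 400x450 parent gives scaleX = 0.5 and
// scaleY = 0.75, so scale = min(0.5, 0.75) = 0.5, and the computed offsets centre
// the scaled element within the parent.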
/**
* This function fits the target element to the screen (fullscreen mode) by
* calculating the required scale and offsets, taking the scrollbar width and
* the element's position on the page into account. It also updates the global
* variables zoomLevel, panX, and panY to reflect the new state.
*/
function fitToScreen() {
const canvas = gradioApp().querySelector(
`${elemId} canvas[key="interface"]`
);
if (!canvas) return;
if (canvas.offsetWidth > 862) {
targetElement.style.width = canvas.offsetWidth + "px";
}
if (fullScreenMode) {
resetZoom();
fullScreenMode = false;
return;
}
//Reset Zoom
targetElement.style.transform = `translate(${0}px, ${0}px) scale(${1})`;
// Get scrollbar width to right-align the image
const scrollbarWidth =
window.innerWidth - document.documentElement.clientWidth;
// Get element and screen dimensions
const elementWidth = targetElement.offsetWidth;
const elementHeight = targetElement.offsetHeight;
const screenWidth = window.innerWidth - scrollbarWidth;
const screenHeight = window.innerHeight;
// Get element's coordinates relative to the page
const elementRect = targetElement.getBoundingClientRect();
const elementY = elementRect.y;
const elementX = elementRect.x;
// Calculate scale and offsets
const scaleX = screenWidth / elementWidth;
const scaleY = screenHeight / elementHeight;
const scale = Math.min(scaleX, scaleY);
// Get the current transformOrigin
const computedStyle = window.getComputedStyle(targetElement);
const transformOrigin = computedStyle.transformOrigin;
const [originX, originY] = transformOrigin.split(" ");
const originXValue = parseFloat(originX);
const originYValue = parseFloat(originY);
// Calculate offsets with respect to the transformOrigin
const offsetX =
(screenWidth - elementWidth * scale) / 2 -
elementX -
originXValue * (1 - scale);
const offsetY =
(screenHeight - elementHeight * scale) / 2 -
elementY -
originYValue * (1 - scale);
// Apply scale and offsets to the element
targetElement.style.transform = `translate(${offsetX}px, ${offsetY}px) scale(${scale})`;
// Update global variables
elemData[elemId].zoomLevel = scale;
elemData[elemId].panX = offsetX;
elemData[elemId].panY = offsetY;
fullScreenMode = true;
toggleOverlap("on");
}
// Handle keydown events
function handleKeyDown(event) {
// Disable key locks to make pasting from the buffer work correctly
if ((event.ctrlKey && event.code === 'KeyV') || (event.ctrlKey && event.code === 'KeyC') || event.code === "F5") {
return;
}
// before activating shortcut, ensure user is not actively typing in an input field
if (!hotkeysConfig.canvas_blur_prompt) {
if (event.target.nodeName === 'TEXTAREA' || event.target.nodeName === 'INPUT') {
return;
}
}
const hotkeyActions = {
[hotkeysConfig.canvas_hotkey_reset]: resetZoom,
[hotkeysConfig.canvas_hotkey_overlap]: toggleOverlap,
[hotkeysConfig.canvas_hotkey_fullscreen]: fitToScreen
};
const action = hotkeyActions[event.code];
if (action) {
event.preventDefault();
action(event);
}
if (
isModifierKey(event, hotkeysConfig.canvas_hotkey_zoom) ||
isModifierKey(event, hotkeysConfig.canvas_hotkey_adjust)
) {
event.preventDefault();
}
}
// Get Mouse position
function getMousePosition(e) {
mouseX = e.offsetX;
mouseY = e.offsetY;
}
targetElement.addEventListener("mousemove", getMousePosition);
// Handle events only inside the targetElement
let isKeyDownHandlerAttached = false;
function handleMouseMove() {
if (!isKeyDownHandlerAttached) {
document.addEventListener("keydown", handleKeyDown);
isKeyDownHandlerAttached = true;
activeElement = elemId;
}
}
function handleMouseLeave() {
if (isKeyDownHandlerAttached) {
document.removeEventListener("keydown", handleKeyDown);
isKeyDownHandlerAttached = false;
activeElement = null;
}
}
// Add mouse event handlers
targetElement.addEventListener("mousemove", handleMouseMove);
targetElement.addEventListener("mouseleave", handleMouseLeave);
// Reset zoom when click on another tab
elements.img2imgTabs.addEventListener("click", resetZoom);
elements.img2imgTabs.addEventListener("click", () => {
// targetElement.style.width = "";
if (parseInt(targetElement.style.width) > 865) {
setTimeout(fitToElement, 0);
}
});
targetElement.addEventListener("wheel", e => {
// change zoom level
const operation = e.deltaY > 0 ? "-" : "+";
changeZoomLevel(operation, e);
// Handle brush size adjustment with ctrl key pressed
if (isModifierKey(e, hotkeysConfig.canvas_hotkey_adjust)) {
e.preventDefault();
// Increase or decrease brush size based on scroll direction
adjustBrushSize(elemId, e.deltaY);
}
});
// Handle the move event for pan functionality. Updates the panX and panY variables and applies the new transform to the target element.
function handleMoveKeyDown(e) {
// Disable key locks to make pasting from the buffer work correctly
if ((e.ctrlKey && e.code === 'KeyV') || (e.ctrlKey && e.code === 'KeyC') || e.code === "F5") {
return;
}
// before activating shortcut, ensure user is not actively typing in an input field
if (!hotkeysConfig.canvas_blur_prompt) {
if (e.target.nodeName === 'TEXTAREA' || e.target.nodeName === 'INPUT') {
return;
}
}
if (e.code === hotkeysConfig.canvas_hotkey_move) {
if (!e.ctrlKey && !e.metaKey && isKeyDownHandlerAttached) {
e.preventDefault();
document.activeElement.blur();
isMoving = true;
}
}
}
function handleMoveKeyUp(e) {
if (e.code === hotkeysConfig.canvas_hotkey_move) {
isMoving = false;
}
}
document.addEventListener("keydown", handleMoveKeyDown);
document.addEventListener("keyup", handleMoveKeyUp);
// Detect zoom level and update the pan speed.
function updatePanPosition(movementX, movementY) {
let panSpeed = 2;
if (elemData[elemId].zoomLevel > 8) {
panSpeed = 3.5;
}
elemData[elemId].panX += movementX * panSpeed;
elemData[elemId].panY += movementY * panSpeed;
// Delayed redraw of an element
requestAnimationFrame(() => {
targetElement.style.transform = `translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px) scale(${elemData[elemId].zoomLevel})`;
toggleOverlap("on");
});
}
function handleMoveByKey(e) {
if (isMoving && elemId === activeElement) {
updatePanPosition(e.movementX, e.movementY);
targetElement.style.pointerEvents = "none";
} else {
targetElement.style.pointerEvents = "auto";
}
}
// Prevents sticking to the mouse
window.onblur = function() {
isMoving = false;
};
gradioApp().addEventListener("mousemove", handleMoveByKey);
}
applyZoomAndPan(elementIDs.sketch);
applyZoomAndPan(elementIDs.inpaint);
applyZoomAndPan(elementIDs.inpaintSketch);
// Make the function global so that other extensions can take advantage of this solution
window.applyZoomAndPan = applyZoomAndPan;
});

View File

@@ -0,0 +1,14 @@
import gradio as gr
from modules import shared

shared.options_templates.update(shared.options_section(('canvas_hotkey', "Canvas Hotkeys"), {
    "canvas_hotkey_zoom": shared.OptionInfo("Alt", "Zoom canvas", gr.Radio, {"choices": ["Shift", "Ctrl", "Alt"]}).info("If you choose 'Shift' you cannot scroll horizontally, 'Alt' can cause a little trouble in Firefox"),
    "canvas_hotkey_adjust": shared.OptionInfo("Ctrl", "Adjust brush size", gr.Radio, {"choices": ["Shift", "Ctrl", "Alt"]}).info("If you choose 'Shift' you cannot scroll horizontally, 'Alt' can cause a little trouble in Firefox"),
    "canvas_hotkey_move": shared.OptionInfo("F", "Moving the canvas").info("To work correctly in Firefox, turn off 'Automatically search the page text when typing' in the browser settings"),
    "canvas_hotkey_fullscreen": shared.OptionInfo("S", "Fullscreen mode, maximizes the picture so that it fits into the screen and stretches it to its full width"),
    "canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas position"),
    "canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, needed for testing"),
    "canvas_show_tooltip": shared.OptionInfo(True, "Enable tooltip on the canvas"),
    "canvas_blur_prompt": shared.OptionInfo(False, "Take the focus off the prompt when working with a canvas"),
    "canvas_disabled_functions": shared.OptionInfo(["Overlap"], "Disable functions that you don't use", gr.CheckboxGroup, {"choices": ["Zoom", "Adjust brush size", "Moving canvas", "Fullscreen", "Reset Zoom", "Overlap"]}),
}))

View File

@@ -0,0 +1,63 @@
.canvas-tooltip-info {
position: absolute;
top: 10px;
left: 10px;
cursor: help;
background-color: rgba(0, 0, 0, 0.3);
width: 20px;
height: 20px;
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
flex-direction: column;
z-index: 100;
}
.canvas-tooltip-info::after {
content: '';
display: block;
width: 2px;
height: 7px;
background-color: white;
margin-top: 2px;
}
.canvas-tooltip-info::before {
content: '';
display: block;
width: 2px;
height: 2px;
background-color: white;
}
.canvas-tooltip-content {
display: none;
background-color: #f9f9f9;
color: #333;
border: 1px solid #ddd;
padding: 15px;
position: absolute;
top: 40px;
left: 10px;
width: 250px;
font-size: 16px;
opacity: 0;
border-radius: 8px;
box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2);
z-index: 100;
}
.canvas-tooltip:hover .canvas-tooltip-content {
display: block;
animation: fadeIn 0.5s;
opacity: 1;
}
@keyframes fadeIn {
from {opacity: 0;}
to {opacity: 1;}
}

View File

@@ -0,0 +1,48 @@
import gradio as gr

from modules import scripts, shared, ui_components, ui_settings
from modules.ui_components import FormColumn


class ExtraOptionsSection(scripts.Script):
    section = "extra_options"

    def __init__(self):
        self.comps = None
        self.setting_names = None

    def title(self):
        return "Extra options"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def ui(self, is_img2img):
        self.comps = []
        self.setting_names = []

        with gr.Blocks() as interface:
            with gr.Accordion("Options", open=False) if shared.opts.extra_options_accordion and shared.opts.extra_options else gr.Group(), gr.Row():
                for setting_name in shared.opts.extra_options:
                    with FormColumn():
                        comp = ui_settings.create_setting_component(setting_name)
                        self.comps.append(comp)
                        self.setting_names.append(setting_name)

        def get_settings_values():
            return [ui_settings.get_value_for_setting(key) for key in self.setting_names]

        interface.load(fn=get_settings_values, inputs=[], outputs=self.comps, queue=False, show_progress=False)

        return self.comps

    def before_process(self, p, *args):
        for name, value in zip(self.setting_names, args):
            if name not in p.override_settings:
                p.override_settings[name] = value


shared.options_templates.update(shared.options_section(('ui', "User interface"), {
    "extra_options": shared.OptionInfo([], "Options in main UI", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in txt2img/img2img interfaces").needs_restart(),
    "extra_options_accordion": shared.OptionInfo(False, "Place options in main UI into an accordion")
}))
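
A minimal sketch (with hypothetical option names and values) of what before_process above does: each extra-option component's value is copied into the processing object's override_settings unless something else already set it:

class P:  # stands in for the processing object
    override_settings = {"sd_vae": "anything.vae.pt"}  # already set elsewhere

setting_names = ["CLIP_stop_at_last_layers", "sd_vae"]
args = (2, "Automatic")  # values coming from the UI components

p = P()
for name, value in zip(setting_names, args):
    if name not in p.override_settings:
        p.override_settings[name] = value

print(p.override_settings)  # {'sd_vae': 'anything.vae.pt', 'CLIP_stop_at_last_layers': 2}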

View File

@@ -0,0 +1,26 @@
var isSetupForMobile = false;
function isMobile() {
for (var tab of ["txt2img", "img2img"]) {
var imageTab = gradioApp().getElementById(tab + '_results');
if (imageTab && imageTab.offsetParent && imageTab.offsetLeft == 0) {
return true;
}
}
return false;
}
function reportWindowSize() {
var currentlyMobile = isMobile();
if (currentlyMobile == isSetupForMobile) return;
isSetupForMobile = currentlyMobile;
for (var tab of ["txt2img", "img2img"]) {
var button = gradioApp().getElementById(tab + '_generate_box');
var target = gradioApp().getElementById(currentlyMobile ? tab + '_results' : tab + '_actions_column');
target.insertBefore(button, target.firstElementChild);
}
}
window.addEventListener("resize", reportWindowSize);

View File

@@ -1,103 +1,42 @@
 // Stable Diffusion WebUI - Bracket checker
-// Version 1.0
-// By Hingashi no Florin/Bwin4L
+// By Hingashi no Florin/Bwin4L & @akx
 // Counts open and closed brackets (round, square, curly) in the prompt and negative prompt text boxes in the txt2img and img2img tabs.
 // If there's a mismatch, the keyword counter turns red and if you hover on it, a tooltip tells you what's wrong.
 
-function checkBrackets(evt, textArea, counterElt) {
-  errorStringParen = '(...) - Different number of opening and closing parentheses detected.\n';
-  errorStringSquare = '[...] - Different number of opening and closing square brackets detected.\n';
-  errorStringCurly = '{...} - Different number of opening and closing curly brackets detected.\n';
-
-  openBracketRegExp = /\(/g;
-  closeBracketRegExp = /\)/g;
-  openSquareBracketRegExp = /\[/g;
-  closeSquareBracketRegExp = /\]/g;
-  openCurlyBracketRegExp = /\{/g;
-  closeCurlyBracketRegExp = /\}/g;
-
-  totalOpenBracketMatches = 0;
-  totalCloseBracketMatches = 0;
-  totalOpenSquareBracketMatches = 0;
-  totalCloseSquareBracketMatches = 0;
-  totalOpenCurlyBracketMatches = 0;
-  totalCloseCurlyBracketMatches = 0;
-
-  openBracketMatches = textArea.value.match(openBracketRegExp);
-  if(openBracketMatches) {
-    totalOpenBracketMatches = openBracketMatches.length;
-  }
-
-  closeBracketMatches = textArea.value.match(closeBracketRegExp);
-  if(closeBracketMatches) {
-    totalCloseBracketMatches = closeBracketMatches.length;
-  }
-
-  openSquareBracketMatches = textArea.value.match(openSquareBracketRegExp);
-  if(openSquareBracketMatches) {
-    totalOpenSquareBracketMatches = openSquareBracketMatches.length;
-  }
-
-  closeSquareBracketMatches = textArea.value.match(closeSquareBracketRegExp);
-  if(closeSquareBracketMatches) {
-    totalCloseSquareBracketMatches = closeSquareBracketMatches.length;
-  }
-
-  openCurlyBracketMatches = textArea.value.match(openCurlyBracketRegExp);
-  if(openCurlyBracketMatches) {
-    totalOpenCurlyBracketMatches = openCurlyBracketMatches.length;
-  }
-
-  closeCurlyBracketMatches = textArea.value.match(closeCurlyBracketRegExp);
-  if(closeCurlyBracketMatches) {
-    totalCloseCurlyBracketMatches = closeCurlyBracketMatches.length;
-  }
-
-  if(totalOpenBracketMatches != totalCloseBracketMatches) {
-    if(!counterElt.title.includes(errorStringParen)) {
-      counterElt.title += errorStringParen;
-    }
-  } else {
-    counterElt.title = counterElt.title.replace(errorStringParen, '');
-  }
-
-  if(totalOpenSquareBracketMatches != totalCloseSquareBracketMatches) {
-    if(!counterElt.title.includes(errorStringSquare)) {
-      counterElt.title += errorStringSquare;
-    }
-  } else {
-    counterElt.title = counterElt.title.replace(errorStringSquare, '');
-  }
-
-  if(totalOpenCurlyBracketMatches != totalCloseCurlyBracketMatches) {
-    if(!counterElt.title.includes(errorStringCurly)) {
-      counterElt.title += errorStringCurly;
-    }
-  } else {
-    counterElt.title = counterElt.title.replace(errorStringCurly, '');
-  }
-
-  if(counterElt.title != '') {
-    counterElt.classList.add('error');
-  } else {
-    counterElt.classList.remove('error');
-  }
-}
-
-function setupBracketChecking(id_prompt, id_counter){
-  var textarea = gradioApp().querySelector("#" + id_prompt + " > label > textarea");
-  var counter = gradioApp().getElementById(id_counter)
-  textarea.addEventListener("input", function(evt){
-    checkBrackets(evt, textarea, counter)
-  });
-}
-
-onUiLoaded(function(){
-  setupBracketChecking('txt2img_prompt', 'txt2img_token_counter')
-  setupBracketChecking('txt2img_neg_prompt', 'txt2img_negative_token_counter')
-  setupBracketChecking('img2img_prompt', 'img2img_token_counter')
-  setupBracketChecking('img2img_neg_prompt', 'img2img_negative_token_counter')
-})
+function checkBrackets(textArea, counterElt) {
+    var counts = {};
+    (textArea.value.match(/[(){}[\]]/g) || []).forEach(bracket => {
+        counts[bracket] = (counts[bracket] || 0) + 1;
+    });
+    var errors = [];
+
+    function checkPair(open, close, kind) {
+        if (counts[open] !== counts[close]) {
+            errors.push(
+                `${open}...${close} - Detected ${counts[open] || 0} opening and ${counts[close] || 0} closing ${kind}.`
+            );
+        }
+    }
+
+    checkPair('(', ')', 'round brackets');
+    checkPair('[', ']', 'square brackets');
+    checkPair('{', '}', 'curly brackets');
+
+    counterElt.title = errors.join('\n');
+    counterElt.classList.toggle('error', errors.length !== 0);
+}
+
+function setupBracketChecking(id_prompt, id_counter) {
+    var textarea = gradioApp().querySelector("#" + id_prompt + " > label > textarea");
+    var counter = gradioApp().getElementById(id_counter);
+
+    if (textarea && counter) {
+        textarea.addEventListener("input", () => checkBrackets(textarea, counter));
+    }
+}
+
+onUiLoaded(function() {
+    setupBracketChecking('txt2img_prompt', 'txt2img_token_counter');
+    setupBracketChecking('txt2img_neg_prompt', 'txt2img_negative_token_counter');
+    setupBracketChecking('img2img_prompt', 'img2img_token_counter');
+    setupBracketChecking('img2img_neg_prompt', 'img2img_negative_token_counter');
+});
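
A sketch of the rewritten counting logic in Python, for comparison (illustrative only):

import re
from collections import Counter

def bracket_errors(text):
    counts = Counter(re.findall(r"[(){}\[\]]", text))
    errors = []
    for open_, close, kind in [("(", ")", "round"), ("[", "]", "square"), ("{", "}", "curly")]:
        if counts[open_] != counts[close]:
            errors.append(f"{open_}...{close} - {counts[open_]} opening, {counts[close]} closing {kind} brackets")
    return errors

print(bracket_errors("(a [b)"))  # ['[...] - 1 opening, 0 closing square brackets']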

View File

@@ -1,15 +1,14 @@
-<div class='card' style={style} onclick={card_clicked}>
+<div class='card' style={style} onclick={card_clicked} data-name="{name}" {sort_keys}>
+    {background_image}
+    <div class="button-row">
         {metadata_button}
+        {edit_button}
+    </div>
     <div class='actions'>
         <div class='additional'>
-            <ul>
-                <a href="#" title="replace preview image with currently selected in gallery" onclick={save_card_preview}>replace preview</a>
-            </ul>
-            <span style="display:none" class='search_term'>{search_term}</span>
+            <span style="display:none" class='search_term{search_only}'>{search_term}</span>
         </div>
         <span class='name'>{name}</span>
         <span class='description'>{description}</span>
     </div>
 </div>

View File

@@ -1,10 +1,12 @@
 <div>
-        <a href="/docs">API</a>
+        <a href="{api_docs}">API</a>
          • 
-        <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">Github</a>
+        <a href="https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui">Github</a>
          • 
         <a href="https://gradio.app">Gradio</a>
          • 
+        <a href="#" onclick="showProfile('./internal/profile-startup'); return false;">Startup profile</a>
+         • 
         <a href="/" onclick="javascript:gradioApp().getElementById('settings_restart_gradio').click(); return false">Reload UI</a>
 </div>
 <br />

View File

@@ -1,7 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
<filter id='shadow' color-interpolation-filters="sRGB">
<feDropShadow flood-color="black" dx="0" dy="0" flood-opacity="0.9" stdDeviation="0.5"/>
<feDropShadow flood-color="black" dx="0" dy="0" flood-opacity="0.9" stdDeviation="0.5"/>
</filter>
<path style="filter:url(#shadow);" fill="#FFFFFF" d="M13.18 19C13.35 19.72 13.64 20.39 14.03 21H5C3.9 21 3 20.11 3 19V5C3 3.9 3.9 3 5 3H19C20.11 3 21 3.9 21 5V11.18C20.5 11.07 20 11 19.5 11C19.33 11 19.17 11 19 11.03V5H5V19H13.18M11.21 15.83L9.25 13.47L6.5 17H13.03C13.14 15.54 13.73 14.22 14.64 13.19L13.96 12.29L11.21 15.83M19 13.5V12L16.75 14.25L19 16.5V15C20.38 15 21.5 16.12 21.5 17.5C21.5 17.9 21.41 18.28 21.24 18.62L22.33 19.71C22.75 19.08 23 18.32 23 17.5C23 15.29 21.21 13.5 19 13.5M19 20C17.62 20 16.5 18.88 16.5 17.5C16.5 17.1 16.59 16.72 16.76 16.38L15.67 15.29C15.25 15.92 15 16.68 15 17.5C15 19.71 16.79 21.5 19 21.5V23L21.25 20.75L19 18.5V20Z" />
</svg>


View File

@ -4,7 +4,7 @@
#licenses pre { margin: 1em 0 2em 0;} #licenses pre { margin: 1em 0 2em 0;}
</style> </style>
<h2><a href="https://github.com/sczhou/CodeFormer/blob/master/LICENSE">CodeFormer</a></h2> <h2><a href="https://ghproxy.com/https://github.com/sczhou/CodeFormer/blob/master/LICENSE">CodeFormer</a></h2>
<small>Parts of CodeFormer code had to be copied to be compatible with GFPGAN.</small> <small>Parts of CodeFormer code had to be copied to be compatible with GFPGAN.</small>
<pre> <pre>
S-Lab License 1.0 S-Lab License 1.0
@ -45,7 +45,7 @@ please contact the contributor(s) of the work.
</pre> </pre>
<h2><a href="https://github.com/victorca25/iNNfer/blob/main/LICENSE">ESRGAN</a></h2> <h2><a href="https://ghproxy.com/https://github.com/victorca25/iNNfer/blob/main/LICENSE">ESRGAN</a></h2>
<small>Code for architecture and reading models copied.</small> <small>Code for architecture and reading models copied.</small>
<pre> <pre>
MIT License MIT License
@ -71,7 +71,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. SOFTWARE.
</pre> </pre>
<h2><a href="https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE">Real-ESRGAN</a></h2> <h2><a href="https://ghproxy.com/https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE">Real-ESRGAN</a></h2>
<small>Some code is copied to support ESRGAN models.</small> <small>Some code is copied to support ESRGAN models.</small>
<pre> <pre>
BSD 3-Clause License BSD 3-Clause License
@ -105,7 +105,7 @@ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
</pre> </pre>
<h2><a href="https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE">InvokeAI</a></h2> <h2><a href="https://ghproxy.com/https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE">InvokeAI</a></h2>
<small>Some code for compatibility with OSX is taken from lstein's repository.</small> <small>Some code for compatibility with OSX is taken from lstein's repository.</small>
<pre> <pre>
MIT License MIT License
@ -131,7 +131,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. SOFTWARE.
</pre> </pre>
<h2><a href="https://github.com/Hafiidz/latent-diffusion/blob/main/LICENSE">LDSR</a></h2> <h2><a href="https://ghproxy.com/https://github.com/Hafiidz/latent-diffusion/blob/main/LICENSE">LDSR</a></h2>
<small>Code added by contributors, most likely copied from this repository.</small> <small>Code added by contributors, most likely copied from this repository.</small>
<pre> <pre>
MIT License MIT License
@ -157,7 +157,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. SOFTWARE.
</pre> </pre>
<h2><a href="https://github.com/pharmapsychotic/clip-interrogator/blob/main/LICENSE">CLIP Interrogator</a></h2> <h2><a href="https://ghproxy.com/https://github.com/pharmapsychotic/clip-interrogator/blob/main/LICENSE">CLIP Interrogator</a></h2>
<small>Some small amounts of code borrowed and reworked.</small> <small>Some small amounts of code borrowed and reworked.</small>
<pre> <pre>
MIT License MIT License
@ -183,7 +183,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. SOFTWARE.
</pre> </pre>
<h2><a href="https://github.com/JingyunLiang/SwinIR/blob/main/LICENSE">SwinIR</a></h2> <h2><a href="https://ghproxy.com/https://github.com/JingyunLiang/SwinIR/blob/main/LICENSE">SwinIR</a></h2>
<small>Code added by contributors, most likely copied from this repository.</small> <small>Code added by contributors, most likely copied from this repository.</small>
<pre> <pre>
@ -390,7 +390,7 @@ SOFTWARE.
limitations under the License. limitations under the License.
</pre> </pre>
<h2><a href="https://github.com/AminRezaei0x443/memory-efficient-attention/blob/main/LICENSE">Memory Efficient Attention</a></h2> <h2><a href="https://ghproxy.com/https://github.com/AminRezaei0x443/memory-efficient-attention/blob/main/LICENSE">Memory Efficient Attention</a></h2>
<small>The sub-quadratic cross attention optimization uses modified code from the Memory Efficient Attention package that Alex Birch optimized for 3D tensors. This license is updated to reflect that.</small> <small>The sub-quadratic cross attention optimization uses modified code from the Memory Efficient Attention package that Alex Birch optimized for 3D tensors. This license is updated to reflect that.</small>
<pre> <pre>
MIT License MIT License
@ -417,7 +417,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. SOFTWARE.
</pre> </pre>
<h2><a href="https://github.com/huggingface/diffusers/blob/c7da8fd23359a22d0df2741688b5b4f33c26df21/LICENSE">Scaled Dot Product Attention</a></h2> <h2><a href="https://ghproxy.com/https://github.com/huggingface/diffusers/blob/c7da8fd23359a22d0df2741688b5b4f33c26df21/LICENSE">Scaled Dot Product Attention</a></h2>
<small>Some small amounts of code borrowed and reworked.</small> <small>Some small amounts of code borrowed and reworked.</small>
<pre> <pre>
Copyright 2023 The HuggingFace Team. All rights reserved. Copyright 2023 The HuggingFace Team. All rights reserved.
@ -637,7 +637,7 @@ SOFTWARE.
limitations under the License. limitations under the License.
</pre> </pre>
<h2><a href="https://github.com/explosion/curated-transformers/blob/main/LICENSE">Curated transformers</a></h2> <h2><a href="https://ghproxy.com/https://github.com/explosion/curated-transformers/blob/main/LICENSE">Curated transformers</a></h2>
<small>The MPS workaround for nn.Linear on macOS 13.2.X is based on the MPS workaround for nn.Linear created by danieldk for Curated transformers</small> <small>The MPS workaround for nn.Linear on macOS 13.2.X is based on the MPS workaround for nn.Linear created by danieldk for Curated transformers</small>
<pre> <pre>
The MIT License (MIT) The MIT License (MIT)
@ -662,3 +662,29 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. THE SOFTWARE.
</pre> </pre>
<h2><a href="https://ghproxy.com/https://github.com/madebyollin/taesd/blob/main/LICENSE">TAESD</a></h2>
<small>Tiny AutoEncoder for Stable Diffusion, used for the live previews option</small>
<pre>
MIT License
Copyright (c) 2023 Ollin Boer Bohan
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>

View File

@ -1,83 +1,78 @@
let currentWidth = null; let currentWidth = null;
let currentHeight = null; let currentHeight = null;
let arFrameTimeout = setTimeout(function(){},0); let arFrameTimeout = setTimeout(function() {}, 0);
function dimensionChange(e, is_width, is_height){ function dimensionChange(e, is_width, is_height) {
if(is_width){ if (is_width) {
currentWidth = e.target.value*1.0 currentWidth = e.target.value * 1.0;
} }
if(is_height){ if (is_height) {
currentHeight = e.target.value*1.0 currentHeight = e.target.value * 1.0;
} }
var inImg2img = gradioApp().querySelector("#tab_img2img").style.display == "block"; var inImg2img = gradioApp().querySelector("#tab_img2img").style.display == "block";
if(!inImg2img){ if (!inImg2img) {
return; return;
} }
var targetElement = null; var targetElement = null;
var tabIndex = get_tab_index('mode_img2img') var tabIndex = get_tab_index('mode_img2img');
if(tabIndex == 0){ // img2img if (tabIndex == 0) { // img2img
targetElement = gradioApp().querySelector('#img2img_image div[data-testid=image] img'); targetElement = gradioApp().querySelector('#img2img_image div[data-testid=image] img');
} else if(tabIndex == 1){ //Sketch } else if (tabIndex == 1) { //Sketch
targetElement = gradioApp().querySelector('#img2img_sketch div[data-testid=image] img'); targetElement = gradioApp().querySelector('#img2img_sketch div[data-testid=image] img');
} else if(tabIndex == 2){ // Inpaint } else if (tabIndex == 2) { // Inpaint
targetElement = gradioApp().querySelector('#img2maskimg div[data-testid=image] img'); targetElement = gradioApp().querySelector('#img2maskimg div[data-testid=image] img');
} else if(tabIndex == 3){ // Inpaint sketch } else if (tabIndex == 3) { // Inpaint sketch
targetElement = gradioApp().querySelector('#inpaint_sketch div[data-testid=image] img'); targetElement = gradioApp().querySelector('#inpaint_sketch div[data-testid=image] img');
} }
if(targetElement){ if (targetElement) {
var arPreviewRect = gradioApp().querySelector('#imageARPreview'); var arPreviewRect = gradioApp().querySelector('#imageARPreview');
if(!arPreviewRect){ if (!arPreviewRect) {
arPreviewRect = document.createElement('div') arPreviewRect = document.createElement('div');
arPreviewRect.id = "imageARPreview"; arPreviewRect.id = "imageARPreview";
gradioApp().appendChild(arPreviewRect) gradioApp().appendChild(arPreviewRect);
} }
var viewportOffset = targetElement.getBoundingClientRect(); var viewportOffset = targetElement.getBoundingClientRect();
viewportscale = Math.min( targetElement.clientWidth/targetElement.naturalWidth, targetElement.clientHeight/targetElement.naturalHeight ) var viewportscale = Math.min(targetElement.clientWidth / targetElement.naturalWidth, targetElement.clientHeight / targetElement.naturalHeight);
scaledx = targetElement.naturalWidth*viewportscale var scaledx = targetElement.naturalWidth * viewportscale;
scaledy = targetElement.naturalHeight*viewportscale var scaledy = targetElement.naturalHeight * viewportscale;
cleintRectTop = (viewportOffset.top+window.scrollY) var cleintRectTop = (viewportOffset.top + window.scrollY);
cleintRectLeft = (viewportOffset.left+window.scrollX) var cleintRectLeft = (viewportOffset.left + window.scrollX);
cleintRectCentreY = cleintRectTop + (targetElement.clientHeight/2) var cleintRectCentreY = cleintRectTop + (targetElement.clientHeight / 2);
cleintRectCentreX = cleintRectLeft + (targetElement.clientWidth/2) var cleintRectCentreX = cleintRectLeft + (targetElement.clientWidth / 2);
viewRectTop = cleintRectCentreY-(scaledy/2) var arscale = Math.min(scaledx / currentWidth, scaledy / currentHeight);
viewRectLeft = cleintRectCentreX-(scaledx/2) var arscaledx = currentWidth * arscale;
arRectWidth = scaledx var arscaledy = currentHeight * arscale;
arRectHeight = scaledy
arscale = Math.min( arRectWidth/currentWidth, arRectHeight/currentHeight ) var arRectTop = cleintRectCentreY - (arscaledy / 2);
arscaledx = currentWidth*arscale var arRectLeft = cleintRectCentreX - (arscaledx / 2);
arscaledy = currentHeight*arscale var arRectWidth = arscaledx;
var arRectHeight = arscaledy;
arRectTop = cleintRectCentreY-(arscaledy/2) arPreviewRect.style.top = arRectTop + 'px';
arRectLeft = cleintRectCentreX-(arscaledx/2) arPreviewRect.style.left = arRectLeft + 'px';
arRectWidth = arscaledx arPreviewRect.style.width = arRectWidth + 'px';
arRectHeight = arscaledy arPreviewRect.style.height = arRectHeight + 'px';
arPreviewRect.style.top = arRectTop+'px';
arPreviewRect.style.left = arRectLeft+'px';
arPreviewRect.style.width = arRectWidth+'px';
arPreviewRect.style.height = arRectHeight+'px';
clearTimeout(arFrameTimeout); clearTimeout(arFrameTimeout);
arFrameTimeout = setTimeout(function(){ arFrameTimeout = setTimeout(function() {
arPreviewRect.style.display = 'none'; arPreviewRect.style.display = 'none';
},2000); }, 2000);
arPreviewRect.style.display = 'block'; arPreviewRect.style.display = 'block';
@ -86,31 +81,33 @@ function dimensionChange(e, is_width, is_height){
} }
onUiUpdate(function(){ onAfterUiUpdate(function() {
var arPreviewRect = gradioApp().querySelector('#imageARPreview'); var arPreviewRect = gradioApp().querySelector('#imageARPreview');
if(arPreviewRect){ if (arPreviewRect) {
arPreviewRect.style.display = 'none'; arPreviewRect.style.display = 'none';
} }
var tabImg2img = gradioApp().querySelector("#tab_img2img"); var tabImg2img = gradioApp().querySelector("#tab_img2img");
if (tabImg2img) { if (tabImg2img) {
var inImg2img = tabImg2img.style.display == "block"; var inImg2img = tabImg2img.style.display == "block";
if(inImg2img){ if (inImg2img) {
let inputs = gradioApp().querySelectorAll('input'); let inputs = gradioApp().querySelectorAll('input');
inputs.forEach(function(e){ inputs.forEach(function(e) {
var is_width = e.parentElement.id == "img2img_width" var is_width = e.parentElement.id == "img2img_width";
var is_height = e.parentElement.id == "img2img_height" var is_height = e.parentElement.id == "img2img_height";
if((is_width || is_height) && !e.classList.contains('scrollwatch')){ if ((is_width || is_height) && !e.classList.contains('scrollwatch')) {
e.addEventListener('input', function(e){dimensionChange(e, is_width, is_height)} ) e.addEventListener('input', function(e) {
e.classList.add('scrollwatch') dimensionChange(e, is_width, is_height);
});
e.classList.add('scrollwatch');
} }
if(is_width){ if (is_width) {
currentWidth = e.value*1.0 currentWidth = e.value * 1.0;
} }
if(is_height){ if (is_height) {
currentHeight = e.value*1.0 currentHeight = e.value * 1.0;
} }
}) });
} }
} }
}); });
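As a worked check of the rectangle math above: the preview box is the largest rectangle with the requested width/height ratio that fits inside the image as displayed on screen. A minimal standalone sketch of the same computation (the function name and sample numbers are illustrative, not part of the file above):

// Sketch: fit the requested aspect ratio inside the on-screen image.
// Example: a 512x512 image rendered at 256x256, with 512x768 requested.
function previewRect(naturalW, naturalH, clientW, clientH, reqW, reqH) {
    var viewportscale = Math.min(clientW / naturalW, clientH / naturalH);
    var scaledx = naturalW * viewportscale; // on-screen image width
    var scaledy = naturalH * viewportscale; // on-screen image height
    var arscale = Math.min(scaledx / reqW, scaledy / reqH);
    return {width: reqW * arscale, height: reqH * arscale};
}
previewRect(512, 512, 256, 256, 512, 768); // => {width: ~170.7, height: 256}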

View File

@ -1,49 +1,48 @@
contextMenuInit = function(){ var contextMenuInit = function() {
let eventListenerApplied=false; let eventListenerApplied = false;
let menuSpecs = new Map(); let menuSpecs = new Map();
const uid = function(){ const uid = function() {
return Date.now().toString(36) + Math.random().toString(36).substr(2); return Date.now().toString(36) + Math.random().toString(36).substring(2);
} };
function showContextMenu(event,element,menuEntries){ function showContextMenu(event, element, menuEntries) {
let posx = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft; let posx = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
let posy = event.clientY + document.body.scrollTop + document.documentElement.scrollTop; let posy = event.clientY + document.body.scrollTop + document.documentElement.scrollTop;
let oldMenu = gradioApp().querySelector('#context-menu') let oldMenu = gradioApp().querySelector('#context-menu');
if(oldMenu){ if (oldMenu) {
oldMenu.remove() oldMenu.remove();
} }
let tabButton = uiCurrentTab let baseStyle = window.getComputedStyle(uiCurrentTab);
let baseStyle = window.getComputedStyle(tabButton)
const contextMenu = document.createElement('nav') const contextMenu = document.createElement('nav');
contextMenu.id = "context-menu" contextMenu.id = "context-menu";
contextMenu.style.background = baseStyle.background contextMenu.style.background = baseStyle.background;
contextMenu.style.color = baseStyle.color contextMenu.style.color = baseStyle.color;
contextMenu.style.fontFamily = baseStyle.fontFamily contextMenu.style.fontFamily = baseStyle.fontFamily;
contextMenu.style.top = posy+'px' contextMenu.style.top = posy + 'px';
contextMenu.style.left = posx+'px' contextMenu.style.left = posx + 'px';
const contextMenuList = document.createElement('ul') const contextMenuList = document.createElement('ul');
contextMenuList.className = 'context-menu-items'; contextMenuList.className = 'context-menu-items';
contextMenu.append(contextMenuList); contextMenu.append(contextMenuList);
menuEntries.forEach(function(entry){ menuEntries.forEach(function(entry) {
let contextMenuEntry = document.createElement('a') let contextMenuEntry = document.createElement('a');
contextMenuEntry.innerHTML = entry['name'] contextMenuEntry.innerHTML = entry['name'];
contextMenuEntry.addEventListener("click", function(e) { contextMenuEntry.addEventListener("click", function() {
entry['func'](); entry['func']();
}) });
contextMenuList.append(contextMenuEntry); contextMenuList.append(contextMenuEntry);
}) });
gradioApp().appendChild(contextMenu) gradioApp().appendChild(contextMenu);
let menuWidth = contextMenu.offsetWidth + 4; let menuWidth = contextMenu.offsetWidth + 4;
let menuHeight = contextMenu.offsetHeight + 4; let menuHeight = contextMenu.offsetHeight + 4;
@ -51,127 +50,127 @@ contextMenuInit = function(){
let windowWidth = window.innerWidth; let windowWidth = window.innerWidth;
let windowHeight = window.innerHeight; let windowHeight = window.innerHeight;
if ( (windowWidth - posx) < menuWidth ) { if ((windowWidth - posx) < menuWidth) {
contextMenu.style.left = windowWidth - menuWidth + "px"; contextMenu.style.left = windowWidth - menuWidth + "px";
} }
if ( (windowHeight - posy) < menuHeight ) { if ((windowHeight - posy) < menuHeight) {
contextMenu.style.top = windowHeight - menuHeight + "px"; contextMenu.style.top = windowHeight - menuHeight + "px";
} }
} }
function appendContextMenuOption(targetElementSelector,entryName,entryFunction){ function appendContextMenuOption(targetElementSelector, entryName, entryFunction) {
currentItems = menuSpecs.get(targetElementSelector) var currentItems = menuSpecs.get(targetElementSelector);
if(!currentItems){ if (!currentItems) {
currentItems = [] currentItems = [];
menuSpecs.set(targetElementSelector,currentItems); menuSpecs.set(targetElementSelector, currentItems);
} }
let newItem = {'id':targetElementSelector+'_'+uid(), let newItem = {
'name':entryName, id: targetElementSelector + '_' + uid(),
'func':entryFunction, name: entryName,
'isNew':true} func: entryFunction,
isNew: true
};
currentItems.push(newItem) currentItems.push(newItem);
return newItem['id'] return newItem['id'];
} }
function removeContextMenuOption(uid){ function removeContextMenuOption(uid) {
menuSpecs.forEach(function(v,k) { menuSpecs.forEach(function(v) {
let index = -1 let index = -1;
v.forEach(function(e,ei){if(e['id']==uid){index=ei}}) v.forEach(function(e, ei) {
if(index>=0){ if (e['id'] == uid) {
index = ei;
}
});
if (index >= 0) {
v.splice(index, 1); v.splice(index, 1);
} }
}) });
} }
function addContextMenuEventListener(){ function addContextMenuEventListener() {
if(eventListenerApplied){ if (eventListenerApplied) {
return; return;
} }
gradioApp().addEventListener("click", function(e) { gradioApp().addEventListener("click", function(e) {
let source = e.composedPath()[0] if (!e.isTrusted) {
if(source.id && source.id.indexOf('check_progress')>-1){ return;
return
} }
let oldMenu = gradioApp().querySelector('#context-menu') let oldMenu = gradioApp().querySelector('#context-menu');
if(oldMenu){ if (oldMenu) {
oldMenu.remove() oldMenu.remove();
} }
}); });
gradioApp().addEventListener("contextmenu", function(e) { gradioApp().addEventListener("contextmenu", function(e) {
let oldMenu = gradioApp().querySelector('#context-menu') let oldMenu = gradioApp().querySelector('#context-menu');
if(oldMenu){ if (oldMenu) {
oldMenu.remove() oldMenu.remove();
} }
menuSpecs.forEach(function(v,k) { menuSpecs.forEach(function(v, k) {
if(e.composedPath()[0].matches(k)){ if (e.composedPath()[0].matches(k)) {
showContextMenu(e,e.composedPath()[0],v) showContextMenu(e, e.composedPath()[0], v);
e.preventDefault() e.preventDefault();
return
} }
})
}); });
eventListenerApplied=true });
eventListenerApplied = true;
} }
return [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener] return [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener];
} };
initResponse = contextMenuInit(); var initResponse = contextMenuInit();
appendContextMenuOption = initResponse[0]; var appendContextMenuOption = initResponse[0];
removeContextMenuOption = initResponse[1]; var removeContextMenuOption = initResponse[1];
addContextMenuEventListener = initResponse[2]; var addContextMenuEventListener = initResponse[2];
(function(){ (function() {
//Start example Context Menu Items //Start example Context Menu Items
let generateOnRepeat = function(genbuttonid,interruptbuttonid){ let generateOnRepeat = function(genbuttonid, interruptbuttonid) {
let genbutton = gradioApp().querySelector(genbuttonid); let genbutton = gradioApp().querySelector(genbuttonid);
let interruptbutton = gradioApp().querySelector(interruptbuttonid); let interruptbutton = gradioApp().querySelector(interruptbuttonid);
if(!interruptbutton.offsetParent){ if (!interruptbutton.offsetParent) {
genbutton.click(); genbutton.click();
} }
clearInterval(window.generateOnRepeatInterval) clearInterval(window.generateOnRepeatInterval);
window.generateOnRepeatInterval = setInterval(function(){ window.generateOnRepeatInterval = setInterval(function() {
if(!interruptbutton.offsetParent){ if (!interruptbutton.offsetParent) {
genbutton.click(); genbutton.click();
} }
}, },
500) 500);
} };
appendContextMenuOption('#txt2img_generate','Generate forever',function(){ let generateOnRepeat_txt2img = function() {
generateOnRepeat('#txt2img_generate','#txt2img_interrupt'); generateOnRepeat('#txt2img_generate', '#txt2img_interrupt');
}) };
appendContextMenuOption('#img2img_generate','Generate forever',function(){
generateOnRepeat('#img2img_generate','#img2img_interrupt');
})
let cancelGenerateForever = function(){ let generateOnRepeat_img2img = function() {
clearInterval(window.generateOnRepeatInterval) generateOnRepeat('#img2img_generate', '#img2img_interrupt');
} };
appendContextMenuOption('#txt2img_interrupt','Cancel generate forever',cancelGenerateForever) appendContextMenuOption('#txt2img_generate', 'Generate forever', generateOnRepeat_txt2img);
appendContextMenuOption('#txt2img_generate', 'Cancel generate forever',cancelGenerateForever) appendContextMenuOption('#txt2img_interrupt', 'Generate forever', generateOnRepeat_txt2img);
appendContextMenuOption('#img2img_interrupt','Cancel generate forever',cancelGenerateForever) appendContextMenuOption('#img2img_generate', 'Generate forever', generateOnRepeat_img2img);
appendContextMenuOption('#img2img_generate', 'Cancel generate forever',cancelGenerateForever) appendContextMenuOption('#img2img_interrupt', 'Generate forever', generateOnRepeat_img2img);
let cancelGenerateForever = function() {
clearInterval(window.generateOnRepeatInterval);
};
appendContextMenuOption('#txt2img_interrupt', 'Cancel generate forever', cancelGenerateForever);
appendContextMenuOption('#txt2img_generate', 'Cancel generate forever', cancelGenerateForever);
appendContextMenuOption('#img2img_interrupt', 'Cancel generate forever', cancelGenerateForever);
appendContextMenuOption('#img2img_generate', 'Cancel generate forever', cancelGenerateForever);
appendContextMenuOption('#roll','Roll three',
function(){
let rollbutton = get_uiCurrentTabContent().querySelector('#roll');
setTimeout(function(){rollbutton.click()},100)
setTimeout(function(){rollbutton.click()},200)
setTimeout(function(){rollbutton.click()},300)
}
)
})(); })();
//End example Context Menu Items //End example Context Menu Items
onUiUpdate(function(){ onAfterUiUpdate(addContextMenuEventListener);
addContextMenuEventListener()
});
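For context, the three functions returned by contextMenuInit() form the public surface used below; a minimal usage sketch (the selector and label are made up for illustration):

// Register a custom entry, attach the listeners once, remove the entry later.
var myOptionId = appendContextMenuOption('#txt2img_generate', 'Say hello', function() {
    console.log('hello from a custom context menu entry');
});
addContextMenuEventListener(); // no-op after the first call
removeContextMenuOption(myOptionId); // id string returned by appendContextMenuOption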

View File

@ -1,11 +1,11 @@
// allows drag-dropping files into gradio image elements, and also pasting images from clipboard // allows drag-dropping files into gradio image elements, and also pasting images from clipboard
function isValidImageList( files ) { function isValidImageList(files) {
return files && files?.length === 1 && ['image/png', 'image/gif', 'image/jpeg'].includes(files[0].type); return files && files?.length === 1 && ['image/png', 'image/gif', 'image/jpeg'].includes(files[0].type);
} }
function dropReplaceImage( imgWrap, files ) { function dropReplaceImage(imgWrap, files) {
if ( ! isValidImageList( files ) ) { if (!isValidImageList(files)) {
return; return;
} }
@ -14,8 +14,8 @@ function dropReplaceImage( imgWrap, files ) {
imgWrap.querySelector('.modify-upload button + button, .touch-none + div button + button')?.click(); imgWrap.querySelector('.modify-upload button + button, .touch-none + div button + button')?.click();
const callback = () => { const callback = () => {
const fileInput = imgWrap.querySelector('input[type="file"]'); const fileInput = imgWrap.querySelector('input[type="file"]');
if ( fileInput ) { if (fileInput) {
if ( files.length === 0 ) { if (files.length === 0) {
files = new DataTransfer(); files = new DataTransfer();
files.items.add(tmpFile); files.items.add(tmpFile);
fileInput.files = files.files; fileInput.files = files.files;
@ -26,34 +26,49 @@ function dropReplaceImage( imgWrap, files ) {
} }
}; };
if ( imgWrap.closest('#pnginfo_image') ) { if (imgWrap.closest('#pnginfo_image')) {
// special treatment for PNG Info tab, wait for fetch request to finish // special treatment for PNG Info tab, wait for fetch request to finish
const oldFetch = window.fetch; const oldFetch = window.fetch;
window.fetch = async (input, options) => { window.fetch = async(input, options) => {
const response = await oldFetch(input, options); const response = await oldFetch(input, options);
if ( 'api/predict/' === input ) { if ('api/predict/' === input) {
const content = await response.text(); const content = await response.text();
window.fetch = oldFetch; window.fetch = oldFetch;
window.requestAnimationFrame( () => callback() ); window.requestAnimationFrame(() => callback());
return new Response(content, { return new Response(content, {
status: response.status, status: response.status,
statusText: response.statusText, statusText: response.statusText,
headers: response.headers headers: response.headers
}) });
} }
return response; return response;
}; };
} else { } else {
window.requestAnimationFrame( () => callback() ); window.requestAnimationFrame(() => callback());
} }
} }
function eventHasFiles(e) {
if (!e.dataTransfer || !e.dataTransfer.files) return false;
if (e.dataTransfer.files.length > 0) return true;
if (e.dataTransfer.items.length > 0 && e.dataTransfer.items[0].kind == "file") return true;
return false;
}
function dragDropTargetIsPrompt(target) {
if (target?.placeholder && target?.placeholder.indexOf("Prompt") >= 0) return true;
if (target?.parentNode?.parentNode?.className?.indexOf("prompt") > 0) return true;
return false;
}
window.document.addEventListener('dragover', e => { window.document.addEventListener('dragover', e => {
const target = e.composedPath()[0]; const target = e.composedPath()[0];
const imgWrap = target.closest('[data-testid="image"]'); if (!eventHasFiles(e)) return;
if ( !imgWrap && target.placeholder && target.placeholder.indexOf("Prompt") == -1) {
return; var targetImage = target.closest('[data-testid="image"]');
} if (!dragDropTargetIsPrompt(target) && !targetImage) return;
e.stopPropagation(); e.stopPropagation();
e.preventDefault(); e.preventDefault();
e.dataTransfer.dropEffect = 'copy'; e.dataTransfer.dropEffect = 'copy';
@ -61,28 +76,45 @@ window.document.addEventListener('dragover', e => {
window.document.addEventListener('drop', e => { window.document.addEventListener('drop', e => {
const target = e.composedPath()[0]; const target = e.composedPath()[0];
if (target.placeholder.indexOf("Prompt") == -1) { if (!eventHasFiles(e)) return;
return;
if (dragDropTargetIsPrompt(target)) {
e.stopPropagation();
e.preventDefault();
let prompt_target = get_tab_index('tabs') == 1 ? "img2img_prompt_image" : "txt2img_prompt_image";
const imgParent = gradioApp().getElementById(prompt_target);
const files = e.dataTransfer.files;
const fileInput = imgParent.querySelector('input[type="file"]');
if (fileInput) {
fileInput.files = files;
fileInput.dispatchEvent(new Event('change'));
} }
const imgWrap = target.closest('[data-testid="image"]');
if ( !imgWrap ) {
return;
} }
var targetImage = target.closest('[data-testid="image"]');
if (targetImage) {
e.stopPropagation(); e.stopPropagation();
e.preventDefault(); e.preventDefault();
const files = e.dataTransfer.files; const files = e.dataTransfer.files;
dropReplaceImage( imgWrap, files ); dropReplaceImage(targetImage, files);
return;
}
}); });
window.addEventListener('paste', e => { window.addEventListener('paste', e => {
const files = e.clipboardData.files; const files = e.clipboardData.files;
if ( ! isValidImageList( files ) ) { if (!isValidImageList(files)) {
return; return;
} }
const visibleImageFields = [...gradioApp().querySelectorAll('[data-testid="image"]')] const visibleImageFields = [...gradioApp().querySelectorAll('[data-testid="image"]')]
.filter(el => uiElementIsVisible(el)); .filter(el => uiElementIsVisible(el))
if ( ! visibleImageFields.length ) { .sort((a, b) => uiElementInSight(b) - uiElementInSight(a));
if (!visibleImageFields.length) {
return; return;
} }
@ -93,5 +125,6 @@ window.addEventListener('paste', e => {
firstFreeImageField ? firstFreeImageField ?
firstFreeImageField : firstFreeImageField :
visibleImageFields[visibleImageFields.length - 1] visibleImageFields[visibleImageFields.length - 1]
, files ); , files
);
}); });
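One detail worth noting in eventHasFiles above: while a drag is still in progress, many browsers keep dataTransfer.files empty and only expose dataTransfer.items, so the kind == "file" check is what lets the dragover handler recognize a file drag before the drop happens. A minimal sketch of the same guard on a standalone target (the element id is hypothetical):

// Accept file drags only; ignore dragged text or links.
document.getElementById('my-drop-zone')?.addEventListener('dragover', function(e) {
    var hasFiles = e.dataTransfer && e.dataTransfer.files &&
        (e.dataTransfer.files.length > 0 ||
         (e.dataTransfer.items.length > 0 && e.dataTransfer.items[0].kind == "file"));
    if (!hasFiles) return;
    e.preventDefault();
    e.dataTransfer.dropEffect = 'copy';
});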

View File

@ -1,17 +1,17 @@
function keyupEditAttention(event){ function keyupEditAttention(event) {
let target = event.originalTarget || event.composedPath()[0]; let target = event.originalTarget || event.composedPath()[0];
if (! target.matches("[id*='_toprow'] [id*='_prompt'] textarea")) return; if (!target.matches("*:is([id*='_toprow'] [id*='_prompt'], .prompt) textarea")) return;
if (! (event.metaKey || event.ctrlKey)) return; if (!(event.metaKey || event.ctrlKey)) return;
let isPlus = event.key == "ArrowUp" let isPlus = event.key == "ArrowUp";
let isMinus = event.key == "ArrowDown" let isMinus = event.key == "ArrowDown";
if (!isPlus && !isMinus) return; if (!isPlus && !isMinus) return;
let selectionStart = target.selectionStart; let selectionStart = target.selectionStart;
let selectionEnd = target.selectionEnd; let selectionEnd = target.selectionEnd;
let text = target.value; let text = target.value;
function selectCurrentParenthesisBlock(OPEN, CLOSE){ function selectCurrentParenthesisBlock(OPEN, CLOSE) {
if (selectionStart !== selectionEnd) return false; if (selectionStart !== selectionEnd) return false;
// Find opening parenthesis around current cursor // Find opening parenthesis around current cursor
@ -44,27 +44,45 @@ function keyupEditAttention(event){
return true; return true;
} }
// If the user hasn't selected anything, let's select their current parenthesis block function selectCurrentWord() {
if(! selectCurrentParenthesisBlock('<', '>')){ if (selectionStart !== selectionEnd) return false;
selectCurrentParenthesisBlock('(', ')') const delimiters = opts.keyedit_delimiters + " \r\n\t";
// seek backward to find the beginning
while (!delimiters.includes(text[selectionStart - 1]) && selectionStart > 0) {
selectionStart--;
}
// seek forward to find end
while (!delimiters.includes(text[selectionEnd]) && selectionEnd < text.length) {
selectionEnd++;
}
target.setSelectionRange(selectionStart, selectionEnd);
return true;
}
// If the user hasn't selected anything, let's select their current parenthesis block or word
if (!selectCurrentParenthesisBlock('<', '>') && !selectCurrentParenthesisBlock('(', ')')) {
selectCurrentWord();
} }
event.preventDefault(); event.preventDefault();
closeCharacter = ')' var closeCharacter = ')';
delta = opts.keyedit_precision_attention var delta = opts.keyedit_precision_attention;
if (selectionStart > 0 && text[selectionStart - 1] == '<'){ if (selectionStart > 0 && text[selectionStart - 1] == '<') {
closeCharacter = '>' closeCharacter = '>';
delta = opts.keyedit_precision_extra delta = opts.keyedit_precision_extra;
} else if (selectionStart == 0 || text[selectionStart - 1] != "(") { } else if (selectionStart == 0 || text[selectionStart - 1] != "(") {
// do not include spaces at the end // do not include spaces at the end
while(selectionEnd > selectionStart && text[selectionEnd-1] == ' '){ while (selectionEnd > selectionStart && text[selectionEnd - 1] == ' ') {
selectionEnd -= 1; selectionEnd -= 1;
} }
if(selectionStart == selectionEnd){ if (selectionStart == selectionEnd) {
return return;
} }
text = text.slice(0, selectionStart) + "(" + text.slice(selectionStart, selectionEnd) + ":1.0)" + text.slice(selectionEnd); text = text.slice(0, selectionStart) + "(" + text.slice(selectionStart, selectionEnd) + ":1.0)" + text.slice(selectionEnd);
@ -73,22 +91,29 @@ function keyupEditAttention(event){
selectionEnd += 1; selectionEnd += 1;
} }
end = text.slice(selectionEnd + 1).indexOf(closeCharacter) + 1; var end = text.slice(selectionEnd + 1).indexOf(closeCharacter) + 1;
weight = parseFloat(text.slice(selectionEnd + 1, selectionEnd + 1 + end)); var weight = parseFloat(text.slice(selectionEnd + 1, selectionEnd + 1 + end));
if (isNaN(weight)) return; if (isNaN(weight)) return;
weight += isPlus ? delta : -delta; weight += isPlus ? delta : -delta;
weight = parseFloat(weight.toPrecision(12)); weight = parseFloat(weight.toPrecision(12));
if(String(weight).length == 1) weight += ".0" if (String(weight).length == 1) weight += ".0";
text = text.slice(0, selectionEnd + 1) + weight + text.slice(selectionEnd + 1 + end - 1); if (closeCharacter == ')' && weight == 1) {
var endParenPos = text.substring(selectionEnd).indexOf(')');
text = text.slice(0, selectionStart - 1) + text.slice(selectionStart, selectionEnd) + text.slice(selectionEnd + endParenPos + 1);
selectionStart--;
selectionEnd--;
} else {
text = text.slice(0, selectionEnd + 1) + weight + text.slice(selectionEnd + end);
}
target.focus(); target.focus();
target.value = text; target.value = text;
target.selectionStart = selectionStart; target.selectionStart = selectionStart;
target.selectionEnd = selectionEnd; target.selectionEnd = selectionEnd;
updateInput(target) updateInput(target);
} }
addEventListener('keydown', (event) => { addEventListener('keydown', (event) => {
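To make the weight arithmetic above concrete, assuming opts.keyedit_precision_attention and opts.keyedit_precision_extra are both 0.1 (assumed values; both are configurable):

// "a cat on a mat", Ctrl+Up with cursor in "cat"
//   -> the word is wrapped first, then stepped: "a (cat:1.1) on a mat"
// "a (cat:1.1) on a mat", Ctrl+Down with cursor in "cat"
//   -> weight returns to exactly 1, so the new branch strips the parens: "a cat on a mat"
// "<lora:style:0.8>", Ctrl+Up with cursor inside the tag
//   -> '<' detected, extra-network precision used: "<lora:style:0.9>"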

javascript/edit-order.js Normal file (41 lines)
View File

@ -0,0 +1,41 @@
/* alt+left/right moves text in prompt */
function keyupEditOrder(event) {
if (!opts.keyedit_move) return;
let target = event.originalTarget || event.composedPath()[0];
if (!target.matches("*:is([id*='_toprow'] [id*='_prompt'], .prompt) textarea")) return;
if (!event.altKey) return;
let isLeft = event.key == "ArrowLeft";
let isRight = event.key == "ArrowRight";
if (!isLeft && !isRight) return;
event.preventDefault();
let selectionStart = target.selectionStart;
let selectionEnd = target.selectionEnd;
let text = target.value;
let items = text.split(",");
let indexStart = (text.slice(0, selectionStart).match(/,/g) || []).length;
let indexEnd = (text.slice(0, selectionEnd).match(/,/g) || []).length;
let range = indexEnd - indexStart + 1;
if (isLeft && indexStart > 0) {
items.splice(indexStart - 1, 0, ...items.splice(indexStart, range));
target.value = items.join();
target.selectionStart = items.slice(0, indexStart - 1).join().length + (indexStart == 1 ? 0 : 1);
target.selectionEnd = items.slice(0, indexEnd).join().length;
} else if (isRight && indexEnd < items.length - 1) {
items.splice(indexStart + 1, 0, ...items.splice(indexStart, range));
target.value = items.join();
target.selectionStart = items.slice(0, indexStart + 1).join().length + 1;
target.selectionEnd = items.slice(0, indexEnd + 2).join().length;
}
event.preventDefault();
updateInput(target);
}
addEventListener('keydown', (event) => {
keyupEditOrder(event);
});
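A concrete trace of the splice logic above, using a comma-separated prompt with no spaces for clarity:

// "a,b,c", cursor inside "b", Alt+Left   -> "b,a,c"
// "a,b,c", cursor inside "b", Alt+Right  -> "a,c,b"
// A selection spanning "b,c" gives range == 2, so both items move as one block:
// "a,b,c", "b,c" selected,  Alt+Left     -> "b,c,a"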

View File

@ -1,49 +1,92 @@
function extensions_apply(_, _, disable_all){ function extensions_apply(_disabled_list, _update_list, disable_all) {
var disable = [] var disable = [];
var update = [] var update = [];
gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x){ gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x) {
if(x.name.startsWith("enable_") && ! x.checked) if (x.name.startsWith("enable_") && !x.checked) {
disable.push(x.name.substr(7)) disable.push(x.name.substring(7));
}
if(x.name.startsWith("update_") && x.checked) if (x.name.startsWith("update_") && x.checked) {
update.push(x.name.substr(7)) update.push(x.name.substring(7));
}) }
});
restart_reload() restart_reload();
return [JSON.stringify(disable), JSON.stringify(update), disable_all] return [JSON.stringify(disable), JSON.stringify(update), disable_all];
} }
function extensions_check(_, _){ function extensions_check() {
var disable = [] var disable = [];
gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x){ gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x) {
if(x.name.startsWith("enable_") && ! x.checked) if (x.name.startsWith("enable_") && !x.checked) {
disable.push(x.name.substr(7)) disable.push(x.name.substring(7));
}) }
});
gradioApp().querySelectorAll('#extensions .extension_status').forEach(function(x){ gradioApp().querySelectorAll('#extensions .extension_status').forEach(function(x) {
x.innerHTML = "Loading..." x.innerHTML = "Loading...";
}) });
var id = randomId() var id = randomId();
requestProgress(id, gradioApp().getElementById('extensions_installed_top'), null, function(){ requestProgress(id, gradioApp().getElementById('extensions_installed_top'), null, function() {
}) });
return [id, JSON.stringify(disable)] return [id, JSON.stringify(disable)];
} }
function install_extension_from_index(button, url){ function install_extension_from_index(button, url) {
button.disabled = "disabled" button.disabled = "disabled";
button.value = "Installing..." button.value = "Installing...";
textarea = gradioApp().querySelector('#extension_to_install textarea') var textarea = gradioApp().querySelector('#extension_to_install textarea');
textarea.value = url textarea.value = url;
updateInput(textarea) updateInput(textarea);
gradioApp().querySelector('#install_extension_button').click() gradioApp().querySelector('#install_extension_button').click();
}
function config_state_confirm_restore(_, config_state_name, config_restore_type) {
if (config_state_name == "Current") {
return [false, config_state_name, config_restore_type];
}
let restored = "";
if (config_restore_type == "extensions") {
restored = "all saved extension versions";
} else if (config_restore_type == "webui") {
restored = "the webui version";
} else {
restored = "the webui version and all saved extension versions";
}
let confirmed = confirm("Are you sure you want to restore from this state?\nThis will reset " + restored + ".");
if (confirmed) {
restart_reload();
gradioApp().querySelectorAll('#extensions .extension_status').forEach(function(x) {
x.innerHTML = "Loading...";
});
}
return [confirmed, config_state_name, config_restore_type];
}
function toggle_all_extensions(event) {
gradioApp().querySelectorAll('#extensions .extension_toggle').forEach(function(checkbox_el) {
checkbox_el.checked = event.target.checked;
});
}
function toggle_extension() {
let all_extensions_toggled = true;
for (const checkbox_el of gradioApp().querySelectorAll('#extensions .extension_toggle')) {
if (!checkbox_el.checked) {
all_extensions_toggled = false;
break;
}
}
gradioApp().querySelector('#extensions .all_extensions_toggle').checked = all_extensions_toggled;
} }
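Note the return-value pattern in config_state_confirm_restore and extensions_apply above: these appear to be wired in as Gradio _js callbacks, which receive the current input values as arguments and whose returned array replaces those inputs before the Python handler runs; returning [false, ...] is therefore how a cancelled confirm() vetoes the restore. A schematic of that assumed contract (all names hypothetical):

// Assumed Python-side wiring:
//   button.click(fn=do_restore, _js="confirm_sketch", inputs=[...], outputs=[...])
function confirm_sketch(confirmed, name, restore_type) {
    // whatever array is returned here becomes the inputs the Python fn receives
    return [confirm("Restore " + name + "?"), name, restore_type];
}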

View File

@ -1,129 +1,219 @@
function setupExtraNetworksForTab(tabname) {
gradioApp().querySelector('#' + tabname + '_extra_tabs').classList.add('extra-networks');
function setupExtraNetworksForTab(tabname){ var tabs = gradioApp().querySelector('#' + tabname + '_extra_tabs > div');
gradioApp().querySelector('#'+tabname+'_extra_tabs').classList.add('extra-networks') var search = gradioApp().querySelector('#' + tabname + '_extra_search textarea');
var sort = gradioApp().getElementById(tabname + '_extra_sort');
var sortOrder = gradioApp().getElementById(tabname + '_extra_sortorder');
var refresh = gradioApp().getElementById(tabname + '_extra_refresh');
var tabs = gradioApp().querySelector('#'+tabname+'_extra_tabs > div') search.classList.add('search');
var search = gradioApp().querySelector('#'+tabname+'_extra_search textarea') sort.classList.add('sort');
var refresh = gradioApp().getElementById(tabname+'_extra_refresh') sortOrder.classList.add('sortorder');
sort.dataset.sortkey = 'sortDefault';
tabs.appendChild(search);
tabs.appendChild(sort);
tabs.appendChild(sortOrder);
tabs.appendChild(refresh);
search.classList.add('search') var applyFilter = function() {
tabs.appendChild(search) var searchTerm = search.value.toLowerCase();
tabs.appendChild(refresh)
search.addEventListener("input", function(evt){ gradioApp().querySelectorAll('#' + tabname + '_extra_tabs div.card').forEach(function(elem) {
searchTerm = search.value.toLowerCase() var searchOnly = elem.querySelector('.search_only');
var text = elem.querySelector('.name').textContent.toLowerCase() + " " + elem.querySelector('.search_term').textContent.toLowerCase();
gradioApp().querySelectorAll('#'+tabname+'_extra_tabs div.card').forEach(function(elem){ var visible = text.indexOf(searchTerm) != -1;
text = elem.querySelector('.name').textContent.toLowerCase() + " " + elem.querySelector('.search_term').textContent.toLowerCase()
elem.style.display = text.indexOf(searchTerm) == -1 ? "none" : ""
})
});
}
var activePromptTextarea = {}; if (searchOnly && searchTerm.length < 4) {
visible = false;
function setupExtraNetworks(){
setupExtraNetworksForTab('txt2img')
setupExtraNetworksForTab('img2img')
function registerPrompt(tabname, id){
var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
if (! activePromptTextarea[tabname]){
activePromptTextarea[tabname] = textarea
} }
textarea.addEventListener("focus", function(){ elem.style.display = visible ? "" : "none";
});
};
var applySort = function() {
var reverse = sortOrder.classList.contains("sortReverse");
var sortKey = sort.querySelector("input").value.toLowerCase().replace("sort", "").replaceAll(" ", "_").replace(/_+$/, "").trim();
sortKey = sortKey ? "sort" + sortKey.charAt(0).toUpperCase() + sortKey.slice(1) : "";
var sortKeyStore = sortKey ? sortKey + (reverse ? "Reverse" : "") : "";
if (!sortKey || sortKeyStore == sort.dataset.sortkey) {
return;
}
sort.dataset.sortkey = sortKeyStore;
var cards = gradioApp().querySelectorAll('#' + tabname + '_extra_tabs div.card');
cards.forEach(function(card) {
card.originalParentElement = card.parentElement;
});
var sortedCards = Array.from(cards);
sortedCards.sort(function(cardA, cardB) {
var a = cardA.dataset[sortKey];
var b = cardB.dataset[sortKey];
if (!isNaN(a) && !isNaN(b)) {
return parseInt(a) - parseInt(b);
}
return (a < b ? -1 : (a > b ? 1 : 0));
});
if (reverse) {
sortedCards.reverse();
}
cards.forEach(function(card) {
card.remove();
});
sortedCards.forEach(function(card) {
card.originalParentElement.appendChild(card);
});
};
search.addEventListener("input", applyFilter);
applyFilter();
["change", "blur", "click"].forEach(function(evt) {
sort.querySelector("input").addEventListener(evt, applySort);
});
sortOrder.addEventListener("click", function() {
sortOrder.classList.toggle("sortReverse");
applySort();
});
extraNetworksApplyFilter[tabname] = applyFilter;
}
function applyExtraNetworkFilter(tabname) {
setTimeout(extraNetworksApplyFilter[tabname], 1);
}
var extraNetworksApplyFilter = {};
var activePromptTextarea = {};
function setupExtraNetworks() {
setupExtraNetworksForTab('txt2img');
setupExtraNetworksForTab('img2img');
function registerPrompt(tabname, id) {
var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
if (!activePromptTextarea[tabname]) {
activePromptTextarea[tabname] = textarea;
}
textarea.addEventListener("focus", function() {
activePromptTextarea[tabname] = textarea; activePromptTextarea[tabname] = textarea;
}); });
} }
registerPrompt('txt2img', 'txt2img_prompt') registerPrompt('txt2img', 'txt2img_prompt');
registerPrompt('txt2img', 'txt2img_neg_prompt') registerPrompt('txt2img', 'txt2img_neg_prompt');
registerPrompt('img2img', 'img2img_prompt') registerPrompt('img2img', 'img2img_prompt');
registerPrompt('img2img', 'img2img_neg_prompt') registerPrompt('img2img', 'img2img_neg_prompt');
} }
onUiLoaded(setupExtraNetworks) onUiLoaded(setupExtraNetworks);
var re_extranet = /<([^:]+:[^:]+):[\d\.]+>/; var re_extranet = /<([^:]+:[^:]+):[\d.]+>(.*)/;
var re_extranet_g = /\s+<([^:]+:[^:]+):[\d\.]+>/g; var re_extranet_g = /\s+<([^:]+:[^:]+):[\d.]+>/g;
function tryToRemoveExtraNetworkFromPrompt(textarea, text){ function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
var m = text.match(re_extranet) var m = text.match(re_extranet);
if(! m) return false var replaced = false;
var newTextareaText;
var partToSearch = m[1] if (m) {
var replaced = false var extraTextAfterNet = m[2];
var newTextareaText = textarea.value.replaceAll(re_extranet_g, function(found, index){ var partToSearch = m[1];
var foundAtPosition = -1;
newTextareaText = textarea.value.replaceAll(re_extranet_g, function(found, net, pos) {
m = found.match(re_extranet); m = found.match(re_extranet);
if(m[1] == partToSearch){ if (m[1] == partToSearch) {
replaced = true; replaced = true;
return "" foundAtPosition = pos;
return "";
} }
return found; return found;
}) });
if(replaced){ if (foundAtPosition >= 0 && newTextareaText.substr(foundAtPosition, extraTextAfterNet.length) == extraTextAfterNet) {
textarea.value = newTextareaText newTextareaText = newTextareaText.substr(0, foundAtPosition) + newTextareaText.substr(foundAtPosition + extraTextAfterNet.length);
}
} else {
newTextareaText = textarea.value.replaceAll(new RegExp(text, "g"), function(found) {
if (found == text) {
replaced = true;
return "";
}
return found;
});
}
if (replaced) {
textarea.value = newTextareaText;
return true; return true;
} }
return false return false;
} }
function cardClicked(tabname, textToAdd, allowNegativePrompt){ function cardClicked(tabname, textToAdd, allowNegativePrompt) {
var textarea = allowNegativePrompt ? activePromptTextarea[tabname] : gradioApp().querySelector("#" + tabname + "_prompt > label > textarea") var textarea = allowNegativePrompt ? activePromptTextarea[tabname] : gradioApp().querySelector("#" + tabname + "_prompt > label > textarea");
if(! tryToRemoveExtraNetworkFromPrompt(textarea, textToAdd)){ if (!tryToRemoveExtraNetworkFromPrompt(textarea, textToAdd)) {
textarea.value = textarea.value + opts.extra_networks_add_text_separator + textToAdd textarea.value = textarea.value + opts.extra_networks_add_text_separator + textToAdd;
} }
updateInput(textarea) updateInput(textarea);
} }
function saveCardPreview(event, tabname, filename){ function saveCardPreview(event, tabname, filename) {
var textarea = gradioApp().querySelector("#" + tabname + '_preview_filename > label > textarea') var textarea = gradioApp().querySelector("#" + tabname + '_preview_filename > label > textarea');
var button = gradioApp().getElementById(tabname + '_save_preview') var button = gradioApp().getElementById(tabname + '_save_preview');
textarea.value = filename textarea.value = filename;
updateInput(textarea) updateInput(textarea);
button.click() button.click();
event.stopPropagation() event.stopPropagation();
event.preventDefault() event.preventDefault();
} }
function extraNetworksSearchButton(tabs_id, event){ function extraNetworksSearchButton(tabs_id, event) {
searchTextarea = gradioApp().querySelector("#" + tabs_id + ' > div > textarea') var searchTextarea = gradioApp().querySelector("#" + tabs_id + ' > div > textarea');
button = event.target var button = event.target;
text = button.classList.contains("search-all") ? "" : button.textContent.trim() var text = button.classList.contains("search-all") ? "" : button.textContent.trim();
searchTextarea.value = text searchTextarea.value = text;
updateInput(searchTextarea) updateInput(searchTextarea);
} }
var globalPopup = null; var globalPopup = null;
var globalPopupInner = null; var globalPopupInner = null;
function popup(contents){ function closePopup() {
if(! globalPopup){ if (!globalPopup) return;
globalPopup = document.createElement('div')
globalPopup.onclick = function(){ globalPopup.style.display = "none"; }; globalPopup.style.display = "none";
}
function popup(contents) {
if (!globalPopup) {
globalPopup = document.createElement('div');
globalPopup.onclick = closePopup;
globalPopup.classList.add('global-popup'); globalPopup.classList.add('global-popup');
var close = document.createElement('div') var close = document.createElement('div');
close.classList.add('global-popup-close'); close.classList.add('global-popup-close');
close.onclick = function(){ globalPopup.style.display = "none"; }; close.onclick = closePopup;
close.title = "Close"; close.title = "Close";
globalPopup.appendChild(close) globalPopup.appendChild(close);
globalPopupInner = document.createElement('div') globalPopupInner = document.createElement('div');
globalPopupInner.onclick = function(event){ event.stopPropagation(); return false; }; globalPopupInner.onclick = function(event) {
event.stopPropagation(); return false;
};
globalPopupInner.classList.add('global-popup-inner'); globalPopupInner.classList.add('global-popup-inner');
globalPopup.appendChild(globalPopupInner) globalPopup.appendChild(globalPopupInner);
gradioApp().appendChild(globalPopup); gradioApp().querySelector('.main').appendChild(globalPopup);
} }
globalPopupInner.innerHTML = ''; globalPopupInner.innerHTML = '';
@ -132,31 +222,33 @@ function popup(contents){
globalPopup.style.display = "flex"; globalPopup.style.display = "flex";
} }
function extraNetworksShowMetadata(text){ function extraNetworksShowMetadata(text) {
elem = document.createElement('pre') var elem = document.createElement('pre');
elem.classList.add('popup-metadata'); elem.classList.add('popup-metadata');
elem.textContent = text; elem.textContent = text;
popup(elem); popup(elem);
} }
function requestGet(url, data, handler, errorHandler){ function requestGet(url, data, handler, errorHandler) {
var xhr = new XMLHttpRequest(); var xhr = new XMLHttpRequest();
var args = Object.keys(data).map(function(k){ return encodeURIComponent(k) + '=' + encodeURIComponent(data[k]) }).join('&') var args = Object.keys(data).map(function(k) {
return encodeURIComponent(k) + '=' + encodeURIComponent(data[k]);
}).join('&');
xhr.open("GET", url + "?" + args, true); xhr.open("GET", url + "?" + args, true);
xhr.onreadystatechange = function () { xhr.onreadystatechange = function() {
if (xhr.readyState === 4) { if (xhr.readyState === 4) {
if (xhr.status === 200) { if (xhr.status === 200) {
try { try {
var js = JSON.parse(xhr.responseText); var js = JSON.parse(xhr.responseText);
handler(js) handler(js);
} catch (error) { } catch (error) {
console.error(error); console.error(error);
errorHandler() errorHandler();
} }
} else{ } else {
errorHandler() errorHandler();
} }
} }
}; };
@ -164,16 +256,58 @@ function requestGet(url, data, handler, errorHandler){
xhr.send(js); xhr.send(js);
} }
function extraNetworksRequestMetadata(event, extraPage, cardName){ function extraNetworksRequestMetadata(event, extraPage, cardName) {
showError = function(){ extraNetworksShowMetadata("there was an error getting metadata"); } var showError = function() {
extraNetworksShowMetadata("there was an error getting metadata");
};
requestGet("./sd_extra_networks/metadata", {"page": extraPage, "item": cardName}, function(data){ requestGet("./sd_extra_networks/metadata", {page: extraPage, item: cardName}, function(data) {
if(data && data.metadata){ if (data && data.metadata) {
extraNetworksShowMetadata(data.metadata) extraNetworksShowMetadata(data.metadata);
} else{ } else {
showError() showError();
} }
}, showError) }, showError);
event.stopPropagation() event.stopPropagation();
}
var extraPageUserMetadataEditors = {};
function extraNetworksEditUserMetadata(event, tabname, extraPage, cardName) {
var id = tabname + '_' + extraPage + '_edit_user_metadata';
var editor = extraPageUserMetadataEditors[id];
if (!editor) {
editor = {};
editor.page = gradioApp().getElementById(id);
editor.nameTextarea = gradioApp().querySelector("#" + id + "_name" + ' textarea');
editor.button = gradioApp().querySelector("#" + id + "_button");
extraPageUserMetadataEditors[id] = editor;
}
editor.nameTextarea.value = cardName;
updateInput(editor.nameTextarea);
editor.button.click();
popup(editor.page);
event.stopPropagation();
}
function extraNetworksRefreshSingleCard(page, tabname, name) {
requestGet("./sd_extra_networks/get-single-card", {page: page, tabname: tabname, name: name}, function(data) {
if (data && data.html) {
var card = gradioApp().querySelector('.card[data-name=' + JSON.stringify(name) + ']'); // likely using the wrong stringify function
var newDiv = document.createElement('DIV');
newDiv.innerHTML = data.html;
var newCard = newDiv.firstElementChild;
newCard.style = '';
card.parentElement.insertBefore(newCard, card);
card.parentElement.removeChild(card);
}
});
} }
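To illustrate the regexes above: re_extranet now captures both the "kind:name" pair and whatever text follows the tag, which is what lets a second click on a card remove not just the tag but the separator text it was inserted with. A small trace (prompt contents are examples):

// var re_extranet = /<([^:]+:[^:]+):[\d.]+>(.*)/;
// "<lora:catstyle:0.8>, detailed".match(re_extranet)
//   -> m[1] == "lora:catstyle", m[2] == ", detailed"
// With textarea.value == "a cat <lora:catstyle:0.8>, detailed":
// tryToRemoveExtraNetworkFromPrompt(textarea, "<lora:catstyle:0.8>, detailed")
//   removes the tag, then the matching trailing text -> "a cat"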

View File

@ -1,33 +1,35 @@
// attaches listeners to the txt2img and img2img galleries to update displayed generation param text when the image changes // attaches listeners to the txt2img and img2img galleries to update displayed generation param text when the image changes
let txt2img_gallery, img2img_gallery, modal = undefined; let txt2img_gallery, img2img_gallery, modal = undefined;
onUiUpdate(function(){ onAfterUiUpdate(function() {
if (!txt2img_gallery) { if (!txt2img_gallery) {
txt2img_gallery = attachGalleryListeners("txt2img") txt2img_gallery = attachGalleryListeners("txt2img");
} }
if (!img2img_gallery) { if (!img2img_gallery) {
img2img_gallery = attachGalleryListeners("img2img") img2img_gallery = attachGalleryListeners("img2img");
} }
if (!modal) { if (!modal) {
modal = gradioApp().getElementById('lightboxModal') modal = gradioApp().getElementById('lightboxModal');
modalObserver.observe(modal, { attributes : true, attributeFilter : ['style'] }); modalObserver.observe(modal, {attributes: true, attributeFilter: ['style']});
} }
}); });
let modalObserver = new MutationObserver(function(mutations) { let modalObserver = new MutationObserver(function(mutations) {
mutations.forEach(function(mutationRecord) { mutations.forEach(function(mutationRecord) {
let selectedTab = gradioApp().querySelector('#tabs div button.bg-white')?.innerText let selectedTab = gradioApp().querySelector('#tabs div button.selected')?.innerText;
if (mutationRecord.target.style.display === 'none' && selectedTab === 'txt2img' || selectedTab === 'img2img') if (mutationRecord.target.style.display === 'none' && (selectedTab === 'txt2img' || selectedTab === 'img2img')) {
gradioApp().getElementById(selectedTab+"_generation_info_button").click() gradioApp().getElementById(selectedTab + "_generation_info_button")?.click();
}
}); });
}); });
function attachGalleryListeners(tab_name) { function attachGalleryListeners(tab_name) {
gallery = gradioApp().querySelector('#'+tab_name+'_gallery') var gallery = gradioApp().querySelector('#' + tab_name + '_gallery');
gallery?.addEventListener('click', () => gradioApp().getElementById(tab_name+"_generation_info_button").click()); gallery?.addEventListener('click', () => gradioApp().getElementById(tab_name + "_generation_info_button").click());
gallery?.addEventListener('keydown', (e) => { gallery?.addEventListener('keydown', (e) => {
if (e.keyCode == 37 || e.keyCode == 39) // left or right arrow if (e.keyCode == 37 || e.keyCode == 39) { // left or right arrow
gradioApp().getElementById(tab_name+"_generation_info_button").click() gradioApp().getElementById(tab_name + "_generation_info_button").click();
}
}); });
return gallery; return gallery;
} }

View File

@ -1,6 +1,6 @@
// mouseover tooltips for various UI elements // mouseover tooltips for various UI elements
titles = { var titles = {
"Sampling steps": "How many times to improve the generated image iteratively; higher values take longer; very low values can produce bad results", "Sampling steps": "How many times to improve the generated image iteratively; higher values take longer; very low values can produce bad results",
"Sampling method": "Which algorithm to use to produce the image", "Sampling method": "Which algorithm to use to produce the image",
"GFPGAN": "Restore low quality faces using GFPGAN neural network", "GFPGAN": "Restore low quality faces using GFPGAN neural network",
@ -9,12 +9,13 @@ titles = {
"UniPC": "Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models", "UniPC": "Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models",
"DPM adaptive": "Ignores step count - uses a number of steps determined by the CFG and resolution", "DPM adaptive": "Ignores step count - uses a number of steps determined by the CFG and resolution",
"\u{1F4D0}": "Auto detect size from img2img",
"Batch count": "How many batches of images to create (has no impact on generation performance or VRAM usage)", "Batch count": "How many batches of images to create (has no impact on generation performance or VRAM usage)",
"Batch size": "How many image to create in a single batch (increases generation performance at cost of higher VRAM usage)", "Batch size": "How many image to create in a single batch (increases generation performance at cost of higher VRAM usage)",
"CFG Scale": "Classifier Free Guidance Scale - how strongly the image should conform to prompt - lower values produce more creative results", "CFG Scale": "Classifier Free Guidance Scale - how strongly the image should conform to prompt - lower values produce more creative results",
"Seed": "A value that determines the output of random number generator - if you create an image with same parameters and seed as another image, you'll get the same result", "Seed": "A value that determines the output of random number generator - if you create an image with same parameters and seed as another image, you'll get the same result",
"\u{1f3b2}\ufe0f": "Set seed to -1, which will cause a new random number to be used every time", "\u{1f3b2}\ufe0f": "Set seed to -1, which will cause a new random number to be used every time",
"\u267b\ufe0f": "Reuse seed from last generation, mostly useful if it was randomed", "\u267b\ufe0f": "Reuse seed from last generation, mostly useful if it was randomized",
"\u2199\ufe0f": "Read generation parameters from prompt or last generation if prompt is empty into user interface.", "\u2199\ufe0f": "Read generation parameters from prompt or last generation if prompt is empty into user interface.",
"\u{1f4c2}": "Open images output directory", "\u{1f4c2}": "Open images output directory",
"\u{1f4be}": "Save style", "\u{1f4be}": "Save style",
@ -22,6 +23,7 @@ titles = {
"\u{1f4cb}": "Apply selected styles to current prompt", "\u{1f4cb}": "Apply selected styles to current prompt",
"\u{1f4d2}": "Paste available values into the field", "\u{1f4d2}": "Paste available values into the field",
"\u{1f3b4}": "Show/hide extra networks", "\u{1f3b4}": "Show/hide extra networks",
"\u{1f300}": "Restore progress",
"Inpaint a part of image": "Draw a mask over an image, and the script will regenerate the masked area with content according to prompt", "Inpaint a part of image": "Draw a mask over an image, and the script will regenerate the masked area with content according to prompt",
"SD upscale": "Upscale image normally, split result into tiles, improve each tile using img2img, merge whole image back", "SD upscale": "Upscale image normally, split result into tiles, improve each tile using img2img, merge whole image back",
@ -65,8 +67,8 @@ titles = {
"Interrogate": "Reconstruct prompt from existing image and put it into the prompt field.", "Interrogate": "Reconstruct prompt from existing image and put it into the prompt field.",
"Images filename pattern": "Use following tags to define how filenames for images are chosen: [steps], [cfg], [prompt_hash], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [model_name], [prompt_words], [date], [datetime], [datetime<Format>], [datetime<Format><Time Zone>], [job_timestamp]; leave empty for default.", "Images filename pattern": "Use tags like [seed] and [date] to define how filenames for images are chosen. Leave empty for default.",
"Directory name pattern": "Use following tags to define how subdirectories for images and grids are chosen: [steps], [cfg],[prompt_hash], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [model_name], [prompt_words], [date], [datetime], [datetime<Format>], [datetime<Format><Time Zone>], [job_timestamp]; leave empty for default.", "Directory name pattern": "Use tags like [seed] and [date] to define how subdirectories for images and grids are chosen. Leave empty for default.",
"Max prompt words": "Set the maximum number of words to be used in the [prompt_words] option; ATTENTION: If the words are too long, they may exceed the maximum length of the file path that the system can handle", "Max prompt words": "Set the maximum number of words to be used in the [prompt_words] option; ATTENTION: If the words are too long, they may exceed the maximum length of the file path that the system can handle",
"Loopback": "Performs img2img processing multiple times. Output images are used as input for the next loop.", "Loopback": "Performs img2img processing multiple times. Output images are used as input for the next loop.",
@ -82,10 +84,7 @@ titles = {
"Checkpoint name": "Loads weights from checkpoint before making images. You can either use hash or a part of filename (as seen in settings) for checkpoint name. Recommended to use with Y axis for less switching.", "Checkpoint name": "Loads weights from checkpoint before making images. You can either use hash or a part of filename (as seen in settings) for checkpoint name. Recommended to use with Y axis for less switching.",
"Inpainting conditioning mask strength": "Only applies to inpainting models. Determines how strongly to mask off the original image for inpainting and img2img. 1.0 means fully masked, which is the default behaviour. 0.0 means a fully unmasked conditioning. Lower values will help preserve the overall composition of the image, but will struggle with large changes.", "Inpainting conditioning mask strength": "Only applies to inpainting models. Determines how strongly to mask off the original image for inpainting and img2img. 1.0 means fully masked, which is the default behaviour. 0.0 means a fully unmasked conditioning. Lower values will help preserve the overall composition of the image, but will struggle with large changes.",
"vram": "Torch active: Peak amount of VRAM used by Torch during generation, excluding cached data.\nTorch reserved: Peak amount of VRAM allocated by Torch, including all active and cached data.\nSys VRAM: Peak amount of VRAM allocation across all applications / total GPU VRAM (peak utilization%).",
"Eta noise seed delta": "If this values is non-zero, it will be added to seed and used to initialize RNG for noises when using samplers with Eta. You can use this to produce even more variation of images, or you can use this to match images of other software if you know what you are doing.", "Eta noise seed delta": "If this values is non-zero, it will be added to seed and used to initialize RNG for noises when using samplers with Eta. You can use this to produce even more variation of images, or you can use this to match images of other software if you know what you are doing.",
"Do not add watermark to images": "If this option is enabled, watermark will not be added to created images. Warning: if you do not add watermark, you may be behaving in an unethical manner.",
"Filename word regex": "This regular expression will be used extract words from filename, and they will be joined using the option below into label text used for training. Leave empty to keep filename text as it is.", "Filename word regex": "This regular expression will be used extract words from filename, and they will be joined using the option below into label text used for training. Leave empty to keep filename text as it is.",
"Filename join string": "This string will be used to join split words into a single line if the option above is enabled.", "Filename join string": "This string will be used to join split words into a single line if the option above is enabled.",
@ -109,39 +108,85 @@ titles = {
"Upscale by": "Adjusts the size of the image by multiplying the original width and height by the selected value. Ignored if either Resize width to or Resize height to are non-zero.", "Upscale by": "Adjusts the size of the image by multiplying the original width and height by the selected value. Ignored if either Resize width to or Resize height to are non-zero.",
"Resize width to": "Resizes image to this width. If 0, width is inferred from either of two nearby sliders.", "Resize width to": "Resizes image to this width. If 0, width is inferred from either of two nearby sliders.",
"Resize height to": "Resizes image to this height. If 0, height is inferred from either of two nearby sliders.", "Resize height to": "Resizes image to this height. If 0, height is inferred from either of two nearby sliders.",
"Multiplier for extra networks": "When adding extra network such as Hypernetwork or Lora to prompt, use this multiplier for it.",
"Discard weights with matching name": "Regular expression; if weights's name matches it, the weights is not written to the resulting checkpoint. Use ^model_ema to discard EMA weights.", "Discard weights with matching name": "Regular expression; if weights's name matches it, the weights is not written to the resulting checkpoint. Use ^model_ema to discard EMA weights.",
"Extra networks tab order": "Comma-separated list of tab names; tabs listed here will appear in the extra networks UI first and in order lsited." "Extra networks tab order": "Comma-separated list of tab names; tabs listed here will appear in the extra networks UI first and in order listed.",
} "Negative Guidance minimum sigma": "Skip negative prompt for steps where image is already mostly denoised; the higher this value, the more skips there will be; provides increased performance in exchange for minor quality reduction."
};
-onUiUpdate(function(){
-    gradioApp().querySelectorAll('span, button, select, p').forEach(function(span){
-        tooltip = titles[span.textContent];
-        if(!tooltip){
-            tooltip = titles[span.value];
-        }
-        if(!tooltip){
-            for (const c of span.classList) {
-                if (c in titles) {
-                    tooltip = titles[c];
-                    break;
-                }
-            }
-        }
-        if(tooltip){
-            span.title = tooltip;
-        }
-    })
-    gradioApp().querySelectorAll('select').forEach(function(select){
-        if (select.onchange != null) return;
-        select.onchange = function(){
-            select.title = titles[select.value] || "";
-        }
-    })
-})
+function updateTooltip(element) {
+    if (element.title) return; // already has a title
+
+    let text = element.textContent;
+    let tooltip = localization[titles[text]] || titles[text];
+
+    if (!tooltip) {
+        let value = element.value;
+        if (value) tooltip = localization[titles[value]] || titles[value];
+    }
+
+    if (!tooltip) {
+        // Gradio dropdown options have `data-value`.
+        let dataValue = element.dataset.value;
+        if (dataValue) tooltip = localization[titles[dataValue]] || titles[dataValue];
+    }
+
+    if (!tooltip) {
+        for (const c of element.classList) {
+            if (c in titles) {
+                tooltip = localization[titles[c]] || titles[c];
+                break;
+            }
+        }
+    }
+
+    if (tooltip) {
+        element.title = tooltip;
+    }
+}
+
+// Nodes to check for adding tooltips.
+const tooltipCheckNodes = new Set();
+// Timer for debouncing tooltip check.
+let tooltipCheckTimer = null;
+
+function processTooltipCheckNodes() {
+    for (const node of tooltipCheckNodes) {
+        updateTooltip(node);
+    }
+    tooltipCheckNodes.clear();
+}
+
+onUiUpdate(function(mutationRecords) {
+    for (const record of mutationRecords) {
+        if (record.type === "childList" && record.target.classList.contains("options")) {
+            // This smells like a Gradio dropdown menu having changed,
+            // so let's enqueue an update for the input element that shows the current value.
+            let wrap = record.target.parentNode;
+            let input = wrap?.querySelector("input");
+            if (input) {
+                input.title = ""; // So we'll even have a chance to update it.
+                tooltipCheckNodes.add(input);
+            }
+        }
+        for (const node of record.addedNodes) {
+            if (node.nodeType === Node.ELEMENT_NODE && !node.classList.contains("hide")) {
+                if (!node.title) {
+                    if (
+                        node.tagName === "SPAN" ||
+                        node.tagName === "BUTTON" ||
+                        node.tagName === "P" ||
+                        node.tagName === "INPUT" ||
+                        (node.tagName === "LI" && node.classList.contains("item")) // Gradio dropdown item
+                    ) {
+                        tooltipCheckNodes.add(node);
+                    }
+                }
+                node.querySelectorAll('span, button, p').forEach(n => tooltipCheckNodes.add(n));
+            }
+        }
+    }
+    if (tooltipCheckNodes.size) {
+        clearTimeout(tooltipCheckTimer);
+        tooltipCheckTimer = setTimeout(processTooltipCheckNodes, 1000);
+    }
+});
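The new mutation handler batches candidate nodes into a Set and defers the expensive tooltip pass with a classic clear-and-reset debounce. A minimal sketch of the same pattern in isolation (names here are illustrative, not from the diff):

// Generic debounce: repeated calls within `delay` collapse into one run.
function debounce(fn, delay) {
    let timer = null;
    return function(...args) {
        clearTimeout(timer);
        timer = setTimeout(() => fn.apply(this, args), delay);
    };
}

// e.g. const queueTooltipPass = debounce(processTooltipCheckNodes, 1000);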

View File

@ -1,22 +1,18 @@
-function setInactive(elem, inactive){
-    if(inactive){
-        elem.classList.add('inactive')
-    } else{
-        elem.classList.remove('inactive')
-    }
-}
-
-function onCalcResolutionHires(enable, width, height, hr_scale, hr_resize_x, hr_resize_y){
-    hrUpscaleBy = gradioApp().getElementById('txt2img_hr_scale')
-    hrResizeX = gradioApp().getElementById('txt2img_hr_resize_x')
-    hrResizeY = gradioApp().getElementById('txt2img_hr_resize_y')
-
-    gradioApp().getElementById('txt2img_hires_fix_row2').style.display = opts.use_old_hires_fix_width_height ? "none" : ""
-
-    setInactive(hrUpscaleBy, opts.use_old_hires_fix_width_height || hr_resize_x > 0 || hr_resize_y > 0)
-    setInactive(hrResizeX, opts.use_old_hires_fix_width_height || hr_resize_x == 0)
-    setInactive(hrResizeY, opts.use_old_hires_fix_width_height || hr_resize_y == 0)
-
-    return [enable, width, height, hr_scale, hr_resize_x, hr_resize_y]
-}
+function onCalcResolutionHires(enable, width, height, hr_scale, hr_resize_x, hr_resize_y) {
+    function setInactive(elem, inactive) {
+        elem.classList.toggle('inactive', !!inactive);
+    }
+
+    var hrUpscaleBy = gradioApp().getElementById('txt2img_hr_scale');
+    var hrResizeX = gradioApp().getElementById('txt2img_hr_resize_x');
+    var hrResizeY = gradioApp().getElementById('txt2img_hr_resize_y');
+
+    gradioApp().getElementById('txt2img_hires_fix_row2').style.display = opts.use_old_hires_fix_width_height ? "none" : "";
+
+    setInactive(hrUpscaleBy, opts.use_old_hires_fix_width_height || hr_resize_x > 0 || hr_resize_y > 0);
+    setInactive(hrResizeX, opts.use_old_hires_fix_width_height || hr_resize_x == 0);
+    setInactive(hrResizeY, opts.use_old_hires_fix_width_height || hr_resize_y == 0);
+
+    return [enable, width, height, hr_scale, hr_resize_x, hr_resize_y];
+}
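The rewrite folds the add/remove branches into classList.toggle with its optional second (force) argument; a quick illustration of the idiom:

// classList.toggle(name, force): with force=true the class is always added,
// with force=false it is always removed -- no if/else needed.
const elem = document.createElement('div');
elem.classList.toggle('inactive', true);   // equivalent to elem.classList.add('inactive')
elem.classList.toggle('inactive', false);  // equivalent to elem.classList.remove('inactive')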

View File

@ -1,21 +1,19 @@
/**
- * temporary fix for https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/668
- * @see https://github.com/gradio-app/gradio/issues/1721
+ * temporary fix for https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/668
+ * @see https://ghproxy.com/https://github.com/gradio-app/gradio/issues/1721
 */
-window.addEventListener( 'resize', () => imageMaskResize());
function imageMaskResize() {
    const canvases = gradioApp().querySelectorAll('#img2maskimg .touch-none canvas');
-   if ( ! canvases.length ) {
-       canvases_fixed = false;
-       window.removeEventListener( 'resize', imageMaskResize );
+   if (!canvases.length) {
+       window.removeEventListener('resize', imageMaskResize);
        return;
    }

    const wrapper = canvases[0].closest('.touch-none');
    const previewImage = wrapper.previousElementSibling;

-   if ( ! previewImage.complete ) {
-       previewImage.addEventListener( 'load', () => imageMaskResize());
+   if (!previewImage.complete) {
+       previewImage.addEventListener('load', imageMaskResize);
        return;
    }
@ -24,22 +22,22 @@ function imageMaskResize() {
    const nw = previewImage.naturalWidth;
    const nh = previewImage.naturalHeight;
    const portrait = nh > nw;
-   const factor = portrait;

    const wW = Math.min(w, portrait ? h / nh * nw : w / nw * nw);
    const wH = Math.min(h, portrait ? h / nh * nh : w / nw * nh);

    wrapper.style.width = `${wW}px`;
    wrapper.style.height = `${wH}px`;
    wrapper.style.left = `0px`;
    wrapper.style.top = `0px`;

    canvases.forEach(c => {
        c.style.width = c.style.height = '';
        c.style.maxWidth = '100%';
        c.style.maxHeight = '100%';
        c.style.objectFit = 'contain';
    });
}

-onUiUpdate(() => imageMaskResize());
+onAfterUiUpdate(imageMaskResize);
+window.addEventListener('resize', imageMaskResize);
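The last change swaps onUiUpdate for onAfterUiUpdate and registers the resize listener unconditionally. The helper itself lives in script.js, outside this diff; a minimal sketch of how such a debounced hook can be implemented (an assumption, not the verbatim source):

// Hedged sketch: register callbacks that run once the UI has settled,
// rather than on every raw MutationObserver batch.
const uiAfterUpdateCallbacks = [];
let uiAfterUpdateTimeout = null;

function onAfterUiUpdate(callback) {
    uiAfterUpdateCallbacks.push(callback);
}

// Called from the main MutationObserver; debounces the "after update" pass.
function scheduleAfterUiUpdateCallbacks() {
    clearTimeout(uiAfterUpdateTimeout);
    uiAfterUpdateTimeout = setTimeout(function() {
        uiAfterUpdateCallbacks.forEach(cb => cb());
    }, 200);
}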

View File

@ -1,19 +0,0 @@
window.onload = (function(){
window.addEventListener('drop', e => {
const target = e.composedPath()[0];
const idx = selected_gallery_index();
if (target.placeholder.indexOf("Prompt") == -1) return;
let prompt_target = get_tab_index('tabs') == 1 ? "img2img_prompt_image" : "txt2img_prompt_image";
e.stopPropagation();
e.preventDefault();
const imgParent = gradioApp().getElementById(prompt_target);
const files = e.dataTransfer.files;
const fileInput = imgParent.querySelector('input[type="file"]');
if ( fileInput ) {
fileInput.files = files;
fileInput.dispatchEvent(new Event('change'));
}
});
});

View File

@ -5,24 +5,24 @@ function closeModal() {
function showModal(event) {
    const source = event.target || event.srcElement;
    const modalImage = gradioApp().getElementById("modalImage");
    const lb = gradioApp().getElementById("lightboxModal");
    modalImage.src = source.src;
    if (modalImage.style.display === 'none') {
        lb.style.setProperty('background-image', 'url(' + source.src + ')');
    }
    lb.style.display = "flex";
    lb.focus();

    const tabTxt2Img = gradioApp().getElementById("tab_txt2img");
    const tabImg2Img = gradioApp().getElementById("tab_img2img");
    // show the save button in modal only on txt2img or img2img tabs
    if (tabTxt2Img.style.display != "none" || tabImg2Img.style.display != "none") {
        gradioApp().getElementById("modal_save").style.display = "inline";
    } else {
        gradioApp().getElementById("modal_save").style.display = "none";
    }
    event.stopPropagation();
}

function negmod(n, m) {
@ -30,14 +30,15 @@ function negmod(n, m) {
}

function updateOnBackgroundChange() {
    const modalImage = gradioApp().getElementById("modalImage");
    if (modalImage && modalImage.offsetParent) {
        let currentButton = selected_gallery_button();

        if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) {
            modalImage.src = currentButton.children[0].src;
            if (modalImage.style.display === 'none') {
-               modal.style.setProperty('background-image', `url(${modalImage.src})`)
+               const modal = gradioApp().getElementById("lightboxModal");
+               modal.style.setProperty('background-image', `url(${modalImage.src})`);
            }
        }
    }
@ -49,68 +50,68 @@ function modalImageSwitch(offset) {
    if (galleryButtons.length > 1) {
        var currentButton = selected_gallery_button();

        var result = -1;
        galleryButtons.forEach(function(v, i) {
            if (v == currentButton) {
                result = i;
            }
        });

        if (result != -1) {
-           nextButton = galleryButtons[negmod((result + offset), galleryButtons.length)]
+           var nextButton = galleryButtons[negmod((result + offset), galleryButtons.length)];
            nextButton.click();
            const modalImage = gradioApp().getElementById("modalImage");
            const modal = gradioApp().getElementById("lightboxModal");
            modalImage.src = nextButton.children[0].src;
            if (modalImage.style.display === 'none') {
                modal.style.setProperty('background-image', `url(${modalImage.src})`);
            }
            setTimeout(function() {
                modal.focus();
            }, 10);
        }
    }
}

function saveImage() {
    const tabTxt2Img = gradioApp().getElementById("tab_txt2img");
    const tabImg2Img = gradioApp().getElementById("tab_img2img");
    const saveTxt2Img = "save_txt2img";
    const saveImg2Img = "save_img2img";

    if (tabTxt2Img.style.display != "none") {
        gradioApp().getElementById(saveTxt2Img).click();
    } else if (tabImg2Img.style.display != "none") {
        gradioApp().getElementById(saveImg2Img).click();
    } else {
        console.error("missing implementation for saving modal of this type");
    }
}

function modalSaveImage(event) {
    saveImage();
    event.stopPropagation();
}

function modalNextImage(event) {
    modalImageSwitch(1);
    event.stopPropagation();
}

function modalPrevImage(event) {
    modalImageSwitch(-1);
    event.stopPropagation();
}

function modalKeyHandler(event) {
    switch (event.key) {
    case "s":
        saveImage();
        break;
    case "ArrowLeft":
        modalPrevImage(event);
        break;
    case "ArrowRight":
        modalNextImage(event);
        break;
    case "Escape":
        closeModal();
@ -119,42 +120,39 @@ function modalKeyHandler(event) {
}

function setupImageForLightbox(e) {
    if (e.dataset.modded) {
        return;
    }

    e.dataset.modded = true;
    e.style.cursor = 'pointer';
    e.style.userSelect = 'none';

    var isFirefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1;

    // For Firefox, listening on click first switches to the next image, then shows the lightbox.
    // If you know how to fix this without switching to mousedown event, please.
    // For other browsers the event is click to make it possible to drag picture.
    var event = isFirefox ? 'mousedown' : 'click';

    e.addEventListener(event, function(evt) {
        if (!opts.js_modal_lightbox || evt.button != 0) return;

        modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initially_zoomed);
        evt.preventDefault();
        showModal(evt);
    }, true);
}

function modalZoomSet(modalImage, enable) {
-   if (enable) {
-       modalImage.classList.add('modalImageFullscreen');
-   } else {
-       modalImage.classList.remove('modalImageFullscreen');
-   }
+   if (modalImage) modalImage.classList.toggle('modalImageFullscreen', !!enable);
}

function modalZoomToggle(event) {
-   modalImage = gradioApp().getElementById("modalImage");
+   var modalImage = gradioApp().getElementById("modalImage");
    modalZoomSet(modalImage, !modalImage.classList.contains('modalImageFullscreen'));
    event.stopPropagation();
}

function modalTileImageToggle(event) {
@ -163,96 +161,93 @@ function modalTileImageToggle(event) {
    const isTiling = modalImage.style.display === 'none';
    if (isTiling) {
        modalImage.style.display = 'block';
        modal.style.setProperty('background-image', 'none');
    } else {
        modalImage.style.display = 'none';
        modal.style.setProperty('background-image', `url(${modalImage.src})`);
    }

    event.stopPropagation();
}

-function galleryImageHandler(e) {
-    //if (e && e.parentElement.tagName == 'BUTTON') {
-    e.onclick = showGalleryImage;
-    //}
-}
-
-onUiUpdate(function() {
-    fullImg_preview = gradioApp().querySelectorAll('.gradio-gallery > div > img')
+onAfterUiUpdate(function() {
+    var fullImg_preview = gradioApp().querySelectorAll('.gradio-gallery > div > img');
    if (fullImg_preview != null) {
        fullImg_preview.forEach(setupImageForLightbox);
    }
    updateOnBackgroundChange();
-})
+});

document.addEventListener("DOMContentLoaded", function() {
    //const modalFragment = document.createDocumentFragment();
    const modal = document.createElement('div');
    modal.onclick = closeModal;
    modal.id = "lightboxModal";
    modal.tabIndex = 0;
    modal.addEventListener('keydown', modalKeyHandler, true);

    const modalControls = document.createElement('div');
    modalControls.className = 'modalControls gradio-container';
    modal.append(modalControls);

    const modalZoom = document.createElement('span');
    modalZoom.className = 'modalZoom cursor';
    modalZoom.innerHTML = '&#10529;';
    modalZoom.addEventListener('click', modalZoomToggle, true);
    modalZoom.title = "Toggle zoomed view";
    modalControls.appendChild(modalZoom);

    const modalTileImage = document.createElement('span');
    modalTileImage.className = 'modalTileImage cursor';
    modalTileImage.innerHTML = '&#8862;';
    modalTileImage.addEventListener('click', modalTileImageToggle, true);
    modalTileImage.title = "Preview tiling";
    modalControls.appendChild(modalTileImage);

    const modalSave = document.createElement("span");
    modalSave.className = "modalSave cursor";
    modalSave.id = "modal_save";
    modalSave.innerHTML = "&#x1F5AB;";
    modalSave.addEventListener("click", modalSaveImage, true);
    modalSave.title = "Save Image(s)";
    modalControls.appendChild(modalSave);

    const modalClose = document.createElement('span');
    modalClose.className = 'modalClose cursor';
    modalClose.innerHTML = '&times;';
    modalClose.onclick = closeModal;
    modalClose.title = "Close image viewer";
    modalControls.appendChild(modalClose);

    const modalImage = document.createElement('img');
    modalImage.id = 'modalImage';
    modalImage.onclick = closeModal;
    modalImage.tabIndex = 0;
    modalImage.addEventListener('keydown', modalKeyHandler, true);
    modal.appendChild(modalImage);

    const modalPrev = document.createElement('a');
    modalPrev.className = 'modalPrev';
    modalPrev.innerHTML = '&#10094;';
    modalPrev.tabIndex = 0;
    modalPrev.addEventListener('click', modalPrevImage, true);
    modalPrev.addEventListener('keydown', modalKeyHandler, true);
    modal.appendChild(modalPrev);

    const modalNext = document.createElement('a');
    modalNext.className = 'modalNext';
    modalNext.innerHTML = '&#10095;';
    modalNext.tabIndex = 0;
    modalNext.addEventListener('click', modalNextImage, true);
    modalNext.addEventListener('keydown', modalKeyHandler, true);
    modal.appendChild(modalNext);

-   gradioApp().appendChild(modal)
+   try {
+       gradioApp().appendChild(modal);
+   } catch (e) {
+       gradioApp().body.appendChild(modal);
+   }

    document.body.appendChild(modal);
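modalImageSwitch relies on negmod for wrap-around gallery navigation; its body sits outside the visible hunks, but a true-modulo helper of this shape (an assumption, not the verbatim source) behaves as required for negative offsets:

// True mathematical modulo: negmod(-1, 5) === 4, so stepping left from the
// first image wraps around to the last one.
function negmod(n, m) {
    return ((n % m) + m) % m;
}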

View File

@ -0,0 +1,63 @@
let gamepads = [];
window.addEventListener('gamepadconnected', (e) => {
const index = e.gamepad.index;
let isWaiting = false;
gamepads[index] = setInterval(async() => {
if (!opts.js_modal_lightbox_gamepad || isWaiting) return;
const gamepad = navigator.getGamepads()[index];
const xValue = gamepad.axes[0];
if (xValue <= -0.3) {
modalPrevImage(e);
isWaiting = true;
} else if (xValue >= 0.3) {
modalNextImage(e);
isWaiting = true;
}
if (isWaiting) {
await sleepUntil(() => {
const xValue = navigator.getGamepads()[index].axes[0];
if (xValue < 0.3 && xValue > -0.3) {
return true;
}
}, opts.js_modal_lightbox_gamepad_repeat);
isWaiting = false;
}
}, 10);
});
window.addEventListener('gamepaddisconnected', (e) => {
clearInterval(gamepads[e.gamepad.index]);
});
/*
Primarily for vr controller type pointer devices.
I use the wheel event because there's currently no way to do it properly with web xr.
*/
let isScrolling = false;
window.addEventListener('wheel', (e) => {
if (!opts.js_modal_lightbox_gamepad || isScrolling) return;
isScrolling = true;
if (e.deltaX <= -0.6) {
modalPrevImage(e);
} else if (e.deltaX >= 0.6) {
modalNextImage(e);
}
setTimeout(() => {
isScrolling = false;
}, opts.js_modal_lightbox_gamepad_repeat);
});
function sleepUntil(f, timeout) {
return new Promise((resolve) => {
const timeStart = new Date();
const wait = setInterval(function() {
if (f() || new Date() - timeStart > timeout) {
clearInterval(wait);
resolve();
}
}, 20);
});
}
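sleepUntil polls its predicate every 20 ms and resolves either when the predicate returns true or when the timeout elapses; the caller cannot tell which. A small usage sketch with a hypothetical wait:

// Wait up to 2 seconds for the lightbox modal to exist, then continue either way.
async function example() {
    await sleepUntil(() => gradioApp().getElementById('lightboxModal') != null, 2000);
    console.log('continuing whether or not the modal appeared');
}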

View File

@ -1,10 +1,9 @@
// localization = {} -- the dict with translations is created by the backend

-ignore_ids_for_localization={
+var ignore_ids_for_localization = {
    setting_sd_hypernetwork: 'OPTION',
    setting_sd_model_checkpoint: 'OPTION',
-   setting_realesrgan_enabled_models: 'OPTION',
    modelmerger_primary_model_name: 'OPTION',
    modelmerger_secondary_model_name: 'OPTION',
    modelmerger_tertiary_model_name: 'OPTION',
@ -17,119 +16,145 @@ ignore_ids_for_localization={
    setting_realesrgan_enabled_models: 'SPAN',
    extras_upscaler_1: 'SPAN',
    extras_upscaler_2: 'SPAN',
-}
+};

-re_num = /^[\.\d]+$/
-re_emoji = /[\p{Extended_Pictographic}\u{1F3FB}-\u{1F3FF}\u{1F9B0}-\u{1F9B3}]/u
+var re_num = /^[.\d]+$/;
+var re_emoji = /[\p{Extended_Pictographic}\u{1F3FB}-\u{1F3FF}\u{1F9B0}-\u{1F9B3}]/u;

-original_lines = {}
-translated_lines = {}
+var original_lines = {};
+var translated_lines = {};

+function hasLocalization() {
+    return window.localization && Object.keys(window.localization).length > 0;
+}

function textNodesUnder(el) {
-   var n, a=[], walk=document.createTreeWalker(el,NodeFilter.SHOW_TEXT,null,false);
-   while(n=walk.nextNode()) a.push(n);
+   var n, a = [], walk = document.createTreeWalker(el, NodeFilter.SHOW_TEXT, null, false);
+   while ((n = walk.nextNode())) a.push(n);
    return a;
}

function canBeTranslated(node, text) {
    if (!text) return false;
    if (!node.parentElement) return false;

-   parentType = node.parentElement.nodeName
+   var parentType = node.parentElement.nodeName;
    if (parentType == 'SCRIPT' || parentType == 'STYLE' || parentType == 'TEXTAREA') return false;

    if (parentType == 'OPTION' || parentType == 'SPAN') {
-       pnode = node
+       var pnode = node;
        for (var level = 0; level < 4; level++) {
            pnode = pnode.parentElement;
            if (!pnode) break;

            if (ignore_ids_for_localization[pnode.id] == parentType) return false;
        }
    }

    if (re_num.test(text)) return false;
    if (re_emoji.test(text)) return false;
    return true;
}

function getTranslation(text) {
    if (!text) return undefined;

    if (translated_lines[text] === undefined) {
        original_lines[text] = 1;
    }

-   tl = localization[text]
+   var tl = localization[text];
    if (tl !== undefined) {
        translated_lines[tl] = 1;
    }

    return tl;
}

function processTextNode(node) {
-   text = node.textContent.trim()
+   var text = node.textContent.trim();

    if (!canBeTranslated(node, text)) return;

-   tl = getTranslation(text)
+   var tl = getTranslation(text);
    if (tl !== undefined) {
        node.textContent = tl;
    }
}

function processNode(node) {
    if (node.nodeType == 3) {
        processTextNode(node);
        return;
    }

    if (node.title) {
-       tl = getTranslation(node.title)
+       let tl = getTranslation(node.title);
        if (tl !== undefined) {
            node.title = tl;
        }
    }

    if (node.placeholder) {
-       tl = getTranslation(node.placeholder)
+       let tl = getTranslation(node.placeholder);
        if (tl !== undefined) {
            node.placeholder = tl;
        }
    }

    textNodesUnder(node).forEach(function(node) {
        processTextNode(node);
    });
}

-function dumpTranslations(){
-    dumped = {}
-    if (localization.rtl) {
-        dumped.rtl = true
-    }
-
-    Object.keys(original_lines).forEach(function(text){
-        if(dumped[text] !== undefined) return
-        dumped[text] = localization[text] || text
-    })
-
-    return dumped
-}
-
-onUiUpdate(function(m){
-    m.forEach(function(mutation){
-        mutation.addedNodes.forEach(function(node){
-            processNode(node)
-        })
-    });
-})
+function dumpTranslations() {
+    if (!hasLocalization()) {
+        // If we don't have any localization,
+        // we will not have traversed the app to find
+        // original_lines, so do that now.
+        processNode(gradioApp());
+    }
+    var dumped = {};
+    if (localization.rtl) {
+        dumped.rtl = true;
+    }
+    for (const text in original_lines) {
+        if (dumped[text] !== undefined) continue;
+        dumped[text] = localization[text] || text;
+    }
+    return dumped;
+}
+
+function download_localization() {
+    var text = JSON.stringify(dumpTranslations(), null, 4);
+    var element = document.createElement('a');
+    element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
+    element.setAttribute('download', "localization.json");
+    element.style.display = 'none';
+    document.body.appendChild(element);
+
+    element.click();
+
+    document.body.removeChild(element);
+}

document.addEventListener("DOMContentLoaded", function() {
-   processNode(gradioApp())
+   if (!hasLocalization()) {
+       return;
+   }
+
+   onUiUpdate(function(m) {
+       m.forEach(function(mutation) {
+           mutation.addedNodes.forEach(function(node) {
+               processNode(node);
+           });
+       });
+   });
+
+   processNode(gradioApp());

    if (localization.rtl) { // if the language is from right to left,
        (new MutationObserver((mutations, observer) => { // wait for the style to load
@ -144,22 +169,8 @@ document.addEventListener("DOMContentLoaded", function() {
                    }
                }
            }
-           })
-       });
-       })).observe(gradioApp(), { childList: true });
+           });
+       });
+   })).observe(gradioApp(), {childList: true});
    }
-})
+});

-function download_localization() {
-    text = JSON.stringify(dumpTranslations(), null, 4)
-    var element = document.createElement('a');
-    element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
-    element.setAttribute('download', "localization.json");
-    element.style.display = 'none';
-    document.body.appendChild(element);
-
-    element.click();
-
-    document.body.removeChild(element);
-}
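For context, localization is a flat map from source strings to translated strings that the backend injects; a hypothetical two-entry dict shows how getTranslation and dumpTranslations interact (the translations here are invented for illustration):

// Hypothetical translations dict as the backend would inject it.
window.localization = {
    "Sampling steps": "Schritte",
    "Batch size": "Batch-Größe"
};

// getTranslation("Sampling steps") -> "Schritte"; unseen strings return
// undefined and are recorded in original_lines, so dumpTranslations() can
// emit them as untranslated entries in the downloaded localization.json.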

View File

@ -2,16 +2,16 @@
let lastHeadImg = null;

-notificationButton = null
+let notificationButton = null;

-onUiUpdate(function(){
+onAfterUiUpdate(function() {
    if (notificationButton == null) {
        notificationButton = gradioApp().getElementById('request_notifications');

        if (notificationButton != null) {
-           notificationButton.addEventListener('click', function (evt) {
-               Notification.requestPermission();
-           },true);
+           notificationButton.addEventListener('click', () => {
+               void Notification.requestPermission();
+           }, true);
        }
    }
@ -42,7 +42,7 @@ onUiUpdate(function(){
    }
    );

    notification.onclick = function(_) {
        parent.focus();
        this.close();
    };

View File

@ -0,0 +1,153 @@
function createRow(table, cellName, items) {
var tr = document.createElement('tr');
var res = [];
items.forEach(function(x, i) {
if (x === undefined) {
res.push(null);
return;
}
var td = document.createElement(cellName);
td.textContent = x;
tr.appendChild(td);
res.push(td);
var colspan = 1;
for (var n = i + 1; n < items.length; n++) {
if (items[n] !== undefined) {
break;
}
colspan += 1;
}
if (colspan > 1) {
td.colSpan = colspan;
}
});
table.appendChild(tr);
return res;
}
function showProfile(path, cutoff = 0.05) {
requestGet(path, {}, function(data) {
var table = document.createElement('table');
table.className = 'popup-table';
data.records['total'] = data.total;
var keys = Object.keys(data.records).sort(function(a, b) {
return data.records[b] - data.records[a];
});
var items = keys.map(function(x) {
return {key: x, parts: x.split('/'), time: data.records[x]};
});
var maxLength = items.reduce(function(a, b) {
return Math.max(a, b.parts.length);
}, 0);
var cols = createRow(table, 'th', ['record', 'seconds']);
cols[0].colSpan = maxLength;
function arraysEqual(a, b) {
return !(a < b || b < a);
}
var addLevel = function(level, parent, hide) {
var matching = items.filter(function(x) {
return x.parts[level] && !x.parts[level + 1] && arraysEqual(x.parts.slice(0, level), parent);
});
var sorted = matching.sort(function(a, b) {
return b.time - a.time;
});
var othersTime = 0;
var othersList = [];
var othersRows = [];
var childrenRows = [];
sorted.forEach(function(x) {
var visible = x.time >= cutoff && !hide;
var cells = [];
for (var i = 0; i < maxLength; i++) {
cells.push(x.parts[i]);
}
cells.push(x.time.toFixed(3));
var cols = createRow(table, 'td', cells);
for (i = 0; i < level; i++) {
cols[i].className = 'muted';
}
var tr = cols[0].parentNode;
if (!visible) {
tr.classList.add("hidden");
}
if (x.time >= cutoff) {
childrenRows.push(tr);
} else {
othersTime += x.time;
othersList.push(x.parts[level]);
othersRows.push(tr);
}
var children = addLevel(level + 1, parent.concat([x.parts[level]]), true);
if (children.length > 0) {
var cell = cols[level];
var onclick = function() {
cell.classList.remove("link");
cell.removeEventListener("click", onclick);
children.forEach(function(x) {
x.classList.remove("hidden");
});
};
cell.classList.add("link");
cell.addEventListener("click", onclick);
}
});
if (othersTime > 0) {
var cells = [];
for (var i = 0; i < maxLength; i++) {
cells.push(parent[i]);
}
cells.push(othersTime.toFixed(3));
cells[level] = 'others';
var cols = createRow(table, 'td', cells);
for (i = 0; i < level; i++) {
cols[i].className = 'muted';
}
var cell = cols[level];
var tr = cell.parentNode;
var onclick = function() {
tr.classList.add("hidden");
cell.classList.remove("link");
cell.removeEventListener("click", onclick);
othersRows.forEach(function(x) {
x.classList.remove("hidden");
});
};
cell.title = othersList.join(", ");
cell.classList.add("link");
cell.addEventListener("click", onclick);
if (hide) {
tr.classList.add("hidden");
}
childrenRows.push(tr);
}
return childrenRows;
};
addLevel(0, []);
popup(table);
});
}
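showProfile expects the endpoint at path to answer with a records map of slash-separated labels to seconds, plus a total; the shape below is inferred from the code (data.records, data.total) and the labels and values are invented:

// Hypothetical payload consumed by showProfile(path):
var exampleProfile = {
    records: {
        "launcher": 0.320,
        "import torch": 4.812,
        "import torch/import gradio": 1.104
    },
    total: 6.288
};
// Rows are sorted by time; entries under the 0.05 s default cutoff are
// folded into a clickable "others" row at each level of the label tree.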

View File

@ -1,30 +1,29 @@
// code related to showing and updating progressbar shown as the image is being made

-function rememberGallerySelection(id_gallery){
+function rememberGallerySelection() {
}

-function getGallerySelectedIndex(id_gallery){
+function getGallerySelectedIndex() {
}

function request(url, data, handler, errorHandler) {
    var xhr = new XMLHttpRequest();
-   var url = url;
    xhr.open("POST", url, true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                try {
                    var js = JSON.parse(xhr.responseText);
                    handler(js);
                } catch (error) {
                    console.error(error);
                    errorHandler();
                }
            } else {
                errorHandler();
            }
        }
    };
@ -32,147 +31,147 @@ function request(url, data, handler, errorHandler){
    xhr.send(js);
}

function pad2(x) {
    return x < 10 ? '0' + x : x;
}

function formatTime(secs) {
    if (secs > 3600) {
        return pad2(Math.floor(secs / 60 / 60)) + ":" + pad2(Math.floor(secs / 60) % 60) + ":" + pad2(Math.floor(secs) % 60);
    } else if (secs > 60) {
        return pad2(Math.floor(secs / 60)) + ":" + pad2(Math.floor(secs) % 60);
    } else {
        return Math.floor(secs) + "s";
    }
}

function setTitle(progress) {
    var title = 'Stable Diffusion';

    if (opts.show_progress_in_title && progress) {
        title = '[' + progress.trim() + '] ' + title;
    }

    if (document.title != title) {
        document.title = title;
    }
}

function randomId() {
    return "task(" + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7) + ")";
}

// starts sending progress requests to "/internal/progress" uri, creating progressbar above progressbarContainer element and
// preview inside gallery element. Cleans up all created stuff when the task is over and calls atEnd.
// calls onProgress every time there is a progress update
-function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgress){
+function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgress, inactivityTimeout = 40) {
    var dateStart = new Date();
    var wasEverActive = false;
    var parentProgressbar = progressbarContainer.parentNode;
    var parentGallery = gallery ? gallery.parentNode : null;

    var divProgress = document.createElement('div');
    divProgress.className = 'progressDiv';
    divProgress.style.display = opts.show_progressbar ? "block" : "none";
    var divInner = document.createElement('div');
    divInner.className = 'progress';

    divProgress.appendChild(divInner);
    parentProgressbar.insertBefore(divProgress, progressbarContainer);

    if (parentGallery) {
        var livePreview = document.createElement('div');
        livePreview.className = 'livePreview';
        parentGallery.insertBefore(livePreview, gallery);
    }

    var removeProgressBar = function() {
        setTitle("");
        parentProgressbar.removeChild(divProgress);
        if (parentGallery) parentGallery.removeChild(livePreview);
        atEnd();
    };

    var fun = function(id_task, id_live_preview) {
        request("./internal/progress", {id_task: id_task, id_live_preview: id_live_preview}, function(res) {
            if (res.completed) {
                removeProgressBar();
                return;
            }

            var rect = progressbarContainer.getBoundingClientRect();

            if (rect.width) {
                divProgress.style.width = rect.width + "px";
            }

-           progressText = ""
+           let progressText = "";

            divInner.style.width = ((res.progress || 0) * 100.0) + '%';
            divInner.style.background = res.progress ? "" : "transparent";

            if (res.progress > 0) {
                progressText = ((res.progress || 0) * 100.0).toFixed(0) + '%';
            }

            if (res.eta) {
                progressText += " ETA: " + formatTime(res.eta);
            }

            setTitle(progressText);

            if (res.textinfo && res.textinfo.indexOf("\n") == -1) {
                progressText = res.textinfo + " " + progressText;
            }

            divInner.textContent = progressText;

            var elapsedFromStart = (new Date() - dateStart) / 1000;

            if (res.active) wasEverActive = true;

            if (!res.active && wasEverActive) {
                removeProgressBar();
                return;
            }

-           if(elapsedFromStart > 5 && !res.queued && !res.active){
+           if (elapsedFromStart > inactivityTimeout && !res.queued && !res.active) {
                removeProgressBar();
                return;
            }

            if (res.live_preview && gallery) {
-               var rect = gallery.getBoundingClientRect()
+               rect = gallery.getBoundingClientRect();
                if (rect.width) {
                    livePreview.style.width = rect.width + "px";
                    livePreview.style.height = rect.height + "px";
                }

                var img = new Image();
                img.onload = function() {
                    livePreview.appendChild(img);
                    if (livePreview.childElementCount > 2) {
                        livePreview.removeChild(livePreview.firstElementChild);
                    }
                };
                img.src = res.live_preview;
            }

            if (onProgress) {
                onProgress(res);
            }

            setTimeout(() => {
                fun(id_task, res.id_live_preview);
            }, opts.live_preview_refresh_period || 500);
        }, function() {
            removeProgressBar();
        });
    };

    fun(id_task, 0);
}
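A quick check of formatTime's three branches (inputs chosen for illustration):

// Under a minute: seconds only; under an hour: MM:SS; otherwise HH:MM:SS.
formatTime(42);    // "42s"
formatTime(125);   // "02:05"
formatTime(3700);  // "01:01:40"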

View File

@ -1,17 +1,17 @@
function start_training_textual_inversion() {
    gradioApp().querySelector('#ti_error').innerHTML = '';

    var id = randomId();
    requestProgress(id, gradioApp().getElementById('ti_output'), gradioApp().getElementById('ti_gallery'), function() {}, function(progress) {
        gradioApp().getElementById('ti_progress').innerHTML = progress.textinfo;
    });

-   var res = args_to_array(arguments)
+   var res = Array.from(arguments);
    res[0] = id;

    return res;
}

View File

@ -0,0 +1,83 @@
let promptTokenCountDebounceTime = 800;
let promptTokenCountTimeouts = {};
var promptTokenCountUpdateFunctions = {};
function update_txt2img_tokens(...args) {
// Called from Gradio
update_token_counter("txt2img_token_button");
if (args.length == 2) {
return args[0];
}
return args;
}
function update_img2img_tokens(...args) {
// Called from Gradio
update_token_counter("img2img_token_button");
if (args.length == 2) {
return args[0];
}
return args;
}
function update_token_counter(button_id) {
if (opts.disable_token_counters) {
return;
}
if (promptTokenCountTimeouts[button_id]) {
clearTimeout(promptTokenCountTimeouts[button_id]);
}
promptTokenCountTimeouts[button_id] = setTimeout(
() => gradioApp().getElementById(button_id)?.click(),
promptTokenCountDebounceTime,
);
}
function recalculatePromptTokens(name) {
promptTokenCountUpdateFunctions[name]?.();
}
function recalculate_prompts_txt2img() {
// Called from Gradio
recalculatePromptTokens('txt2img_prompt');
recalculatePromptTokens('txt2img_neg_prompt');
return Array.from(arguments);
}
function recalculate_prompts_img2img() {
// Called from Gradio
recalculatePromptTokens('img2img_prompt');
recalculatePromptTokens('img2img_neg_prompt');
return Array.from(arguments);
}
function setupTokenCounting(id, id_counter, id_button) {
var prompt = gradioApp().getElementById(id);
var counter = gradioApp().getElementById(id_counter);
var textarea = gradioApp().querySelector(`#${id} > label > textarea`);
if (opts.disable_token_counters) {
counter.style.display = "none";
return;
}
if (counter.parentElement == prompt.parentElement) {
return;
}
prompt.parentElement.insertBefore(counter, prompt);
prompt.parentElement.style.position = "relative";
promptTokenCountUpdateFunctions[id] = function() {
update_token_counter(id_button);
};
textarea.addEventListener("input", promptTokenCountUpdateFunctions[id]);
}
function setupTokenCounters() {
setupTokenCounting('txt2img_prompt', 'txt2img_token_counter', 'txt2img_token_button');
setupTokenCounting('txt2img_neg_prompt', 'txt2img_negative_token_counter', 'txt2img_negative_token_button');
setupTokenCounting('img2img_prompt', 'img2img_token_counter', 'img2img_token_button');
setupTokenCounting('img2img_neg_prompt', 'img2img_negative_token_counter', 'img2img_negative_token_button');
}
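setupTokenCounters has to run once the prompt boxes exist; the call site is outside this diff, so the hook below is an assumption about the wiring, not the verbatim source:

// Hedged sketch: run the setup once the UI is built. onUiLoaded is the
// startup hook provided by script.js in the webui; if the actual call site
// differs, substitute the appropriate startup callback.
onUiLoaded(function() {
    setupTokenCounters();
});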

View File

@ -1,7 +1,7 @@
// various functions for interaction with ui.py not large enough to warrant putting them in separate files // various functions for interaction with ui.py not large enough to warrant putting them in separate files
function set_theme(theme){ function set_theme(theme) {
gradioURL = window.location.href var gradioURL = window.location.href;
if (!gradioURL.includes('?__theme=')) { if (!gradioURL.includes('?__theme=')) {
window.location.replace(gradioURL + '?__theme=' + theme); window.location.replace(gradioURL + '?__theme=' + theme);
} }
@ -14,7 +14,7 @@ function all_gallery_buttons() {
if (elem.parentElement.offsetParent) { if (elem.parentElement.offsetParent) {
visibleGalleryButtons.push(elem); visibleGalleryButtons.push(elem);
} }
}) });
return visibleGalleryButtons; return visibleGalleryButtons;
} }
@ -25,31 +25,35 @@ function selected_gallery_button() {
if (elem.parentElement.offsetParent) { if (elem.parentElement.offsetParent) {
visibleCurrentButton = elem; visibleCurrentButton = elem;
} }
}) });
return visibleCurrentButton; return visibleCurrentButton;
} }
function selected_gallery_index(){ function selected_gallery_index() {
var buttons = all_gallery_buttons(); var buttons = all_gallery_buttons();
var button = selected_gallery_button(); var button = selected_gallery_button();
var result = -1 var result = -1;
buttons.forEach(function(v, i){ if(v==button) { result = i } }) buttons.forEach(function(v, i) {
if (v == button) {
result = i;
}
});
return result return result;
} }
function extract_image_from_gallery(gallery){ function extract_image_from_gallery(gallery) {
if (gallery.length == 0){ if (gallery.length == 0) {
return [null]; return [null];
} }
if (gallery.length == 1){ if (gallery.length == 1) {
return [gallery[0]]; return [gallery[0]];
} }
index = selected_gallery_index() var index = selected_gallery_index();
if (index < 0 || index >= gallery.length){ if (index < 0 || index >= gallery.length) {
// Use the first image in the gallery as the default // Use the first image in the gallery as the default
index = 0; index = 0;
} }
@ -57,199 +61,205 @@ function extract_image_from_gallery(gallery){
     return [gallery[index]];
 }

-function args_to_array(args){
-    res = []
-    for(var i=0;i<args.length;i++){
-        res.push(args[i])
-    }
-    return res
-}
+window.args_to_array = Array.from; // Compatibility with e.g. extensions that may expect this to be around

-function switch_to_txt2img(){
+function switch_to_txt2img() {
     gradioApp().querySelector('#tabs').querySelectorAll('button')[0].click();

-    return args_to_array(arguments);
+    return Array.from(arguments);
 }

-function switch_to_img2img_tab(no){
+function switch_to_img2img_tab(no) {
     gradioApp().querySelector('#tabs').querySelectorAll('button')[1].click();
     gradioApp().getElementById('mode_img2img').querySelectorAll('button')[no].click();
 }

-function switch_to_img2img(){
+function switch_to_img2img() {
     switch_to_img2img_tab(0);
-    return args_to_array(arguments);
+    return Array.from(arguments);
 }

-function switch_to_sketch(){
+function switch_to_sketch() {
     switch_to_img2img_tab(1);
-    return args_to_array(arguments);
+    return Array.from(arguments);
 }

-function switch_to_inpaint(){
+function switch_to_inpaint() {
     switch_to_img2img_tab(2);
-    return args_to_array(arguments);
+    return Array.from(arguments);
 }

-function switch_to_inpaint_sketch(){
+function switch_to_inpaint_sketch() {
     switch_to_img2img_tab(3);
-    return args_to_array(arguments);
+    return Array.from(arguments);
 }

-function switch_to_inpaint(){
-    gradioApp().querySelector('#tabs').querySelectorAll('button')[1].click();
-    gradioApp().getElementById('mode_img2img').querySelectorAll('button')[2].click();
-    return args_to_array(arguments);
-}
-
-function switch_to_extras(){
+function switch_to_extras() {
     gradioApp().querySelector('#tabs').querySelectorAll('button')[2].click();

-    return args_to_array(arguments);
+    return Array.from(arguments);
 }
-function get_tab_index(tabId){
-    var res = 0
-
-    gradioApp().getElementById(tabId).querySelector('div').querySelectorAll('button').forEach(function(button, i){
-        if(button.className.indexOf('selected') != -1)
-            res = i
-    })
-
-    return res
-}
-
-function create_tab_index_args(tabId, args){
-    var res = []
-    for(var i=0; i<args.length; i++){
-        res.push(args[i])
-    }
-
-    res[0] = get_tab_index(tabId)
-
-    return res
-}
+function get_tab_index(tabId) {
+    let buttons = gradioApp().getElementById(tabId).querySelector('div').querySelectorAll('button');
+    for (let i = 0; i < buttons.length; i++) {
+        if (buttons[i].classList.contains('selected')) {
+            return i;
+        }
+    }
+    return 0;
+}
+
+function create_tab_index_args(tabId, args) {
+    var res = Array.from(args);
+    res[0] = get_tab_index(tabId);
+    return res;
+}

 function get_img2img_tab_index() {
-    let res = args_to_array(arguments)
-    res.splice(-2)
-    res[0] = get_tab_index('mode_img2img')
-    return res
+    let res = Array.from(arguments);
+    res.splice(-2);
+    res[0] = get_tab_index('mode_img2img');
+    return res;
 }

-function create_submit_args(args){
-    res = []
-    for(var i=0;i<args.length;i++){
-        res.push(args[i])
-    }
+function create_submit_args(args) {
+    var res = Array.from(args);

     // As it is currently, txt2img and img2img send back the previous output args (txt2img_gallery, generation_info, html_info) whenever you generate a new image.
     // This can lead to uploading a huge gallery of previously generated images, which leads to an unnecessary delay between submitting and beginning to generate.
     // I don't know why gradio is sending outputs along with inputs, but we can prevent sending the image gallery here, which seems to be an issue for some.
     // If gradio at some point stops sending outputs, this may break something
-    if(Array.isArray(res[res.length - 3])){
-        res[res.length - 3] = null
+    if (Array.isArray(res[res.length - 3])) {
+        res[res.length - 3] = null;
     }

-    return res
+    return res;
 }
-function showSubmitButtons(tabname, show){
-    gradioApp().getElementById(tabname+'_interrupt').style.display = show ? "none" : "block"
-    gradioApp().getElementById(tabname+'_skip').style.display = show ? "none" : "block"
+function showSubmitButtons(tabname, show) {
+    gradioApp().getElementById(tabname + '_interrupt').style.display = show ? "none" : "block";
+    gradioApp().getElementById(tabname + '_skip').style.display = show ? "none" : "block";
 }

-function submit(){
-    rememberGallerySelection('txt2img_gallery')
-    showSubmitButtons('txt2img', false)
-
-    var id = randomId()
-    requestProgress(id, gradioApp().getElementById('txt2img_gallery_container'), gradioApp().getElementById('txt2img_gallery'), function(){
-        showSubmitButtons('txt2img', true)
-    })
-
-    var res = create_submit_args(arguments)
-    res[0] = id
-    return res
-}
-
-function submit_img2img(){
-    rememberGallerySelection('img2img_gallery')
-    showSubmitButtons('img2img', false)
-
-    var id = randomId()
-    requestProgress(id, gradioApp().getElementById('img2img_gallery_container'), gradioApp().getElementById('img2img_gallery'), function(){
-        showSubmitButtons('img2img', true)
-    })
-
-    var res = create_submit_args(arguments)
-    res[0] = id
-    res[1] = get_tab_index('mode_img2img')
-    return res
-}
-
-function modelmerger(){
-    var id = randomId()
-    requestProgress(id, gradioApp().getElementById('modelmerger_results_panel'), null, function(){})
-
-    var res = create_submit_args(arguments)
-    res[0] = id
-    return res
-}
+function showRestoreProgressButton(tabname, show) {
+    var button = gradioApp().getElementById(tabname + "_restore_progress");
+    if (!button) return;
+
+    button.style.display = show ? "flex" : "none";
+}
+
+function submit() {
+    showSubmitButtons('txt2img', false);
+
+    var id = randomId();
+    localStorage.setItem("txt2img_task_id", id);
+
+    requestProgress(id, gradioApp().getElementById('txt2img_gallery_container'), gradioApp().getElementById('txt2img_gallery'), function() {
+        showSubmitButtons('txt2img', true);
+        localStorage.removeItem("txt2img_task_id");
+        showRestoreProgressButton('txt2img', false);
+    });
+
+    var res = create_submit_args(arguments);
+    res[0] = id;
+    return res;
+}
+
+function submit_img2img() {
+    showSubmitButtons('img2img', false);
+
+    var id = randomId();
+    localStorage.setItem("img2img_task_id", id);
+
+    requestProgress(id, gradioApp().getElementById('img2img_gallery_container'), gradioApp().getElementById('img2img_gallery'), function() {
+        showSubmitButtons('img2img', true);
+        localStorage.removeItem("img2img_task_id");
+        showRestoreProgressButton('img2img', false);
+    });
+
+    var res = create_submit_args(arguments);
+    res[0] = id;
+    res[1] = get_tab_index('mode_img2img');
+    return res;
+}
+
+function restoreProgressTxt2img() {
+    showRestoreProgressButton("txt2img", false);
+    var id = localStorage.getItem("txt2img_task_id");
+
+    id = localStorage.getItem("txt2img_task_id");
+
+    if (id) {
+        requestProgress(id, gradioApp().getElementById('txt2img_gallery_container'), gradioApp().getElementById('txt2img_gallery'), function() {
+            showSubmitButtons('txt2img', true);
+        }, null, 0);
+    }
+
+    return id;
+}
+
+function restoreProgressImg2img() {
+    showRestoreProgressButton("img2img", false);
+    var id = localStorage.getItem("img2img_task_id");
+
+    if (id) {
+        requestProgress(id, gradioApp().getElementById('img2img_gallery_container'), gradioApp().getElementById('img2img_gallery'), function() {
+            showSubmitButtons('img2img', true);
+        }, null, 0);
+    }
+
+    return id;
+}
+
+onUiLoaded(function() {
+    showRestoreProgressButton('txt2img', localStorage.getItem("txt2img_task_id"));
+    showRestoreProgressButton('img2img', localStorage.getItem("img2img_task_id"));
+});
+
+function modelmerger() {
+    var id = randomId();
+    requestProgress(id, gradioApp().getElementById('modelmerger_results_panel'), null, function() {});
+
+    var res = create_submit_args(arguments);
+    res[0] = id;
+    return res;
+}
 function ask_for_style_name(_, prompt_text, negative_prompt_text) {
-    name_ = prompt('Style name:')
-    return [name_, prompt_text, negative_prompt_text]
+    var name_ = prompt('Style name:');
+    return [name_, prompt_text, negative_prompt_text];
 }

 function confirm_clear_prompt(prompt, negative_prompt) {
-    if(confirm("Delete prompt?")) {
-        prompt = ""
-        negative_prompt = ""
+    if (confirm("Delete prompt?")) {
+        prompt = "";
+        negative_prompt = "";
     }

-    return [prompt, negative_prompt]
+    return [prompt, negative_prompt];
 }
-promptTokecountUpdateFuncs = {}
-
-function recalculatePromptTokens(name){
-    if(promptTokecountUpdateFuncs[name]){
-        promptTokecountUpdateFuncs[name]()
-    }
-}
-
-function recalculate_prompts_txt2img(){
-    recalculatePromptTokens('txt2img_prompt')
-    recalculatePromptTokens('txt2img_neg_prompt')
-    return args_to_array(arguments);
-}
-
-function recalculate_prompts_img2img(){
-    recalculatePromptTokens('img2img_prompt')
-    recalculatePromptTokens('img2img_neg_prompt')
-    return args_to_array(arguments);
-}
-
-opts = {}
-onUiUpdate(function(){
-    if(Object.keys(opts).length != 0) return;
-
-    json_elem = gradioApp().getElementById('settings_json')
-    if(json_elem == null) return;
-
-    var textarea = json_elem.querySelector('textarea')
-    var jsdata = textarea.value
-    opts = JSON.parse(jsdata)
-    executeCallbacks(optionsChangedCallbacks);
+var opts = {};
+onAfterUiUpdate(function() {
+    if (Object.keys(opts).length != 0) return;
+
+    var json_elem = gradioApp().getElementById('settings_json');
+    if (json_elem == null) return;
+
+    var textarea = json_elem.querySelector('textarea');
+    var jsdata = textarea.value;
+    opts = JSON.parse(jsdata);
+
+    executeCallbacks(optionsChangedCallbacks); /*global optionsChangedCallbacks*/

     Object.defineProperty(textarea, 'value', {
         set: function(newValue) {
@ -258,7 +268,7 @@ onUiUpdate(function(){
             valueProp.set.call(textarea, newValue);

             if (oldValue != newValue) {
-                opts = JSON.parse(textarea.value)
+                opts = JSON.parse(textarea.value);
             }

             executeCallbacks(optionsChangedCallbacks);
@ -269,95 +279,109 @@ onUiUpdate(function(){
         }
     });

-    json_elem.parentElement.style.display="none"
-
-    function registerTextarea(id, id_counter, id_button){
-        var prompt = gradioApp().getElementById(id)
-        var counter = gradioApp().getElementById(id_counter)
-        var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
-
-        if(counter.parentElement == prompt.parentElement){
-            return
-        }
-
-        prompt.parentElement.insertBefore(counter, prompt)
-        prompt.parentElement.style.position = "relative"
-        promptTokecountUpdateFuncs[id] = function(){ update_token_counter(id_button); }
-        textarea.addEventListener("input", promptTokecountUpdateFuncs[id]);
-    }
-
-    registerTextarea('txt2img_prompt', 'txt2img_token_counter', 'txt2img_token_button')
-    registerTextarea('txt2img_neg_prompt', 'txt2img_negative_token_counter', 'txt2img_negative_token_button')
-    registerTextarea('img2img_prompt', 'img2img_token_counter', 'img2img_token_button')
-    registerTextarea('img2img_neg_prompt', 'img2img_negative_token_counter', 'img2img_negative_token_button')
-
-    show_all_pages = gradioApp().getElementById('settings_show_all_pages')
-    settings_tabs = gradioApp().querySelector('#settings div')
-    if(show_all_pages && settings_tabs){
-        settings_tabs.appendChild(show_all_pages)
-        show_all_pages.onclick = function(){
-            gradioApp().querySelectorAll('#settings > div').forEach(function(elem){
-                elem.style.display = "block";
-            })
-        }
-    }
-})
+    json_elem.parentElement.style.display = "none";
+
+    setupTokenCounters();
+
+    var show_all_pages = gradioApp().getElementById('settings_show_all_pages');
+    var settings_tabs = gradioApp().querySelector('#settings div');
+    if (show_all_pages && settings_tabs) {
+        settings_tabs.appendChild(show_all_pages);
+        show_all_pages.onclick = function() {
+            gradioApp().querySelectorAll('#settings > div').forEach(function(elem) {
+                if (elem.id == "settings_tab_licenses") {
+                    return;
+                }
+
+                elem.style.display = "block";
+            });
+        };
+    }
+});
-onOptionsChanged(function(){
-    elem = gradioApp().getElementById('sd_checkpoint_hash')
-    sd_checkpoint_hash = opts.sd_checkpoint_hash || ""
-    shorthash = sd_checkpoint_hash.substr(0,10)
+onOptionsChanged(function() {
+    var elem = gradioApp().getElementById('sd_checkpoint_hash');
+    var sd_checkpoint_hash = opts.sd_checkpoint_hash || "";
+    var shorthash = sd_checkpoint_hash.substring(0, 10);

-    if(elem && elem.textContent != shorthash){
-        elem.textContent = shorthash
-        elem.title = sd_checkpoint_hash
-        elem.href = "https://google.com/search?q=" + sd_checkpoint_hash
+    if (elem && elem.textContent != shorthash) {
+        elem.textContent = shorthash;
+        elem.title = sd_checkpoint_hash;
+        elem.href = "https://google.com/search?q=" + sd_checkpoint_hash;
     }
-})
+});
 let txt2img_textarea, img2img_textarea = undefined;
-let wait_time = 800
-let token_timeouts = {};
-
-function update_txt2img_tokens(...args) {
-    update_token_counter("txt2img_token_button")
-    if (args.length == 2)
-        return args[0]
-    return args;
-}
-
-function update_img2img_tokens(...args) {
-    update_token_counter("img2img_token_button")
-    if (args.length == 2)
-        return args[0]
-    return args;
-}
-
-function update_token_counter(button_id) {
-    if (token_timeouts[button_id])
-        clearTimeout(token_timeouts[button_id]);
-    token_timeouts[button_id] = setTimeout(() => gradioApp().getElementById(button_id)?.click(), wait_time);
-}
-
-function restart_reload(){
-    document.body.innerHTML='<h1 style="font-family:monospace;margin-top:20%;color:lightgray;text-align:center;">Reloading...</h1>';
-    setTimeout(function(){location.reload()},2000)
-    return []
+
+function restart_reload() {
+    document.body.innerHTML = '<h1 style="font-family:monospace;margin-top:20%;color:lightgray;text-align:center;">Reloading...</h1>';
+
+    var requestPing = function() {
+        requestGet("./internal/ping", {}, function(data) {
+            location.reload();
+        }, function() {
+            setTimeout(requestPing, 500);
+        });
+    };
+
+    setTimeout(requestPing, 2000);
+
+    return [];
 }
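The reworked restart_reload no longer reloads the page blindly after two seconds; it polls ./internal/ping and reloads only once the server answers. A script waiting for a restarted instance can use the same endpoint. A minimal sketch, assuming a default local instance on port 7860 and the requests package:

# Sketch only (not part of the diff): wait for the webui to come back after
# a restart by polling the same ./internal/ping endpoint restart_reload() uses.
import time
import requests

def wait_for_webui(base_url="http://127.0.0.1:7860", timeout=120):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # any 2xx answer means the server is reachable again
            if requests.get(f"{base_url}/internal/ping", timeout=2).ok:
                return True
        except requests.RequestException:
            pass  # server still restarting; try again shortly
        time.sleep(0.5)
    return False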
 // Simulate an `input` DOM event for Gradio Textbox component. Needed after you edit its contents in javascript, otherwise your edits
 // will only visible on web page and not sent to python.
-function updateInput(target){
-    let e = new Event("input", { bubbles: true })
-    Object.defineProperty(e, "target", {value: target})
+function updateInput(target) {
+    let e = new Event("input", {bubbles: true});
+    Object.defineProperty(e, "target", {value: target});
     target.dispatchEvent(e);
 }

 var desiredCheckpointName = null;
-function selectCheckpoint(name){
+function selectCheckpoint(name) {
     desiredCheckpointName = name;
-    gradioApp().getElementById('change_checkpoint').click()
+    gradioApp().getElementById('change_checkpoint').click();
 }
+
+function currentImg2imgSourceResolution(w, h, scaleBy) {
+    var img = gradioApp().querySelector('#mode_img2img > div[style="display: block;"] img');
+    return img ? [img.naturalWidth, img.naturalHeight, scaleBy] : [0, 0, scaleBy];
+}
+
+function updateImg2imgResizeToTextAfterChangingImage() {
+    // At the time this is called from gradio, the image has not yet been replaced.
+    // There may be a better solution, but this is simple and straightforward so I'm going with it.
+
+    setTimeout(function() {
+        gradioApp().getElementById('img2img_update_resize_to').click();
+    }, 500);
+
+    return [];
+}
+
+function setRandomSeed(elem_id) {
+    var input = gradioApp().querySelector("#" + elem_id + " input");
+    if (!input) return [];
+
+    input.value = "-1";
+    updateInput(input);
+    return [];
+}
+
+function switchWidthHeight(tabname) {
+    var width = gradioApp().querySelector("#" + tabname + "_width input[type=number]");
+    var height = gradioApp().querySelector("#" + tabname + "_height input[type=number]");
+    if (!width || !height) return [];
+
+    var tmp = width.value;
+    width.value = height.value;
+    height.value = tmp;
+
+    updateInput(width);
+    updateInput(height);
+    return [];
+}


@ -0,0 +1,62 @@
// various hints and extra info for the settings tab

var settingsHintsSetup = false;

onOptionsChanged(function() {
    if (settingsHintsSetup) return;
    settingsHintsSetup = true;

    gradioApp().querySelectorAll('#settings [id^=setting_]').forEach(function(div) {
        var name = div.id.substr(8);
        var commentBefore = opts._comments_before[name];
        var commentAfter = opts._comments_after[name];

        if (!commentBefore && !commentAfter) return;

        var span = null;
        if (div.classList.contains('gradio-checkbox')) span = div.querySelector('label span');
        else if (div.classList.contains('gradio-checkboxgroup')) span = div.querySelector('span').firstChild;
        else if (div.classList.contains('gradio-radio')) span = div.querySelector('span').firstChild;
        else span = div.querySelector('label span').firstChild;

        if (!span) return;

        if (commentBefore) {
            var comment = document.createElement('DIV');
            comment.className = 'settings-comment';
            comment.innerHTML = commentBefore;
            span.parentElement.insertBefore(document.createTextNode('\xa0'), span);
            span.parentElement.insertBefore(comment, span);
            span.parentElement.insertBefore(document.createTextNode('\xa0'), span);
        }

        if (commentAfter) {
            comment = document.createElement('DIV');
            comment.className = 'settings-comment';
            comment.innerHTML = commentAfter;
            span.parentElement.insertBefore(comment, span.nextSibling);
            span.parentElement.insertBefore(document.createTextNode('\xa0'), span.nextSibling);
        }
    });
});

function settingsHintsShowQuicksettings() {
    requestGet("./internal/quicksettings-hint", {}, function(data) {
        var table = document.createElement('table');
        table.className = 'popup-table';

        data.forEach(function(obj) {
            var tr = document.createElement('tr');
            var td = document.createElement('td');
            td.textContent = obj.name;
            tr.appendChild(td);

            td = document.createElement('td');
            td.textContent = obj.label;
            tr.appendChild(td);

            table.appendChild(tr);
        });

        popup(table);
    });
}
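The helper above fetches name/label pairs from an internal endpoint and renders them as a table. The same data can be fetched from outside the UI; a sketch only, assuming a local webui on the default port (field names taken from how the JS reads each row):

# Sketch: fetching the same quicksettings hint data the JS helper uses.
import requests

resp = requests.get("http://127.0.0.1:7860/internal/quicksettings-hint", timeout=10)
for obj in resp.json():
    print(f"{obj['name']}: {obj['label']}")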

launch.py

@ -1,356 +1,39 @@
-# this scripts installs necessary requirements and launches main program in webui.py
-import subprocess
-import os
-import sys
-import importlib.util
-import shlex
-import platform
-import json
-
-from modules import cmd_args
-from modules.paths_internal import script_path, extensions_dir
-
-commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
-sys.argv += shlex.split(commandline_args)
-
-args, _ = cmd_args.parser.parse_known_args()
-
-python = sys.executable
-git = os.environ.get('GIT', "git")
-index_url = os.environ.get('INDEX_URL', "")
-stored_commit_hash = None
-skip_install = False
-dir_repos = "repositories"
-
-if 'GRADIO_ANALYTICS_ENABLED' not in os.environ:
-    os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False'
-
-
-def check_python_version():
-    is_windows = platform.system() == "Windows"
-    major = sys.version_info.major
-    minor = sys.version_info.minor
-    micro = sys.version_info.micro
-
-    if is_windows:
-        supported_minors = [10]
-    else:
-        supported_minors = [7, 8, 9, 10, 11]
-
-    if not (major == 3 and minor in supported_minors):
-        import modules.errors
-
-        modules.errors.print_error_explanation(f"""
-INCOMPATIBLE PYTHON VERSION
-
-This program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}.
-If you encounter an error with "RuntimeError: Couldn't install torch." message,
-or any other error regarding unsuccessful package (library) installation,
-please downgrade (or upgrade) to the latest version of 3.10 Python
-and delete current Python and "venv" folder in WebUI's directory.
-
-You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3109/
-
-{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases" if is_windows else ""}
-
-Use --skip-python-version-check to suppress this warning.
-""")
-
-
-def commit_hash():
-    global stored_commit_hash
-
-    if stored_commit_hash is not None:
-        return stored_commit_hash
-
-    try:
-        stored_commit_hash = run(f"{git} rev-parse HEAD").strip()
-    except Exception:
-        stored_commit_hash = "<none>"
-
-    return stored_commit_hash
-
-
-def run(command, desc=None, errdesc=None, custom_env=None, live=False):
-    if desc is not None:
-        print(desc)
-
-    if live:
-        result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env)
-        if result.returncode != 0:
-            raise RuntimeError(f"""{errdesc or 'Error running command'}.
-Command: {command}
-Error code: {result.returncode}""")
-
-        return ""
-
-    result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env)
-
-    if result.returncode != 0:
-        message = f"""{errdesc or 'Error running command'}.
-Command: {command}
-Error code: {result.returncode}
-stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else '<empty>'}
-stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else '<empty>'}
-"""
-        raise RuntimeError(message)
-
-    return result.stdout.decode(encoding="utf8", errors="ignore")
-
-
-def check_run(command):
-    result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
-    return result.returncode == 0
-
-
-def is_installed(package):
-    try:
-        spec = importlib.util.find_spec(package)
-    except ModuleNotFoundError:
-        return False
-
-    return spec is not None
-
-
-def repo_dir(name):
-    return os.path.join(script_path, dir_repos, name)
-
-
-def run_python(code, desc=None, errdesc=None):
-    return run(f'"{python}" -c "{code}"', desc, errdesc)
-
-
-def run_pip(args, desc=None):
-    if skip_install:
-        return
-
-    index_url_line = f' --index-url {index_url}' if index_url != '' else ''
-    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
-
-
-def check_run_python(code):
-    return check_run(f'"{python}" -c "{code}"')
-
-
-def git_clone(url, dir, name, commithash=None):
-    # TODO clone into temporary dir and move if successful
-
-    if os.path.exists(dir):
-        if commithash is None:
-            return
-
-        current_hash = run(f'"{git}" -C "{dir}" rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip()
-        if current_hash == commithash:
-            return
-
-        run(f'"{git}" -C "{dir}" fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}")
-        run(f'"{git}" -C "{dir}" checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}")
-        return
-
-    run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}")
-
-    if commithash is not None:
-        run(f'"{git}" -C "{dir}" checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
-
-
-def git_pull_recursive(dir):
-    for subdir, _, _ in os.walk(dir):
-        if os.path.exists(os.path.join(subdir, '.git')):
-            try:
-                output = subprocess.check_output([git, '-C', subdir, 'pull', '--autostash'])
-                print(f"Pulled changes for repository in '{subdir}':\n{output.decode('utf-8').strip()}\n")
-            except subprocess.CalledProcessError as e:
-                print(f"Couldn't perform 'git pull' on repository in '{subdir}':\n{e.output.decode('utf-8').strip()}\n")
-
-
-def version_check(commit):
-    try:
-        import requests
-        commits = requests.get('https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/branches/master').json()
-        if commit != "<none>" and commits['commit']['sha'] != commit:
-            print("--------------------------------------------------------")
-            print("| You are not up to date with the most recent release. |")
-            print("| Consider running `git pull` to update.               |")
-            print("--------------------------------------------------------")
-        elif commits['commit']['sha'] == commit:
-            print("You are up to date with the most recent release.")
-        else:
-            print("Not a git clone, can't perform version check.")
-    except Exception as e:
-        print("version check failed", e)
-
-
-def run_extension_installer(extension_dir):
-    path_installer = os.path.join(extension_dir, "install.py")
-    if not os.path.isfile(path_installer):
-        return
-
-    try:
-        env = os.environ.copy()
-        env['PYTHONPATH'] = os.path.abspath(".")
-
-        print(run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env))
-    except Exception as e:
-        print(e, file=sys.stderr)
-
-
-def list_extensions(settings_file):
-    settings = {}
-
-    try:
-        if os.path.isfile(settings_file):
-            with open(settings_file, "r", encoding="utf8") as file:
-                settings = json.load(file)
-    except Exception as e:
-        print(e, file=sys.stderr)
-
-    disabled_extensions = set(settings.get('disabled_extensions', []))
-    disable_all_extensions = settings.get('disable_all_extensions', 'none')
-
-    if disable_all_extensions != 'none':
-        return []
-
-    return [x for x in os.listdir(extensions_dir) if x not in disabled_extensions]
-
-
-def run_extensions_installers(settings_file):
-    if not os.path.isdir(extensions_dir):
-        return
-
-    for dirname_extension in list_extensions(settings_file):
-        run_extension_installer(os.path.join(extensions_dir, dirname_extension))
-
-
-def prepare_environment():
-    global skip_install
-
-    torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117")
-    requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
-
-    xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.16rc425')
-    gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
-    clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1")
-    openclip_package = os.environ.get('OPENCLIP_PACKAGE', "git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b")
-
-    stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/Stability-AI/stablediffusion.git")
-    taming_transformers_repo = os.environ.get('TAMING_TRANSFORMERS_REPO', "https://github.com/CompVis/taming-transformers.git")
-    k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
-    codeformer_repo = os.environ.get('CODEFORMER_REPO', 'https://github.com/sczhou/CodeFormer.git')
-    blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')
-
-    stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf")
-    taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
-    k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "5b3af030dd83e0297272d861c19477735d0317ec")
-    codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
-    blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")
-
-    if not args.skip_python_version_check:
-        check_python_version()
-
-    commit = commit_hash()
-
-    print(f"Python {sys.version}")
-    print(f"Commit hash: {commit}")
-
-    if args.reinstall_torch or not is_installed("torch") or not is_installed("torchvision"):
-        run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
-
-    if not args.skip_torch_cuda_test:
-        run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
-
-    if not is_installed("gfpgan"):
-        run_pip(f"install {gfpgan_package}", "gfpgan")
-
-    if not is_installed("clip"):
-        run_pip(f"install {clip_package}", "clip")
-
-    if not is_installed("open_clip"):
-        run_pip(f"install {openclip_package}", "open_clip")
-
-    if (not is_installed("xformers") or args.reinstall_xformers) and args.xformers:
-        if platform.system() == "Windows":
-            if platform.python_version().startswith("3.10"):
-                run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
-            else:
-                print("Installation of xformers is not supported in this version of Python.")
-                print("You can also check this and build manually: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers#building-xformers-on-windows-by-duckness")
-                if not is_installed("xformers"):
-                    exit(0)
-        elif platform.system() == "Linux":
-            run_pip(f"install {xformers_package}", "xformers")
-
-    if not is_installed("pyngrok") and args.ngrok:
-        run_pip("install pyngrok", "ngrok")
-
-    os.makedirs(os.path.join(script_path, dir_repos), exist_ok=True)
-
-    git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), "Stable Diffusion", stable_diffusion_commit_hash)
-    git_clone(taming_transformers_repo, repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
-    git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
-    git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
-    git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash)
-
-    if not is_installed("lpips"):
-        run_pip(f"install -r \"{os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}\"", "requirements for CodeFormer")
-
-    if not os.path.isfile(requirements_file):
-        requirements_file = os.path.join(script_path, requirements_file)
-    run_pip(f"install -r \"{requirements_file}\"", "requirements for Web UI")
-
-    run_extensions_installers(settings_file=args.ui_settings_file)
-
-    if args.update_check:
-        version_check(commit)
-
-    if args.update_all_extensions:
-        git_pull_recursive(extensions_dir)
-
-    if "--exit" in sys.argv:
-        print("Exiting because of --exit argument")
-        exit(0)
-
-    if args.tests and not args.no_tests:
-        exitcode = tests(args.tests)
-        exit(exitcode)
-
-
-def tests(test_dir):
-    if "--api" not in sys.argv:
-        sys.argv.append("--api")
-    if "--ckpt" not in sys.argv:
-        sys.argv.append("--ckpt")
-        sys.argv.append(os.path.join(script_path, "test/test_files/empty.pt"))
-    if "--skip-torch-cuda-test" not in sys.argv:
-        sys.argv.append("--skip-torch-cuda-test")
-    if "--disable-nan-check" not in sys.argv:
-        sys.argv.append("--disable-nan-check")
-    if "--no-tests" not in sys.argv:
-        sys.argv.append("--no-tests")
-
-    print(f"Launching Web UI in another process for testing with arguments: {' '.join(sys.argv[1:])}")
-
-    os.environ['COMMANDLINE_ARGS'] = ""
-    with open(os.path.join(script_path, 'test/stdout.txt'), "w", encoding="utf8") as stdout, open(os.path.join(script_path, 'test/stderr.txt'), "w", encoding="utf8") as stderr:
-        proc = subprocess.Popen([sys.executable, *sys.argv], stdout=stdout, stderr=stderr)
-
-    import test.server_poll
-    exitcode = test.server_poll.run_tests(proc, test_dir)
-
-    print(f"Stopping Web UI process with id {proc.pid}")
-    proc.kill()
-    return exitcode
-
-
-def start():
-    print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with arguments: {' '.join(sys.argv[1:])}")
-    import webui
-    if '--nowebui' in sys.argv:
-        webui.api_only()
-    else:
-        webui.webui()
-
-
+from modules import launch_utils
+
+args = launch_utils.args
+python = launch_utils.python
+git = launch_utils.git
+index_url = launch_utils.index_url
+dir_repos = launch_utils.dir_repos
+
+commit_hash = launch_utils.commit_hash
+git_tag = launch_utils.git_tag
+
+run = launch_utils.run
+is_installed = launch_utils.is_installed
+repo_dir = launch_utils.repo_dir
+
+run_pip = launch_utils.run_pip
+check_run_python = launch_utils.check_run_python
+git_clone = launch_utils.git_clone
+git_pull_recursive = launch_utils.git_pull_recursive
+list_extensions = launch_utils.list_extensions
+run_extension_installer = launch_utils.run_extension_installer
+prepare_environment = launch_utils.prepare_environment
+configure_for_tests = launch_utils.configure_for_tests
+start = launch_utils.start
+
+
+def main():
+    if not args.skip_prepare_environment:
+        prepare_environment()
+
+    if args.test_server:
+        configure_for_tests()
+
+    start()
+
+
 if __name__ == "__main__":
-    prepare_environment()
-    start()
+    main()
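The rewrite turns launch.py into a thin wrapper: all the helpers it used to define now live in modules.launch_utils and are re-exported for compatibility. A sketch of reusing them, assuming the webui root is the working directory (the attribute names are the ones re-exported above; the package name "piexif" is only an illustration):

# Sketch: using the re-exported helpers for a one-off dependency check,
# much like an extension's install.py would.
from modules import launch_utils

if not launch_utils.is_installed("piexif"):
    launch_utils.run_pip("install piexif", desc="piexif")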

modules/Roboto-Regular.ttf (new binary file, contents not shown)


@ -1,12 +1,12 @@
 import base64
 import io
+import os
 import time
 import datetime
 import uvicorn
 import gradio as gr
 from threading import Lock
 from io import BytesIO
-from gradio.processing_utils import decode_base64_to_file
 from fastapi import APIRouter, Depends, FastAPI, Request, Response
 from fastapi.security import HTTPBasic, HTTPBasicCredentials
 from fastapi.exceptions import HTTPException
@ -15,32 +15,31 @@ from fastapi.encoders import jsonable_encoder
 from secrets import compare_digest

 import modules.shared as shared
-from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing
-from modules.api.models import *
+from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart
+from modules.api import models
+from modules.shared import opts
 from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
 from modules.textual_inversion.textual_inversion import create_embedding, train_embedding
 from modules.textual_inversion.preprocess import preprocess
 from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork
 from PIL import PngImagePlugin,Image
-from modules.sd_models import checkpoints_list, unload_model_weights, reload_model_weights
+from modules.sd_models import checkpoints_list, unload_model_weights, reload_model_weights, checkpoint_aliases
+from modules.sd_vae import vae_dict
 from modules.sd_models_config import find_checkpoint_config_near_filename
 from modules.realesrgan_model import get_realesrgan_models
 from modules import devices
-from typing import List
+from typing import Dict, List, Any
 import piexif
 import piexif.helper
+from contextlib import closing

-
-def upscaler_to_index(name: str):
-    try:
-        return [x.name.lower() for x in shared.sd_upscalers].index(name.lower())
-    except:
-        raise HTTPException(status_code=400, detail=f"Invalid upscaler, needs to be one of these: {' , '.join([x.name for x in sd_upscalers])}")

 def script_name_to_index(name, scripts):
     try:
         return [script.title().lower() for script in scripts].index(name.lower())
-    except:
-        raise HTTPException(status_code=422, detail=f"Script '{name}' not found")
+    except Exception as e:
+        raise HTTPException(status_code=422, detail=f"Script '{name}' not found") from e
 def validate_sampler_name(name):
     config = sd_samplers.all_samplers_map.get(name, None)
@ -49,20 +48,23 @@ def validate_sampler_name(name):
     return name

 def setUpscalers(req: dict):
     reqDict = vars(req)
     reqDict['extras_upscaler_1'] = reqDict.pop('upscaler_1', None)
     reqDict['extras_upscaler_2'] = reqDict.pop('upscaler_2', None)
     return reqDict

 def decode_base64_to_image(encoding):
     if encoding.startswith("data:image/"):
         encoding = encoding.split(";")[1].split(",")[1]
     try:
         image = Image.open(BytesIO(base64.b64decode(encoding)))
         return image
-    except Exception as err:
-        raise HTTPException(status_code=500, detail="Invalid encoded image")
+    except Exception as e:
+        raise HTTPException(status_code=500, detail="Invalid encoded image") from e
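decode_base64_to_image accepts either raw base64 or a data URL, stripping the data:image/ prefix if present. A client-side sketch of producing both forms ("input.png" is a placeholder path):

# Sketch only: preparing an image payload decode_base64_to_image will accept.
import base64

with open("input.png", "rb") as f:
    raw_b64 = base64.b64encode(f.read()).decode("utf-8")

payload_plain = raw_b64                                # raw base64
payload_data_url = "data:image/png;base64," + raw_b64  # data URL form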
 def encode_pil_to_base64(image):
     with io.BytesIO() as output_bytes:
@ -77,6 +79,8 @@ def encode_pil_to_base64(image):
             image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality)

         elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"):
+            if image.mode == "RGBA":
+                image = image.convert("RGB")
             parameters = image.info.get('parameters', None)
             exif_bytes = piexif.dump({
                 "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode") }
@ -93,16 +97,18 @@ def encode_pil_to_base64(image):
         return base64.b64encode(bytes_data)

 def api_middleware(app: FastAPI):
-    rich_available = True
-    try:
-        import anyio # importing just so it can be placed on silent list
-        import starlette # importing just so it can be placed on silent list
-        from rich.console import Console
-        console = Console()
-    except:
-        import traceback
-        rich_available = False
+    rich_available = False
+    try:
+        if os.environ.get('WEBUI_RICH_EXCEPTIONS', None) is not None:
+            import anyio # importing just so it can be placed on silent list
+            import starlette # importing just so it can be placed on silent list
+            from rich.console import Console
+            console = Console()
+            rich_available = True
+    except Exception:
+        pass
     @app.middleware("http")
     async def log_and_time(req: Request, call_next):
@ -113,14 +119,14 @@ def api_middleware(app: FastAPI):
         endpoint = req.scope.get('path', 'err')
         if shared.cmd_opts.api_log and endpoint.startswith('/sdapi'):
             print('API {t} {code} {prot}/{ver} {method} {endpoint} {cli} {duration}'.format(
-                t = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f"),
-                code = res.status_code,
-                ver = req.scope.get('http_version', '0.0'),
-                cli = req.scope.get('client', ('0:0.0.0', 0))[0],
-                prot = req.scope.get('scheme', 'err'),
-                method = req.scope.get('method', 'err'),
-                endpoint = endpoint,
-                duration = duration,
+                t=datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f"),
+                code=res.status_code,
+                ver=req.scope.get('http_version', '0.0'),
+                cli=req.scope.get('client', ('0:0.0.0', 0))[0],
+                prot=req.scope.get('scheme', 'err'),
+                method=req.scope.get('method', 'err'),
+                endpoint=endpoint,
+                duration=duration,
             ))
         return res
@ -131,12 +137,13 @@ def api_middleware(app: FastAPI):
"body": vars(e).get('body', ''), "body": vars(e).get('body', ''),
"errors": str(e), "errors": str(e),
} }
print(f"API error: {request.method}: {request.url} {err}")
if not isinstance(e, HTTPException): # do not print backtrace on known httpexceptions if not isinstance(e, HTTPException): # do not print backtrace on known httpexceptions
message = f"API error: {request.method}: {request.url} {err}"
if rich_available: if rich_available:
print(message)
console.print_exception(show_locals=True, max_frames=2, extra_lines=1, suppress=[anyio, starlette], word_wrap=False, width=min([console.width, 200])) console.print_exception(show_locals=True, max_frames=2, extra_lines=1, suppress=[anyio, starlette], word_wrap=False, width=min([console.width, 200]))
else: else:
traceback.print_exc() errors.report(message, exc_info=True)
return JSONResponse(status_code=vars(e).get('status_code', 500), content=jsonable_encoder(err)) return JSONResponse(status_code=vars(e).get('status_code', 500), content=jsonable_encoder(err))
     @app.middleware("http")
@ -158,7 +165,7 @@ def api_middleware(app: FastAPI):
 class Api:
     def __init__(self, app: FastAPI, queue_lock: Lock):
         if shared.cmd_opts.api_auth:
-            self.credentials = dict()
+            self.credentials = {}
             for auth in shared.cmd_opts.api_auth.split(","):
                 user, password = auth.split(":")
                 self.credentials[user] = password
@ -167,36 +174,44 @@ class Api:
         self.app = app
         self.queue_lock = queue_lock
         api_middleware(self.app)
-        self.add_api_route("/sdapi/v1/txt2img", self.text2imgapi, methods=["POST"], response_model=TextToImageResponse)
-        self.add_api_route("/sdapi/v1/img2img", self.img2imgapi, methods=["POST"], response_model=ImageToImageResponse)
-        self.add_api_route("/sdapi/v1/extra-single-image", self.extras_single_image_api, methods=["POST"], response_model=ExtrasSingleImageResponse)
-        self.add_api_route("/sdapi/v1/extra-batch-images", self.extras_batch_images_api, methods=["POST"], response_model=ExtrasBatchImagesResponse)
-        self.add_api_route("/sdapi/v1/png-info", self.pnginfoapi, methods=["POST"], response_model=PNGInfoResponse)
-        self.add_api_route("/sdapi/v1/progress", self.progressapi, methods=["GET"], response_model=ProgressResponse)
+        self.add_api_route("/sdapi/v1/txt2img", self.text2imgapi, methods=["POST"], response_model=models.TextToImageResponse)
+        self.add_api_route("/sdapi/v1/img2img", self.img2imgapi, methods=["POST"], response_model=models.ImageToImageResponse)
+        self.add_api_route("/sdapi/v1/extra-single-image", self.extras_single_image_api, methods=["POST"], response_model=models.ExtrasSingleImageResponse)
+        self.add_api_route("/sdapi/v1/extra-batch-images", self.extras_batch_images_api, methods=["POST"], response_model=models.ExtrasBatchImagesResponse)
+        self.add_api_route("/sdapi/v1/png-info", self.pnginfoapi, methods=["POST"], response_model=models.PNGInfoResponse)
+        self.add_api_route("/sdapi/v1/progress", self.progressapi, methods=["GET"], response_model=models.ProgressResponse)
         self.add_api_route("/sdapi/v1/interrogate", self.interrogateapi, methods=["POST"])
         self.add_api_route("/sdapi/v1/interrupt", self.interruptapi, methods=["POST"])
         self.add_api_route("/sdapi/v1/skip", self.skip, methods=["POST"])
-        self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=OptionsModel)
+        self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=models.OptionsModel)
         self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"])
-        self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=FlagsModel)
-        self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[SamplerItem])
-        self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=List[UpscalerItem])
-        self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=List[SDModelItem])
-        self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[HypernetworkItem])
-        self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=List[FaceRestorerItem])
-        self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[RealesrganItem])
-        self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[PromptStyleItem])
-        self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=EmbeddingsResponse)
+        self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=models.FlagsModel)
+        self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[models.SamplerItem])
+        self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=List[models.UpscalerItem])
+        self.add_api_route("/sdapi/v1/latent-upscale-modes", self.get_latent_upscale_modes, methods=["GET"], response_model=List[models.LatentUpscalerModeItem])
+        self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=List[models.SDModelItem])
+        self.add_api_route("/sdapi/v1/sd-vae", self.get_sd_vaes, methods=["GET"], response_model=List[models.SDVaeItem])
+        self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[models.HypernetworkItem])
+        self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=List[models.FaceRestorerItem])
+        self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[models.RealesrganItem])
+        self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[models.PromptStyleItem])
+        self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=models.EmbeddingsResponse)
         self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
-        self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=CreateResponse)
-        self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=CreateResponse)
-        self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=PreprocessResponse)
-        self.add_api_route("/sdapi/v1/train/embedding", self.train_embedding, methods=["POST"], response_model=TrainResponse)
-        self.add_api_route("/sdapi/v1/train/hypernetwork", self.train_hypernetwork, methods=["POST"], response_model=TrainResponse)
-        self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=MemoryResponse)
+        self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=models.CreateResponse)
+        self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=models.CreateResponse)
+        self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=models.PreprocessResponse)
+        self.add_api_route("/sdapi/v1/train/embedding", self.train_embedding, methods=["POST"], response_model=models.TrainResponse)
+        self.add_api_route("/sdapi/v1/train/hypernetwork", self.train_hypernetwork, methods=["POST"], response_model=models.TrainResponse)
+        self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=models.MemoryResponse)
         self.add_api_route("/sdapi/v1/unload-checkpoint", self.unloadapi, methods=["POST"])
         self.add_api_route("/sdapi/v1/reload-checkpoint", self.reloadapi, methods=["POST"])
-        self.add_api_route("/sdapi/v1/scripts", self.get_scripts_list, methods=["GET"], response_model=ScriptsList)
+        self.add_api_route("/sdapi/v1/scripts", self.get_scripts_list, methods=["GET"], response_model=models.ScriptsList)
+        self.add_api_route("/sdapi/v1/script-info", self.get_script_info, methods=["GET"], response_model=List[models.ScriptInfo])
+
+        if shared.cmd_opts.api_server_stop:
+            self.add_api_route("/sdapi/v1/server-kill", self.kill_webui, methods=["POST"])
+            self.add_api_route("/sdapi/v1/server-restart", self.restart_webui, methods=["POST"])
+            self.add_api_route("/sdapi/v1/server-stop", self.stop_webui, methods=["POST"])

         self.default_script_arg_txt2img = []
         self.default_script_arg_img2img = []
@ -222,10 +237,18 @@ class Api:
         return script, script_idx

     def get_scripts_list(self):
-        t2ilist = [str(title.lower()) for title in scripts.scripts_txt2img.titles]
-        i2ilist = [str(title.lower()) for title in scripts.scripts_img2img.titles]
+        t2ilist = [script.name for script in scripts.scripts_txt2img.scripts if script.name is not None]
+        i2ilist = [script.name for script in scripts.scripts_img2img.scripts if script.name is not None]

-        return ScriptsList(txt2img = t2ilist, img2img = i2ilist)
+        return models.ScriptsList(txt2img=t2ilist, img2img=i2ilist)
+
+    def get_script_info(self):
+        res = []
+
+        for script_list in [scripts.scripts_txt2img.scripts, scripts.scripts_img2img.scripts]:
+            res += [script.api_info for script in script_list if script.api_info is not None]
+
+        return res
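A sketch of querying the script endpoints touched above, assuming a local webui started with --api on the default port:

# Sketch: list available scripts and their argument metadata.
import requests

base = "http://127.0.0.1:7860"
print(requests.get(f"{base}/sdapi/v1/scripts").json())      # {"txt2img": [...], "img2img": [...]}
print(requests.get(f"{base}/sdapi/v1/script-info").json())  # per-script info from the new endpoint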
     def get_script(self, script_name, script_runner):
         if script_name is None or script_name == "":
@ -262,20 +285,22 @@ class Api:
             script_args[0] = selectable_idx + 1

         # Now check for always on scripts
-        if request.alwayson_scripts and (len(request.alwayson_scripts) > 0):
+        if request.alwayson_scripts:
             for alwayson_script_name in request.alwayson_scripts.keys():
                 alwayson_script = self.get_script(alwayson_script_name, script_runner)
-                if alwayson_script == None:
+                if alwayson_script is None:
                     raise HTTPException(status_code=422, detail=f"always on script {alwayson_script_name} not found")
                 # Selectable script in always on script param check
-                if alwayson_script.alwayson == False:
-                    raise HTTPException(status_code=422, detail=f"Cannot have a selectable script in the always on scripts params")
+                if alwayson_script.alwayson is False:
+                    raise HTTPException(status_code=422, detail="Cannot have a selectable script in the always on scripts params")
                 # always on script with no arg should always run so you don't really need to add them to the requests
                 if "args" in request.alwayson_scripts[alwayson_script_name]:
-                    script_args[alwayson_script.args_from:alwayson_script.args_to] = request.alwayson_scripts[alwayson_script_name]["args"]
+                    # min between arg length in scriptrunner and arg length in the request
+                    for idx in range(0, min((alwayson_script.args_to - alwayson_script.args_from), len(request.alwayson_scripts[alwayson_script_name]["args"]))):
+                        script_args[alwayson_script.args_from + idx] = request.alwayson_scripts[alwayson_script_name]["args"][idx]
         return script_args
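For reference, a sketch of what an alwayson_scripts payload looks like from the client side; "SomeExtensionScript" and its args are placeholders (real names come from /sdapi/v1/scripts). With the clamping loop above, extra args beyond what the script declares are now ignored instead of corrupting the slice:

# Sketch: request body carrying args for an always-on script.
payload = {
    "prompt": "a photo of a cat",
    "steps": 20,
    "alwayson_scripts": {
        "SomeExtensionScript": {"args": [True, 0.5]},
    },
}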
-    def text2imgapi(self, txt2imgreq: StableDiffusionTxt2ImgProcessingAPI):
+    def text2imgapi(self, txt2imgreq: models.StableDiffusionTxt2ImgProcessingAPI):
         script_runner = scripts.scripts_txt2img
         if not script_runner.scripts:
             script_runner.initialize_scripts(False)
@ -303,25 +328,27 @@ class Api:
         args.pop('save_images', None)

         with self.queue_lock:
-            p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)
-            p.scripts = script_runner
-            p.outpath_grids = opts.outdir_txt2img_grids
-            p.outpath_samples = opts.outdir_txt2img_samples
-
-            shared.state.begin()
-            if selectable_scripts != None:
-                p.script_args = script_args
-                processed = scripts.scripts_txt2img.run(p, *p.script_args)  # Need to pass args as list here
-            else:
-                p.script_args = tuple(script_args)  # Need to pass args as tuple here
-                processed = process_images(p)
-            shared.state.end()
+            with closing(StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)) as p:
+                p.scripts = script_runner
+                p.outpath_grids = opts.outdir_txt2img_grids
+                p.outpath_samples = opts.outdir_txt2img_samples
+
+                try:
+                    shared.state.begin(job="scripts_txt2img")
+                    if selectable_scripts is not None:
+                        p.script_args = script_args
+                        processed = scripts.scripts_txt2img.run(p, *p.script_args)  # Need to pass args as list here
+                    else:
+                        p.script_args = tuple(script_args)  # Need to pass args as tuple here
+                        processed = process_images(p)
+                finally:
+                    shared.state.end()

         b64images = list(map(encode_pil_to_base64, processed.images)) if send_images else []

-        return TextToImageResponse(images=b64images, parameters=vars(txt2imgreq), info=processed.js())
+        return models.TextToImageResponse(images=b64images, parameters=vars(txt2imgreq), info=processed.js())
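A minimal sketch of calling the endpoint above and decoding the first returned image, assuming a local webui started with --api:

# Sketch: minimal txt2img call; images come back base64-encoded.
import base64
import requests

resp = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    json={"prompt": "a photo of a cat", "steps": 20},
    timeout=300,
)
resp.raise_for_status()
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))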
-    def img2imgapi(self, img2imgreq: StableDiffusionImg2ImgProcessingAPI):
+    def img2imgapi(self, img2imgreq: models.StableDiffusionImg2ImgProcessingAPI):
         init_images = img2imgreq.init_images
         if init_images is None:
             raise HTTPException(status_code=404, detail="Init image not found")
@ -359,19 +386,21 @@ class Api:
         args.pop('save_images', None)

         with self.queue_lock:
-            p = StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)
-            p.init_images = [decode_base64_to_image(x) for x in init_images]
-            p.scripts = script_runner
-            p.outpath_grids = opts.outdir_img2img_grids
-            p.outpath_samples = opts.outdir_img2img_samples
-
-            shared.state.begin()
-            if selectable_scripts != None:
-                p.script_args = script_args
-                processed = scripts.scripts_img2img.run(p, *p.script_args)  # Need to pass args as list here
-            else:
-                p.script_args = tuple(script_args)  # Need to pass args as tuple here
-                processed = process_images(p)
-            shared.state.end()
+            with closing(StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)) as p:
+                p.init_images = [decode_base64_to_image(x) for x in init_images]
+                p.scripts = script_runner
+                p.outpath_grids = opts.outdir_img2img_grids
+                p.outpath_samples = opts.outdir_img2img_samples
+
+                try:
+                    shared.state.begin(job="scripts_img2img")
+                    if selectable_scripts is not None:
+                        p.script_args = script_args
+                        processed = scripts.scripts_img2img.run(p, *p.script_args)  # Need to pass args as list here
+                    else:
+                        p.script_args = tuple(script_args)  # Need to pass args as tuple here
+                        processed = process_images(p)
+                finally:
+                    shared.state.end()

         b64images = list(map(encode_pil_to_base64, processed.images)) if send_images else []
@ -380,9 +409,9 @@ class Api:
             img2imgreq.init_images = None
             img2imgreq.mask = None

-        return ImageToImageResponse(images=b64images, parameters=vars(img2imgreq), info=processed.js())
+        return models.ImageToImageResponse(images=b64images, parameters=vars(img2imgreq), info=processed.js())

-    def extras_single_image_api(self, req: ExtrasSingleImageRequest):
+    def extras_single_image_api(self, req: models.ExtrasSingleImageRequest):
         reqDict = setUpscalers(req)

         reqDict['image'] = decode_base64_to_image(reqDict['image'])
@ -390,31 +419,26 @@ class Api:
with self.queue_lock: with self.queue_lock:
result = postprocessing.run_extras(extras_mode=0, image_folder="", input_dir="", output_dir="", save_output=False, **reqDict) result = postprocessing.run_extras(extras_mode=0, image_folder="", input_dir="", output_dir="", save_output=False, **reqDict)
return ExtrasSingleImageResponse(image=encode_pil_to_base64(result[0][0]), html_info=result[1]) return models.ExtrasSingleImageResponse(image=encode_pil_to_base64(result[0][0]), html_info=result[1])
def extras_batch_images_api(self, req: ExtrasBatchImagesRequest): def extras_batch_images_api(self, req: models.ExtrasBatchImagesRequest):
reqDict = setUpscalers(req) reqDict = setUpscalers(req)
def prepareFiles(file): image_list = reqDict.pop('imageList', [])
file = decode_base64_to_file(file.data, file_path=file.name) image_folder = [decode_base64_to_image(x.data) for x in image_list]
file.orig_name = file.name
return file
reqDict['image_folder'] = list(map(prepareFiles, reqDict['imageList']))
reqDict.pop('imageList')
with self.queue_lock: with self.queue_lock:
result = postprocessing.run_extras(extras_mode=1, image="", input_dir="", output_dir="", save_output=False, **reqDict) result = postprocessing.run_extras(extras_mode=1, image_folder=image_folder, image="", input_dir="", output_dir="", save_output=False, **reqDict)
return ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1]) return models.ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1])
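The batch endpoint now decodes each base64 payload directly to a PIL image instead of going through temporary files. A sketch of roughly what such a decoder does, assuming standard base64/PIL (the actual helper lives elsewhere in the module):

import base64
import io
from PIL import Image

def decode_base64_to_image(encoding: str) -> Image.Image:
    # Tolerate a data-URL prefix such as "data:image/png;base64,...".
    if encoding.startswith("data:image/"):
        encoding = encoding.split(",", 1)[1]
    return Image.open(io.BytesIO(base64.b64decode(encoding)))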
-    def pnginfoapi(self, req: PNGInfoRequest):
+    def pnginfoapi(self, req: models.PNGInfoRequest):
        if(not req.image.strip()):
-            return PNGInfoResponse(info="")
+            return models.PNGInfoResponse(info="")

        image = decode_base64_to_image(req.image.strip())
        if image is None:
-            return PNGInfoResponse(info="")
+            return models.PNGInfoResponse(info="")

        geninfo, items = images.read_info_from_image(image)
        if geninfo is None:

@@ -422,13 +446,13 @@ class Api:
        items = {**{'parameters': geninfo}, **items}

-        return PNGInfoResponse(info=geninfo, items=items)
+        return models.PNGInfoResponse(info=geninfo, items=items)

-    def progressapi(self, req: ProgressRequest = Depends()):
+    def progressapi(self, req: models.ProgressRequest = Depends()):
        # copy from check_progress_call of ui.py

        if shared.state.job_count == 0:
-            return ProgressResponse(progress=0, eta_relative=0, state=shared.state.dict(), textinfo=shared.state.textinfo)
+            return models.ProgressResponse(progress=0, eta_relative=0, state=shared.state.dict(), textinfo=shared.state.textinfo)

        # avoid dividing zero
        progress = 0.01

@@ -450,9 +474,9 @@ class Api:
        if shared.state.current_image and not req.skip_current_image:
            current_image = encode_pil_to_base64(shared.state.current_image)

-        return ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo)
+        return models.ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo)
-    def interrogateapi(self, interrogatereq: InterrogateRequest):
+    def interrogateapi(self, interrogatereq: models.InterrogateRequest):
        image_b64 = interrogatereq.image
        if image_b64 is None:
            raise HTTPException(status_code=404, detail="Image not found")

@@ -469,7 +493,7 @@ class Api:
        else:
            raise HTTPException(status_code=404, detail="Model not found")

-        return InterrogateResponse(caption=processed)
+        return models.InterrogateResponse(caption=processed)

    def interruptapi(self):
        shared.state.interrupt()

@@ -501,6 +525,10 @@ class Api:
        return options

    def set_config(self, req: Dict[str, Any]):
+        checkpoint_name = req.get("sd_model_checkpoint", None)
+        if checkpoint_name is not None and checkpoint_name not in checkpoint_aliases:
+            raise RuntimeError(f"model {checkpoint_name!r} not found")
+
        for k, v in req.items():
            shared.opts.set(k, v)
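set_config now rejects an unknown sd_model_checkpoint up front instead of applying part of the request first. A client-side sketch (the address and checkpoint title are illustrative):

import requests

resp = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/options",
    json={"sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors"},
)
resp.raise_for_status()  # a typo in the checkpoint name should now fail here rather than silently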
@@ -525,9 +553,20 @@ class Api:
            for upscaler in shared.sd_upscalers
        ]

+    def get_latent_upscale_modes(self):
+        return [
+            {
+                "name": upscale_mode,
+            }
+            for upscale_mode in [*(shared.latent_upscale_modes or {})]
+        ]
+
    def get_sd_models(self):
        return [{"title": x.title, "model_name": x.model_name, "hash": x.shorthash, "sha256": x.sha256, "filename": x.filename, "config": find_checkpoint_config_near_filename(x)} for x in checkpoints_list.values()]

+    def get_sd_vaes(self):
+        return [{"model_name": x, "filename": vae_dict[x]} for x in vae_dict.keys()]
+
    def get_hypernetworks(self):
        return [{"name": name, "path": shared.hypernetworks[name]} for name in shared.hypernetworks]
@@ -566,48 +605,47 @@ class Api:
        }

    def refresh_checkpoints(self):
-        shared.refresh_checkpoints()
+        with self.queue_lock:
+            shared.refresh_checkpoints()

    def create_embedding(self, args: dict):
        try:
-            shared.state.begin()
+            shared.state.begin(job="create_embedding")
            filename = create_embedding(**args)  # create empty embedding
            sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings()  # reload embeddings so new one can be immediately used
-            shared.state.end()
-            return CreateResponse(info = "create embedding filename: {filename}".format(filename = filename))
+            return models.CreateResponse(info=f"create embedding filename: {filename}")
        except AssertionError as e:
-            shared.state.end()
-            return TrainResponse(info = "create embedding error: {error}".format(error = e))
+            return models.TrainResponse(info=f"create embedding error: {e}")
+        finally:
+            shared.state.end()

    def create_hypernetwork(self, args: dict):
        try:
-            shared.state.begin()
+            shared.state.begin(job="create_hypernetwork")
            filename = create_hypernetwork(**args)  # create empty hypernetwork
-            shared.state.end()
-            return CreateResponse(info = "create hypernetwork filename: {filename}".format(filename = filename))
+            return models.CreateResponse(info=f"create hypernetwork filename: {filename}")
        except AssertionError as e:
-            shared.state.end()
-            return TrainResponse(info = "create hypernetwork error: {error}".format(error = e))
+            return models.TrainResponse(info=f"create hypernetwork error: {e}")
+        finally:
+            shared.state.end()
    def preprocess(self, args: dict):
        try:
-            shared.state.begin()
+            shared.state.begin(job="preprocess")
            preprocess(**args)  # quick operation unless blip/booru interrogation is enabled
            shared.state.end()
-            return PreprocessResponse(info = 'preprocess complete')
+            return models.PreprocessResponse(info='preprocess complete')
        except KeyError as e:
-            shared.state.end()
-            return PreprocessResponse(info = "preprocess error: invalid token: {error}".format(error = e))
-        except AssertionError as e:
-            shared.state.end()
-            return PreprocessResponse(info = "preprocess error: {error}".format(error = e))
-        except FileNotFoundError as e:
-            shared.state.end()
-            return PreprocessResponse(info = 'preprocess error: {error}'.format(error = e))
+            return models.PreprocessResponse(info=f"preprocess error: invalid token: {e}")
+        except Exception as e:
+            return models.PreprocessResponse(info=f"preprocess error: {e}")
+        finally:
+            shared.state.end()
    def train_embedding(self, args: dict):
        try:
-            shared.state.begin()
+            shared.state.begin(job="train_embedding")
            apply_optimizations = shared.opts.training_xattention_optimizations
            error = None
            filename = ''

@@ -620,15 +658,15 @@ class Api:
            finally:
                if not apply_optimizations:
                    sd_hijack.apply_optimizations()
-            shared.state.end()
-            return TrainResponse(info = "train embedding complete: filename: {filename} error: {error}".format(filename = filename, error = error))
-        except AssertionError as msg:
-            shared.state.end()
-            return TrainResponse(info = "train embedding error: {msg}".format(msg = msg))
+            return models.TrainResponse(info=f"train embedding complete: filename: {filename} error: {error}")
+        except Exception as msg:
+            return models.TrainResponse(info=f"train embedding error: {msg}")
+        finally:
+            shared.state.end()
    def train_hypernetwork(self, args: dict):
        try:
-            shared.state.begin()
+            shared.state.begin(job="train_hypernetwork")
            shared.loaded_hypernetworks = []
            apply_optimizations = shared.opts.training_xattention_optimizations
            error = None

@@ -645,14 +683,16 @@ class Api:
                if not apply_optimizations:
                    sd_hijack.apply_optimizations()
                shared.state.end()
-            return TrainResponse(info="train embedding complete: filename: {filename} error: {error}".format(filename=filename, error=error))
-        except AssertionError as msg:
-            shared.state.end()
-            return TrainResponse(info="train embedding error: {error}".format(error=error))
+            return models.TrainResponse(info=f"train hypernetwork complete: filename: {filename} error: {error}")
+        except Exception as exc:
+            return models.TrainResponse(info=f"train hypernetwork error: {exc}")
+        finally:
+            shared.state.end()
    def get_memory(self):
        try:
-            import os, psutil
+            import os
+            import psutil
            process = psutil.Process(os.getpid())
            res = process.memory_info()  # only rss is cross-platform guaranteed, so we don't rely on other values
            ram_total = 100 * res.rss / process.memory_percent()  # total memory is derived because reading the actual value is not cross-platform safe

@@ -679,11 +719,24 @@ class Api:
                    'events': warnings,
                }
            else:
-                cuda = { 'error': 'unavailable' }
+                cuda = {'error': 'unavailable'}
        except Exception as err:
-            cuda = { 'error': f'{err}' }
+            cuda = {'error': f'{err}'}
-        return MemoryResponse(ram = ram, cuda = cuda)
+        return models.MemoryResponse(ram=ram, cuda=cuda)
-    def launch(self, server_name, port):
+    def launch(self, server_name, port, root_path):
        self.app.include_router(self.router)
-        uvicorn.run(self.app, host=server_name, port=port)
+        uvicorn.run(self.app, host=server_name, port=port, timeout_keep_alive=shared.cmd_opts.timeout_keep_alive, root_path=root_path)
+
+    def kill_webui(self):
+        restart.stop_program()
+
+    def restart_webui(self):
+        if restart.is_restartable():
+            restart.restart_program()
+        return Response(status_code=501)
+
+    def stop_webui(request):
+        shared.state.server_command = "stop"
+        return Response("Stopping.")
modules/api/models.py
@@ -1,4 +1,5 @@
import inspect
from pydantic import BaseModel, Field, create_model
from typing import Any, Optional
from typing_extensions import Literal

@@ -207,11 +208,10 @@ class PreprocessResponse(BaseModel):
fields = {}
for key, metadata in opts.data_labels.items():
    value = opts.data.get(key)
-    optType = opts.typemap.get(type(metadata.default), type(value))
+    optType = opts.typemap.get(type(metadata.default), type(metadata.default)) if metadata.default else Any

-    if (metadata is not None):
-        fields.update({key: (Optional[optType], Field(
-            default=metadata.default ,description=metadata.label))})
+    if metadata is not None:
+        fields.update({key: (Optional[optType], Field(default=metadata.default, description=metadata.label))})
    else:
        fields.update({key: (Optional[optType], Field())})
@@ -223,8 +223,9 @@ for key in _options:
    if(_options[key].dest != 'help'):
        flag = _options[key]
        _type = str
-        if _options[key].default is not None: _type = type(_options[key].default)
-        flags.update({flag.dest: (_type,Field(default=flag.default, description=flag.help))})
+        if _options[key].default is not None:
+            _type = type(_options[key].default)
+        flags.update({flag.dest: (_type, Field(default=flag.default, description=flag.help))})

FlagsModel = create_model("Flags", **flags)
@@ -240,6 +241,9 @@ class UpscalerItem(BaseModel):
    model_url: Optional[str] = Field(title="URL")
    scale: Optional[float] = Field(title="Scale")

+class LatentUpscalerModeItem(BaseModel):
+    name: str = Field(title="Name")
+
class SDModelItem(BaseModel):
    title: str = Field(title="Title")
    model_name: str = Field(title="Model Name")

@@ -248,6 +252,10 @@ class SDModelItem(BaseModel):
    filename: str = Field(title="Filename")
    config: Optional[str] = Field(title="Config file")

+class SDVaeItem(BaseModel):
+    model_name: str = Field(title="Model Name")
+    filename: str = Field(title="Filename")
+
class HypernetworkItem(BaseModel):
    name: str = Field(title="Name")
    path: Optional[str] = Field(title="Path")
@@ -266,10 +274,6 @@ class PromptStyleItem(BaseModel):
    prompt: Optional[str] = Field(title="Prompt")
    negative_prompt: Optional[str] = Field(title="Negative Prompt")

-class ArtistItem(BaseModel):
-    name: str = Field(title="Name")
-    score: float = Field(title="Score")
-    category: str = Field(title="Category")
-
class EmbeddingItem(BaseModel):
    step: Optional[int] = Field(title="Step", description="The number of steps that were used to train this embedding, if available")

@@ -286,6 +290,23 @@ class MemoryResponse(BaseModel):
    ram: dict = Field(title="RAM", description="System memory stats")
    cuda: dict = Field(title="CUDA", description="nVidia CUDA memory stats")

class ScriptsList(BaseModel):
-    txt2img: list = Field(default=None,title="Txt2img", description="Titles of scripts (txt2img)")
-    img2img: list = Field(default=None,title="Img2img", description="Titles of scripts (img2img)")
+    txt2img: list = Field(default=None, title="Txt2img", description="Titles of scripts (txt2img)")
+    img2img: list = Field(default=None, title="Img2img", description="Titles of scripts (img2img)")
+class ScriptArg(BaseModel):
+    label: str = Field(default=None, title="Label", description="Name of the argument in UI")
+    value: Optional[Any] = Field(default=None, title="Value", description="Default value of the argument")
+    minimum: Optional[Any] = Field(default=None, title="Minimum", description="Minimum allowed value for the argument in UI")
+    maximum: Optional[Any] = Field(default=None, title="Maximum", description="Maximum allowed value for the argument in UI")
+    step: Optional[Any] = Field(default=None, title="Step", description="Step for changing the value of the argument in UI")
+    choices: Optional[List[str]] = Field(default=None, title="Choices", description="Possible values for the argument")
+
+class ScriptInfo(BaseModel):
+    name: str = Field(default=None, title="Name", description="Script name")
+    is_alwayson: bool = Field(default=None, title="IsAlwayson", description="Flag specifying whether this script is an alwayson script")
+    is_img2img: bool = Field(default=None, title="IsImg2img", description="Flag specifying whether this script is an img2img script")
+    args: List[ScriptArg] = Field(title="Arguments", description="List of script's arguments")
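Both the dynamic Options model above and these script models rely on pydantic's create_model, which builds a model class at runtime from name -> (type, Field) pairs. A minimal sketch with made-up fields:

from typing import Optional
from pydantic import Field, create_model

fields = {
    "sd_model_checkpoint": (Optional[str], Field(default=None, description="Checkpoint title")),
    "CLIP_stop_at_last_layers": (Optional[int], Field(default=1, description="Clip skip")),
}

OptionsModel = create_model("Options", **fields)
print(OptionsModel().dict())  # {'sd_model_checkpoint': None, 'CLIP_stop_at_last_layers': 1}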

modules/cache.py (new file, 120 lines)
@@ -0,0 +1,120 @@
import json
import os.path
import threading
import time

from modules.paths import data_path, script_path

cache_filename = os.path.join(data_path, "cache.json")
cache_data = None
cache_lock = threading.Lock()

dump_cache_after = None
dump_cache_thread = None


def dump_cache():
    """
    Marks the cache for writing to disk; the file is written 5 seconds after the last call to this function.
    """

    global dump_cache_after
    global dump_cache_thread

    def thread_func():
        global dump_cache_after
        global dump_cache_thread

        while dump_cache_after is not None and time.time() < dump_cache_after:
            time.sleep(1)

        with cache_lock:
            with open(cache_filename, "w", encoding="utf8") as file:
                json.dump(cache_data, file, indent=4)

            dump_cache_after = None
            dump_cache_thread = None

    with cache_lock:
        dump_cache_after = time.time() + 5
        if dump_cache_thread is None:
            dump_cache_thread = threading.Thread(name='cache-writer', target=thread_func)
            dump_cache_thread.start()


def cache(subsection):
    """
    Retrieves or initializes a cache for a specific subsection.

    Parameters:
        subsection (str): The subsection identifier for the cache.

    Returns:
        dict: The cache data for the specified subsection.
    """

    global cache_data

    if cache_data is None:
        with cache_lock:
            if cache_data is None:
                if not os.path.isfile(cache_filename):
                    cache_data = {}
                else:
                    try:
                        with open(cache_filename, "r", encoding="utf8") as file:
                            cache_data = json.load(file)
                    except Exception:
                        os.replace(cache_filename, os.path.join(script_path, "tmp", "cache.json"))
                        print('[ERROR] issue occurred while trying to read cache.json; moved the corrupted file to tmp/cache.json and created a new, empty cache')
                        cache_data = {}

    s = cache_data.get(subsection, {})
    cache_data[subsection] = s

    return s


def cached_data_for_file(subsection, title, filename, func):
    """
    Retrieves or generates data for a specific file, using a caching mechanism.

    Parameters:
        subsection (str): The subsection of the cache to use.
        title (str): The title of the data entry in the subsection of the cache.
        filename (str): The path to the file to be checked for modifications.
        func (callable): A function that generates the data if it is not available in the cache.

    Returns:
        dict or None: The cached or generated data, or None if data generation fails.

    The `cached_data_for_file` function implements a caching mechanism for data stored in files.
    It checks if the data associated with the given `title` is present in the cache and compares the
    modification time of the file with the cached modification time. If the file has been modified,
    the cache is considered invalid and the data is regenerated using the provided `func`.
    Otherwise, the cached data is returned.

    If the data generation fails, None is returned to indicate the failure. Otherwise, the generated
    or cached data is returned as a dictionary.
    """

    existing_cache = cache(subsection)
    ondisk_mtime = os.path.getmtime(filename)

    entry = existing_cache.get(title)
    if entry:
        cached_mtime = entry.get("mtime", 0)
        if ondisk_mtime > cached_mtime:
            entry = None

    if not entry or 'value' not in entry:
        value = func()
        if value is None:
            return None

        entry = {'mtime': ondisk_mtime, 'value': value}
        existing_cache[title] = entry

        dump_cache()

    return entry['value']
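A minimal usage sketch of the module (the file name and hashing callback are illustrative; within the webui this is how expensive per-file metadata such as hashes gets memoized):

import hashlib
from modules import cache

def compute_sha256():
    with open("model.safetensors", "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Recomputed only when the file's mtime is newer than the cached entry;
# otherwise the value is served straight from cache.json.
digest = cache.cached_data_for_file("hashes", "checkpoint/model", "model.safetensors", compute_sha256)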

modules/call_queue.py
@ -1,10 +1,9 @@
from functools import wraps
import html import html
import sys
import threading import threading
import traceback
import time import time
from modules import shared, progress from modules import shared, progress, errors
queue_lock = threading.Lock() queue_lock = threading.Lock()
@@ -20,21 +19,23 @@ def wrap_queued_call(func):

def wrap_gradio_gpu_call(func, extra_outputs=None):
+    @wraps(func)
    def f(*args, **kwargs):

        # if the first argument is a string that says "task(...)", it is treated as a job id
-        if len(args) > 0 and type(args[0]) == str and args[0][0:5] == "task(" and args[0][-1] == ")":
+        if args and type(args[0]) == str and args[0].startswith("task(") and args[0].endswith(")"):
            id_task = args[0]
            progress.add_task_to_queue(id_task)
        else:
            id_task = None

        with queue_lock:
-            shared.state.begin()
+            shared.state.begin(job=id_task)
            progress.start_task(id_task)

            try:
                res = func(*args, **kwargs)
+                progress.record_results(id_task, res)
            finally:
                progress.finish_task(id_task)
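The convention is that a first positional argument of the form "task(...)" marks a queued job. The check in isolation, as a runnable sketch:

def extract_task_id(args: tuple):
    # a first positional argument shaped like "task(<id>)" is treated as a job id
    if args and isinstance(args[0], str) and args[0].startswith("task(") and args[0].endswith(")"):
        return args[0]
    return None

assert extract_task_id(("task(abc123)", "prompt")) == "task(abc123)"
assert extract_task_id(("prompt",)) is None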
@@ -46,6 +47,7 @@ def wrap_gradio_gpu_call(func, extra_outputs=None):

def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
+    @wraps(func)
    def f(*args, extra_outputs_array=extra_outputs, **kwargs):
        run_memmon = shared.opts.memmon_poll_rate > 0 and not shared.mem_mon.disabled and add_stats
        if run_memmon:

@@ -55,16 +57,14 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
        try:
            res = list(func(*args, **kwargs))
        except Exception as e:
-            # When printing out our debug argument list, do not print out more than a MB of text
-            max_debug_str_len = 131072 # (1024*1024)/8
-
-            print("Error completing request", file=sys.stderr)
-            argStr = f"Arguments: {str(args)} {str(kwargs)}"
-            print(argStr[:max_debug_str_len], file=sys.stderr)
-            if len(argStr) > max_debug_str_len:
-                print(f"(Argument list truncated at {max_debug_str_len}/{len(argStr)} characters)", file=sys.stderr)
-            print(traceback.format_exc(), file=sys.stderr)
+            # when printing out our debug argument list, do not print out more than 128 KB of text
+            max_debug_str_len = 131072
+
+            message = "Error completing request"
+            arg_str = f"Arguments: {args} {kwargs}"
+            if len(arg_str) > max_debug_str_len:
+                arg_str = arg_str[:max_debug_str_len] + f" (Argument list truncated at {max_debug_str_len}/{len(arg_str)} characters)"
+            errors.report(f"{message}\n{arg_str}", exc_info=True)

            shared.state.job = ""
            shared.state.job_count = 0

@@ -72,7 +72,8 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
            if extra_outputs_array is None:
                extra_outputs_array = [None, '']

-            res = extra_outputs_array + [f"<div class='error'>{html.escape(type(e).__name__+': '+str(e))}</div>"]
+            error_message = f'{type(e).__name__}: {e}'
+            res = extra_outputs_array + [f"<div class='error'>{html.escape(error_message)}</div>"]
        shared.state.skipped = False
        shared.state.interrupted = False

@@ -84,9 +85,9 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
        elapsed = time.perf_counter() - t
        elapsed_m = int(elapsed // 60)
        elapsed_s = elapsed % 60
-        elapsed_text = f"{elapsed_s:.2f}s"
+        elapsed_text = f"{elapsed_s:.1f} sec."
        if elapsed_m > 0:
-            elapsed_text = f"{elapsed_m}m "+elapsed_text
+            elapsed_text = f"{elapsed_m} min. "+elapsed_text

        if run_memmon:
            mem_stats = {k: -(v//-(1024*1024)) for k, v in shared.mem_mon.stop().items()}

@@ -94,16 +95,23 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
            reserved_peak = mem_stats['reserved_peak']
            sys_peak = mem_stats['system_peak']
            sys_total = mem_stats['total']
-            sys_pct = round(sys_peak/max(sys_total, 1) * 100, 2)
+            sys_pct = sys_peak/max(sys_total, 1) * 100

-            vram_html = f"<p class='vram'>Torch active/reserved: {active_peak}/{reserved_peak} MiB, <wbr>Sys VRAM: {sys_peak}/{sys_total} MiB ({sys_pct}%)</p>"
+            tooltip_a = "Active: peak amount of video memory used during generation (excluding cached data)"
+            tooltip_r = "Reserved: total amount of video memory allocated by the Torch library"
+            tooltip_sys = "System: peak amount of video memory allocated by all running programs, out of total capacity"
+
+            text_a = f"<abbr title='{tooltip_a}'>A</abbr>: <span class='measurement'>{active_peak/1024:.2f} GB</span>"
+            text_r = f"<abbr title='{tooltip_r}'>R</abbr>: <span class='measurement'>{reserved_peak/1024:.2f} GB</span>"
+            text_sys = f"<abbr title='{tooltip_sys}'>Sys</abbr>: <span class='measurement'>{sys_peak/1024:.1f}/{sys_total/1024:g} GB</span> ({sys_pct:.1f}%)"
+
+            vram_html = f"<p class='vram'>{text_a}, <wbr>{text_r}, <wbr>{text_sys}</p>"
        else:
            vram_html = ''

        # last item is always HTML
-        res[-1] += f"<div class='performance'><p class='time'>Time taken: <wbr>{elapsed_text}</p>{vram_html}</div>"
+        res[-1] += f"<div class='performance'><p class='time'>Time taken: <wbr><span class='measurement'>{elapsed_text}</span></p>{vram_html}</div>"

        return tuple(res)

    return f

modules/cmd_args.py
@@ -1,6 +1,7 @@
import argparse
+import json
import os
-from modules.paths_internal import models_path, script_path, data_path, extensions_dir, extensions_builtin_dir, sd_default_config, sd_model_file
+from modules.paths_internal import models_path, script_path, data_path, extensions_dir, extensions_builtin_dir, sd_default_config, sd_model_file  # noqa: F401

parser = argparse.ArgumentParser()

@@ -10,10 +11,11 @@
parser.add_argument("--skip-torch-cuda-test", action='store_true', help="launch.py argument: do not check if CUDA is able to work properly")
parser.add_argument("--reinstall-xformers", action='store_true', help="launch.py argument: install the appropriate version of xformers even if you have some version already installed")
parser.add_argument("--reinstall-torch", action='store_true', help="launch.py argument: install the appropriate version of torch even if you have some version already installed")
-parser.add_argument("--update-check", action='store_true', help="launch.py argument: chck for updates at startup")
-parser.add_argument("--tests", type=str, default=None, help="launch.py argument: run tests in the specified directory")
-parser.add_argument("--no-tests", action='store_true', help="launch.py argument: do not run tests even if --tests option is specified")
+parser.add_argument("--update-check", action='store_true', help="launch.py argument: check for updates at startup")
+parser.add_argument("--test-server", action='store_true', help="launch.py argument: configure server for testing")
+parser.add_argument("--skip-prepare-environment", action='store_true', help="launch.py argument: skip all environment preparation")
parser.add_argument("--skip-install", action='store_true', help="launch.py argument: skip installation of packages")
+parser.add_argument("--do-not-download-clip", action='store_true', help="do not download CLIP model even if it's not included in the checkpoint")
parser.add_argument("--data-dir", type=str, default=os.path.dirname(os.path.dirname(os.path.realpath(__file__))), help="base path where all user data is stored")
parser.add_argument("--config", type=str, default=sd_default_config, help="path to config which constructs model",)
parser.add_argument("--ckpt", type=str, default=sd_model_file, help="path to checkpoint of stable diffusion model; if specified, this checkpoint will be added to the list of checkpoints and loaded",)

@@ -39,7 +41,8 @@
parser.add_argument("--upcast-sampling", action='store_true', help="upcast sampling. No effect with --no-half. Usually produces similar results to --no-half with better performance while using less memory.")
parser.add_argument("--share", action='store_true', help="use share=True for gradio and make the UI accessible through their site")
parser.add_argument("--ngrok", type=str, help="ngrok authtoken, alternative to gradio --share", default=None)
-parser.add_argument("--ngrok-region", type=str, help="The region in which ngrok should start.", default="us")
+parser.add_argument("--ngrok-region", type=str, help="does not do anything.", default="")
+parser.add_argument("--ngrok-options", type=json.loads, help='The options to pass to ngrok in JSON format, e.g.: \'{"authtoken_from_env":true, "basic_auth":"user:password", "oauth_provider":"google", "oauth_allow_emails":"user@asdf.com"}\'', default=dict())
parser.add_argument("--enable-insecure-extension-access", action='store_true', help="enable extensions tab regardless of other options")
parser.add_argument("--codeformer-models-path", type=str, help="Path to directory with codeformer model file(s).", default=os.path.join(models_path, 'Codeformer'))
parser.add_argument("--gfpgan-models-path", type=str, help="Path to directory with GFPGAN model file(s).", default=os.path.join(models_path, 'GFPGAN'))
@@ -51,16 +54,16 @@
parser.add_argument("--force-enable-xformers", action='store_true', help="enable xformers for cross attention layers regardless of whether the checking code thinks you can run it; do not make bug reports if this fails to work")
parser.add_argument("--xformers-flash-attention", action='store_true', help="enable xformers with Flash Attention to improve reproducibility (supported for SD2.x or variant only)")
parser.add_argument("--deepdanbooru", action='store_true', help="does not do anything")
-parser.add_argument("--opt-split-attention", action='store_true', help="force-enables Doggettx's cross-attention layer optimization. By default, it's on for torch cuda.")
-parser.add_argument("--opt-sub-quad-attention", action='store_true', help="enable memory efficient sub-quadratic cross-attention layer optimization")
+parser.add_argument("--opt-split-attention", action='store_true', help="prefer Doggettx's cross-attention layer optimization for automatic choice of optimization")
+parser.add_argument("--opt-sub-quad-attention", action='store_true', help="prefer memory efficient sub-quadratic cross-attention layer optimization for automatic choice of optimization")
parser.add_argument("--sub-quad-q-chunk-size", type=int, help="query chunk size for the sub-quadratic cross-attention layer optimization to use", default=1024)
parser.add_argument("--sub-quad-kv-chunk-size", type=int, help="kv chunk size for the sub-quadratic cross-attention layer optimization to use", default=None)
parser.add_argument("--sub-quad-chunk-threshold", type=int, help="the percentage of VRAM threshold for the sub-quadratic cross-attention layer optimization to use chunking", default=None)
-parser.add_argument("--opt-split-attention-invokeai", action='store_true', help="force-enables InvokeAI's cross-attention layer optimization. By default, it's on when cuda is unavailable.")
-parser.add_argument("--opt-split-attention-v1", action='store_true', help="enable older version of split attention optimization that does not consume all the VRAM it can find")
-parser.add_argument("--opt-sdp-attention", action='store_true', help="enable scaled dot product cross-attention layer optimization; requires PyTorch 2.*")
-parser.add_argument("--opt-sdp-no-mem-attention", action='store_true', help="enable scaled dot product cross-attention layer optimization without memory efficient attention, makes image generation deterministic; requires PyTorch 2.*")
-parser.add_argument("--disable-opt-split-attention", action='store_true', help="force-disables cross-attention layer optimization")
+parser.add_argument("--opt-split-attention-invokeai", action='store_true', help="prefer InvokeAI's cross-attention layer optimization for automatic choice of optimization")
+parser.add_argument("--opt-split-attention-v1", action='store_true', help="prefer older version of split attention optimization for automatic choice of optimization")
+parser.add_argument("--opt-sdp-attention", action='store_true', help="prefer scaled dot product cross-attention layer optimization for automatic choice of optimization; requires PyTorch 2.*")
+parser.add_argument("--opt-sdp-no-mem-attention", action='store_true', help="prefer scaled dot product cross-attention layer optimization without memory efficient attention for automatic choice of optimization, makes image generation deterministic; requires PyTorch 2.*")
+parser.add_argument("--disable-opt-split-attention", action='store_true', help="prefer no cross-attention layer optimization for automatic choice of optimization")
parser.add_argument("--disable-nan-check", action='store_true', help="do not check if produced images/latent spaces have nans; useful for running without a checkpoint in CI")
parser.add_argument("--use-cpu", nargs='+', help="use CPU as torch device for specified modules", default=[], type=str.lower)
parser.add_argument("--listen", action='store_true', help="launch gradio with 0.0.0.0 as server name, allowing to respond to network requests")
@@ -75,6 +78,7 @@
parser.add_argument("--gradio-auth-path", type=str, help='set gradio authentication file path ex. "/path/to/auth/file" same auth format as --gradio-auth', default=None)
parser.add_argument("--gradio-img2img-tool", type=str, help='does not do anything')
parser.add_argument("--gradio-inpaint-tool", type=str, help="does not do anything")
+parser.add_argument("--gradio-allowed-path", action='append', help="add path to gradio's allowed_paths, make it possible to serve files from it")
parser.add_argument("--opt-channelslast", action='store_true', help="change memory type for stable diffusion to channels last")
parser.add_argument("--styles-file", type=str, help="filename to use for styles", default=os.path.join(data_path, 'styles.csv'))
parser.add_argument("--autolaunch", action='store_true', help="open the webui URL in the system's default browser upon launch", default=False)

@@ -95,9 +99,14 @@
parser.add_argument("--cors-allow-origins-regex", type=str, help="Allowed CORS origin(s) in the form of a single regular expression", default=None)
parser.add_argument("--tls-keyfile", type=str, help="Partially enables TLS, requires --tls-certfile to fully function", default=None)
parser.add_argument("--tls-certfile", type=str, help="Partially enables TLS, requires --tls-keyfile to fully function", default=None)
+parser.add_argument("--disable-tls-verify", action="store_false", help="When passed, enables the use of self-signed certificates.", default=None)
parser.add_argument("--server-name", type=str, help="Sets hostname of server", default=None)
parser.add_argument("--gradio-queue", action='store_true', help="does not do anything", default=True)
parser.add_argument("--no-gradio-queue", action='store_true', help="Disables gradio queue; causes the webpage to use http requests instead of websockets; was the default in earlier versions")
parser.add_argument("--skip-version-check", action='store_true', help="Do not check versions of torch and xformers")
parser.add_argument("--no-hashing", action='store_true', help="disable sha256 hashing of checkpoints to help loading performance", default=False)
parser.add_argument("--no-download-sd-model", action='store_true', help="don't download SD1.5 model even if no model is found in --ckpt-dir", default=False)
+parser.add_argument('--subpath', type=str, help='customize the subpath for gradio, use with reverse proxy')
+parser.add_argument('--add-stop-route', action='store_true', help='add /_stop route to stop server')
+parser.add_argument('--api-server-stop', action='store_true', help='enable server stop/restart/kill via api')
+parser.add_argument('--timeout-keep-alive', type=int, default=30, help='set timeout_keep_alive for uvicorn')
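Note the type=json.loads trick on --ngrok-options: argparse passes the raw string through the callable, so the flag arrives as a parsed dict. A standalone sketch:

import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("--ngrok-options", type=json.loads, default=dict())

args = parser.parse_args(['--ngrok-options', '{"basic_auth": "user:password"}'])
print(args.ngrok_options["basic_auth"])  # user:password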

modules/codeformer/codeformer_arch.py
@@ -1,14 +1,12 @@
# this file is copied from CodeFormer repository. Please see comment in modules/codeformer_model.py

import math
-import numpy as np
import torch
from torch import nn, Tensor
import torch.nn.functional as F
-from typing import Optional, List
+from typing import Optional

-from modules.codeformer.vqgan_arch import *
-from basicsr.utils import get_root_logger
+from modules.codeformer.vqgan_arch import VQAutoEncoder, ResBlock
from basicsr.utils.registry import ARCH_REGISTRY

def calc_mean_std(feat, eps=1e-5):

@@ -163,8 +161,8 @@ class Fuse_sft_block(nn.Module):
class CodeFormer(VQAutoEncoder):
    def __init__(self, dim_embd=512, n_head=8, n_layers=9,
                 codebook_size=1024, latent_size=256,
-                 connect_list=['32', '64', '128', '256'],
-                 fix_modules=['quantize','generator']):
+                 connect_list=('32', '64', '128', '256'),
+                 fix_modules=('quantize', 'generator')):
        super(CodeFormer, self).__init__(512, 64, [1, 2, 2, 4, 4, 8], 'nearest',2, [16], codebook_size)

        if fix_modules is not None:
modules/codeformer/vqgan_arch.py
@@ -2,14 +2,12 @@
'''
VQGAN code, adapted from the original created by the Unleashing Transformers authors:
-https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py
+https://ghproxy.com/https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py
'''

-import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
-import copy
from basicsr.utils import get_root_logger
from basicsr.utils.registry import ARCH_REGISTRY

@@ -328,7 +326,7 @@ class Generator(nn.Module):
@ARCH_REGISTRY.register()
class VQAutoEncoder(nn.Module):
-    def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=[16], codebook_size=1024, emb_dim=256,
+    def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=None, codebook_size=1024, emb_dim=256,
                 beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None):
        super().__init__()
        logger = get_root_logger()

@@ -339,7 +337,7 @@ class VQAutoEncoder(nn.Module):
        self.embed_dim = emb_dim
        self.ch_mult = ch_mult
        self.resolution = img_size
-        self.attn_resolutions = attn_resolutions
+        self.attn_resolutions = attn_resolutions or [16]
        self.quantizer_type = quantizer
        self.encoder = Encoder(
            self.in_channels,
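This change swaps the classic mutable-default-argument pitfall (attn_resolutions=[16]) for the None-plus-fallback idiom. A minimal sketch of why, with hypothetical classes:

class Bad:
    def __init__(self, resolutions=[16]):       # one shared list for every instance
        self.resolutions = resolutions

class Good:
    def __init__(self, resolutions=None):
        self.resolutions = resolutions or [16]  # a fresh list per instance

a, b = Bad(), Bad()
a.resolutions.append(32)
print(b.resolutions)  # [16, 32] - b was mutated through a

c, d = Good(), Good()
c.resolutions.append(32)
print(d.resolutions)  # [16]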

modules/codeformer_model.py
@@ -1,13 +1,11 @@
import os
-import sys
-import traceback

import cv2
import torch

import modules.face_restoration
import modules.shared
-from modules import shared, devices, modelloader
+from modules import shared, devices, modelloader, errors
from modules.paths import models_path

# codeformer people made a choice to include modified basicsr library to their project which makes

@@ -15,16 +13,13 @@ from modules.paths import models_path
# I am making a choice to include some files from codeformer to work around this issue.
model_dir = "Codeformer"
model_path = os.path.join(models_path, model_dir)
-model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth'
+model_url = 'https://ghproxy.com/https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth'

-have_codeformer = False
codeformer = None

def setup_model(dirname):
-    global model_path
-    if not os.path.exists(model_path):
-        os.makedirs(model_path)
+    os.makedirs(model_path, exist_ok=True)

    path = modules.paths.paths.get("CodeFormer", None)
    if path is None:

@@ -33,11 +28,9 @@ def setup_model(dirname):
    try:
        from torchvision.transforms.functional import normalize
        from modules.codeformer.codeformer_arch import CodeFormer
-        from basicsr.utils.download_util import load_file_from_url
-        from basicsr.utils import imwrite, img2tensor, tensor2img
+        from basicsr.utils import img2tensor, tensor2img
        from facelib.utils.face_restoration_helper import FaceRestoreHelper
        from facelib.detection.retinaface import retinaface
-        from modules.shared import cmd_opts

        net_class = CodeFormer

@@ -96,7 +89,7 @@ def setup_model(dirname):
            self.face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5)
            self.face_helper.align_warp_face()

-            for idx, cropped_face in enumerate(self.face_helper.cropped_faces):
+            for cropped_face in self.face_helper.cropped_faces:
                cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
                normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
                cropped_face_t = cropped_face_t.unsqueeze(0).to(devices.device_codeformer)

@@ -106,9 +99,9 @@ def setup_model(dirname):
                    output = self.net(cropped_face_t, w=w if w is not None else shared.opts.code_former_weight, adain=True)[0]
                    restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
                    del output
-                    torch.cuda.empty_cache()
-                except Exception as error:
-                    print(f'\tFailed inference for CodeFormer: {error}', file=sys.stderr)
+                    devices.torch_gc()
+                except Exception:
+                    errors.report('Failed inference for CodeFormer', exc_info=True)
                    restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))

                restored_face = restored_face.astype('uint8')

@@ -129,15 +122,11 @@ def setup_model(dirname):

            return restored_img

-        global have_codeformer
-        have_codeformer = True
-
        global codeformer
        codeformer = FaceRestorerCodeFormer(dirname)
        shared.face_restorers.append(codeformer)

    except Exception:
-        print("Error setting up CodeFormer:", file=sys.stderr)
-        print(traceback.format_exc(), file=sys.stderr)
+        errors.report("Error setting up CodeFormer", exc_info=True)

    # sys.path = stored_sys_path
modules/config_states.py (new file, 197 lines)
@ -0,0 +1,197 @@
"""
Supports saving and restoring webui and extensions from a known working set of commits
"""
import os
import json
import time
import tqdm
from datetime import datetime
from collections import OrderedDict
import git
from modules import shared, extensions, errors
from modules.paths_internal import script_path, config_states_dir
all_config_states = OrderedDict()
def list_config_states():
global all_config_states
all_config_states.clear()
os.makedirs(config_states_dir, exist_ok=True)
config_states = []
for filename in os.listdir(config_states_dir):
if filename.endswith(".json"):
path = os.path.join(config_states_dir, filename)
with open(path, "r", encoding="utf-8") as f:
j = json.load(f)
j["filepath"] = path
config_states.append(j)
config_states = sorted(config_states, key=lambda cs: cs["created_at"], reverse=True)
for cs in config_states:
timestamp = time.asctime(time.gmtime(cs["created_at"]))
name = cs.get("name", "Config")
full_name = f"{name}: {timestamp}"
all_config_states[full_name] = cs
return all_config_states
def get_webui_config():
    webui_repo = None

    try:
        if os.path.exists(os.path.join(script_path, ".git")):
            webui_repo = git.Repo(script_path)
    except Exception:
        errors.report(f"Error reading webui git info from {script_path}", exc_info=True)

    webui_remote = None
    webui_commit_hash = None
    webui_commit_date = None
    webui_branch = None
    if webui_repo and not webui_repo.bare:
        try:
            webui_remote = next(webui_repo.remote().urls, None)
            head = webui_repo.head.commit
            webui_commit_date = webui_repo.head.commit.committed_date
            webui_commit_hash = head.hexsha
            webui_branch = webui_repo.active_branch.name
        except Exception:
            webui_remote = None

    return {
        "remote": webui_remote,
        "commit_hash": webui_commit_hash,
        "commit_date": webui_commit_date,
        "branch": webui_branch,
    }
def get_extension_config():
    ext_config = {}

    for ext in extensions.extensions:
        ext.read_info_from_repo()

        entry = {
            "name": ext.name,
            "path": ext.path,
            "enabled": ext.enabled,
            "is_builtin": ext.is_builtin,
            "remote": ext.remote,
            "commit_hash": ext.commit_hash,
            "commit_date": ext.commit_date,
            "branch": ext.branch,
            "have_info_from_repo": ext.have_info_from_repo
        }

        ext_config[ext.name] = entry

    return ext_config


def get_config():
    creation_time = datetime.now().timestamp()
    webui_config = get_webui_config()
    ext_config = get_extension_config()

    return {
        "created_at": creation_time,
        "webui": webui_config,
        "extensions": ext_config
    }
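For reference, the dictionary get_config() serializes has roughly this shape (all values below are illustrative, not real commits or extensions):

example_config_state = {
    "created_at": 1690000000.0,
    "webui": {
        "remote": "https://github.com/AUTOMATIC1111/stable-diffusion-webui",
        "commit_hash": "0123456789abcdef0123456789abcdef01234567",
        "commit_date": 1689990000,
        "branch": "master",
    },
    "extensions": {
        "some-extension": {
            "name": "some-extension",
            "path": "/path/to/extensions/some-extension",
            "enabled": True,
            "is_builtin": False,
            "remote": "https://github.com/someone/some-extension",
            "commit_hash": "89abcdef0123456789abcdef0123456789abcdef",
            "commit_date": 1689980000,
            "branch": "main",
            "have_info_from_repo": True,
        },
    },
}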
def restore_webui_config(config):
    print("* Restoring webui state...")

    if "webui" not in config:
        print("Error: No webui data saved to config")
        return

    webui_config = config["webui"]

    if "commit_hash" not in webui_config:
        print("Error: No commit saved to webui config")
        return

    webui_commit_hash = webui_config.get("commit_hash", None)
    webui_repo = None

    try:
        if os.path.exists(os.path.join(script_path, ".git")):
            webui_repo = git.Repo(script_path)
    except Exception:
        errors.report(f"Error reading webui git info from {script_path}", exc_info=True)
        return

    try:
        webui_repo.git.fetch(all=True)
        webui_repo.git.reset(webui_commit_hash, hard=True)
        print(f"* Restored webui to commit {webui_commit_hash}.")
    except Exception:
        errors.report(f"Error restoring webui to commit {webui_commit_hash}")
def restore_extension_config(config):
    print("* Restoring extension state...")

    if "extensions" not in config:
        print("Error: No extension data saved to config")
        return

    ext_config = config["extensions"]

    results = []
    disabled = []

    for ext in tqdm.tqdm(extensions.extensions):
        if ext.is_builtin:
            continue

        ext.read_info_from_repo()
        current_commit = ext.commit_hash

        if ext.name not in ext_config:
            ext.disabled = True
            disabled.append(ext.name)
            results.append((ext, current_commit[:8], False, "Saved extension state not found in config, marking as disabled"))
            continue

        entry = ext_config[ext.name]

        if "commit_hash" in entry and entry["commit_hash"]:
            try:
                ext.fetch_and_reset_hard(entry["commit_hash"])
                ext.read_info_from_repo()
                if current_commit != entry["commit_hash"]:
                    results.append((ext, current_commit[:8], True, entry["commit_hash"][:8]))
            except Exception as ex:
                results.append((ext, current_commit[:8], False, ex))
        else:
            results.append((ext, current_commit[:8], False, "No commit hash found in config"))

        if not entry.get("enabled", False):
            ext.disabled = True
            disabled.append(ext.name)
        else:
            ext.disabled = False

    shared.opts.disabled_extensions = disabled
    shared.opts.save(shared.config_filename)

    print("* Finished restoring extensions. Results:")
    for ext, prev_commit, success, result in results:
        if success:
            print(f"  + {ext.name}: {prev_commit} -> {result}")
        else:
            print(f"  ! {ext.name}: FAILURE ({result})")
modules/deepbooru.py
@@ -2,7 +2,6 @@ import os
import re

import torch
-from PIL import Image
import numpy as np

from modules import modelloader, paths, deepbooru_model, devices, images, shared

@@ -20,7 +19,7 @@ class DeepDanbooru:
        files = modelloader.load_models(
            model_path=os.path.join(paths.models_path, "torch_deepdanbooru"),
-            model_url='https://github.com/AUTOMATIC1111/TorchDeepDanbooru/releases/download/v1/model-resnet_custom_v3.pt',
+            model_url='https://ghproxy.com/https://github.com/AUTOMATIC1111/TorchDeepDanbooru/releases/download/v1/model-resnet_custom_v3.pt',
            ext_filter=[".pt"],
            download_name='model-resnet_custom_v3.pt',
        )

@@ -79,7 +78,7 @@ class DeepDanbooru:
        res = []

-        filtertags = set([x.strip().replace(' ', '_') for x in shared.opts.deepbooru_filter_tags.split(",")])
+        filtertags = {x.strip().replace(' ', '_') for x in shared.opts.deepbooru_filter_tags.split(",")}

        for tag in [x for x in tags if x not in filtertags]:
            probability = probability_dict[tag]
modules/deepbooru_model.py
@@ -4,7 +4,7 @@ import torch.nn.functional as F

from modules import devices

-# see https://github.com/AUTOMATIC1111/TorchDeepDanbooru for more
+# see https://ghproxy.com/https://github.com/AUTOMATIC1111/TorchDeepDanbooru for more

class DeepDanbooruModel(nn.Module):
modules/devices.py
@@ -1,5 +1,7 @@
import sys
import contextlib
+from functools import lru_cache
+
import torch
from modules import errors

@@ -13,13 +15,6 @@ def has_mps() -> bool:
    else:
        return mac_specific.has_mps

-def extract_device_id(args, name):
-    for x in range(len(args)):
-        if name in args[x]:
-            return args[x + 1]
-
-    return None

def get_cuda_device_string():
    from modules import shared

@@ -54,18 +49,22 @@ def get_device_for(task):
def torch_gc():
    if torch.cuda.is_available():
        with torch.cuda.device(get_cuda_device_string()):
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()

+    if has_mps():
+        mac_specific.torch_mps_gc()

def enable_tf32():
    if torch.cuda.is_available():

        # enabling benchmark option seems to enable a range of cards to do fp16 when they otherwise can't
-        # see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4407
+        # see https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4407
-        if any([torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())]):
+        if any(torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())):
            torch.backends.cudnn.benchmark = True

        torch.backends.cuda.matmul.allow_tf32 = True
@ -92,14 +91,18 @@ def cond_cast_float(input):
def randn(seed, shape): def randn(seed, shape):
from modules.shared import opts
torch.manual_seed(seed) torch.manual_seed(seed)
if device.type == 'mps': if opts.randn_source == "CPU" or device.type == 'mps':
return torch.randn(shape, device=cpu).to(device) return torch.randn(shape, device=cpu).to(device)
return torch.randn(shape, device=device) return torch.randn(shape, device=device)
def randn_without_seed(shape): def randn_without_seed(shape):
if device.type == 'mps': from modules.shared import opts
if opts.randn_source == "CPU" or device.type == 'mps':
return torch.randn(shape, device=cpu).to(device) return torch.randn(shape, device=cpu).to(device)
return torch.randn(shape, device=device) return torch.randn(shape, device=device)
@ -150,3 +153,19 @@ def test_for_nans(x, where):
message += " Use --disable-nan-check commandline argument to disable this check." message += " Use --disable-nan-check commandline argument to disable this check."
raise NansException(message) raise NansException(message)
@lru_cache
def first_time_calculation():
"""
just do any calculation with pytorch layers - the first time this is done it allocaltes about 700MB of memory and
spends about 2.7 seconds doing that, at least wih NVidia.
"""
x = torch.zeros((1, 1)).to(device, dtype)
linear = torch.nn.Linear(1, 1).to(device, dtype)
linear(x)
x = torch.zeros((1, 1, 3, 3)).to(device, dtype)
conv2d = torch.nn.Conv2d(1, 1, (3, 3)).to(device, dtype)
conv2d(x)
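Note on the randn changes: with opts.randn_source set to "CPU", initial noise is generated on the CPU and only then moved to the device, so a given seed reproduces the same noise regardless of the GPU. A rough illustration of the idea, not webui code:

import torch

torch.manual_seed(1234)
noise = torch.randn((4, 64, 64), device="cpu")   # values depend only on the seed
noise = noise.to("cuda")                         # moving them does not change them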

modules/errors.py
View File

@@ -1,8 +1,42 @@
 import sys
+import textwrap
 import traceback

+exception_records = []
+
+def record_exception():
+    _, e, tb = sys.exc_info()
+    if e is None:
+        return
+
+    if exception_records and exception_records[-1] == e:
+        return
+
+    exception_records.append((e, tb))
+
+    if len(exception_records) > 5:
+        exception_records.pop(0)
+
+def report(message: str, *, exc_info: bool = False) -> None:
+    """
+    Print an error message to stderr, with optional traceback.
+    """
+
+    record_exception()
+
+    for line in message.splitlines():
+        print("***", line, file=sys.stderr)
+
+    if exc_info:
+        print(textwrap.indent(traceback.format_exc(), "    "), file=sys.stderr)
+        print("---", file=sys.stderr)

 def print_error_explanation(message):
+    record_exception()
+
     lines = message.strip().split("\n")
     max_len = max([len(x) for x in lines])
@@ -12,15 +46,21 @@ def print_error_explanation(message):
     print('=' * max_len, file=sys.stderr)

-def display(e: Exception, task):
+def display(e: Exception, task, *, full_traceback=False):
+    record_exception()
+
     print(f"{task or 'error'}: {type(e).__name__}", file=sys.stderr)
-    print(traceback.format_exc(), file=sys.stderr)
+    te = traceback.TracebackException.from_exception(e)
+    if full_traceback:
+        # include frames leading up to the try-catch block
+        te.stack = traceback.StackSummary(traceback.extract_stack()[:-2] + te.stack)
+    print(*te.format(), sep="", file=sys.stderr)

     message = str(e)
     if "copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768])" in message:
         print_error_explanation("""
 The most likely cause of this is you are trying to load Stable Diffusion 2.0 model without specifying its config file.
-See https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20 for how to solve this.
+See https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20 for how to solve this.
         """)
@@ -28,6 +68,8 @@ already_displayed = {}
 def display_once(e: Exception, task):
+    record_exception()
+
     if task in already_displayed:
         return
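A minimal usage sketch of the new report() helper; do_something() is a placeholder:

from modules import errors

try:
    do_something()
except Exception:
    errors.report("Something failed while doing X", exc_info=True)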

modules/esrgan_model.py
View File

@@ -1,24 +1,20 @@
-import os
+import sys

 import numpy as np
 import torch
 from PIL import Image
-from basicsr.utils.download_util import load_file_from_url

 import modules.esrgan_model_arch as arch
-from modules import shared, modelloader, images, devices
+from modules import modelloader, images, devices
-from modules.upscaler import Upscaler, UpscalerData
 from modules.shared import opts
+from modules.upscaler import Upscaler, UpscalerData

 def mod2normal(state_dict):
-    # this code is copied from https://github.com/victorca25/iNNfer
+    # this code is copied from https://ghproxy.com/https://github.com/victorca25/iNNfer
     if 'conv_first.weight' in state_dict:
         crt_net = {}
-        items = []
-        for k, v in state_dict.items():
-            items.append(k)
+        items = list(state_dict)

         crt_net['model.0.weight'] = state_dict['conv_first.weight']
         crt_net['model.0.bias'] = state_dict['conv_first.bias']
@@ -48,13 +44,11 @@ def mod2normal(state_dict):
 def resrgan2normal(state_dict, nb=23):
-    # this code is copied from https://github.com/victorca25/iNNfer
+    # this code is copied from https://ghproxy.com/https://github.com/victorca25/iNNfer
     if "conv_first.weight" in state_dict and "body.0.rdb1.conv1.weight" in state_dict:
         re8x = 0
         crt_net = {}
-        items = []
-        for k, v in state_dict.items():
-            items.append(k)
+        items = list(state_dict)

         crt_net['model.0.weight'] = state_dict['conv_first.weight']
         crt_net['model.0.bias'] = state_dict['conv_first.bias']
@@ -78,7 +72,7 @@ def resrgan2normal(state_dict, nb=23):
         crt_net['model.6.bias'] = state_dict['conv_up2.bias']
         if 'conv_up3.weight' in state_dict:
-            # modification supporting: https://github.com/ai-forever/Real-ESRGAN/blob/main/RealESRGAN/rrdbnet_arch.py
+            # modification supporting: https://ghproxy.com/https://github.com/ai-forever/Real-ESRGAN/blob/main/RealESRGAN/rrdbnet_arch.py
             re8x = 3
             crt_net['model.9.weight'] = state_dict['conv_up3.weight']
             crt_net['model.9.bias'] = state_dict['conv_up3.bias']
@@ -93,7 +87,7 @@ def resrgan2normal(state_dict, nb=23):
 def infer_params(state_dict):
-    # this code is copied from https://github.com/victorca25/iNNfer
+    # this code is copied from https://ghproxy.com/https://github.com/victorca25/iNNfer
     scale2x = 0
     scalemin = 6
     n_uplayer = 0
@@ -127,7 +121,7 @@ def infer_params(state_dict):
 class UpscalerESRGAN(Upscaler):
     def __init__(self, dirname):
         self.name = "ESRGAN"
-        self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/ESRGAN.pth"
+        self.model_url = "https://ghproxy.com/https://github.com/cszn/KAIR/releases/download/v1.0/ESRGAN.pth"
         self.model_name = "ESRGAN_4x"
         self.scalers = []
         self.user_path = dirname
@@ -138,7 +132,7 @@ class UpscalerESRGAN(Upscaler):
         scaler_data = UpscalerData(self.model_name, self.model_url, self, 4)
         scalers.append(scaler_data)
         for file in model_paths:
-            if "http" in file:
+            if file.startswith("http"):
                 name = self.model_name
             else:
                 name = modelloader.friendly_name(file)
@@ -147,23 +141,25 @@ class UpscalerESRGAN(Upscaler):
             self.scalers.append(scaler_data)

     def do_upscale(self, img, selected_model):
-        model = self.load_model(selected_model)
-        if model is None:
+        try:
+            model = self.load_model(selected_model)
+        except Exception as e:
+            print(f"Unable to load ESRGAN model {selected_model}: {e}", file=sys.stderr)
             return img
         model.to(devices.device_esrgan)
         img = esrgan_upscale(model, img)
         return img

     def load_model(self, path: str):
-        if "http" in path:
-            filename = load_file_from_url(url=self.model_url, model_dir=self.model_path,
-                                          file_name="%s.pth" % self.model_name,
-                                          progress=True)
+        if path.startswith("http"):
+            # TODO: this doesn't use `path` at all?
+            filename = modelloader.load_file_from_url(
+                url=self.model_url,
+                model_dir=self.model_download_path,
+                file_name=f"{self.model_name}.pth",
+            )
         else:
             filename = path
-        if not os.path.exists(filename) or filename is None:
-            print("Unable to load %s from %s" % (self.model_path, filename))
-            return None

         state_dict = torch.load(filename, map_location='cpu' if devices.device_esrgan.type == 'mps' else None)
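The startswith() fixes here are not just stylistic: a purely local path can contain the substring "http". An illustration with a made-up filename:

path = "/models/ESRGAN/my_httpx_mirror.pth"
"http" in path            # True: the old check would treat this local file as a URL
path.startswith("http")   # False: the new check classifies it correctly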

modules/esrgan_model_arch.py
View File

@@ -1,8 +1,7 @@
-# this file is adapted from https://github.com/victorca25/iNNfer
+# this file is adapted from https://ghproxy.com/https://github.com/victorca25/iNNfer

 from collections import OrderedDict
 import math
-import functools
 import torch
 import torch.nn as nn
 import torch.nn.functional as F
@@ -38,7 +37,7 @@ class RRDBNet(nn.Module):
         elif upsample_mode == 'pixelshuffle':
             upsample_block = pixelshuffle_block
         else:
-            raise NotImplementedError('upsample mode [{:s}] is not found'.format(upsample_mode))
+            raise NotImplementedError(f'upsample mode [{upsample_mode}] is not found')
         if upscale == 3:
             upsampler = upsample_block(nf, nf, 3, act_type=act_type, convtype=convtype)
         else:
@@ -183,7 +182,7 @@ def conv1x1(in_planes, out_planes, stride=1):
 class SRVGGNetCompact(nn.Module):
     """A compact VGG-style network structure for super-resolution.
-    This class is copied from https://github.com/xinntao/Real-ESRGAN
+    This class is copied from https://ghproxy.com/https://github.com/xinntao/Real-ESRGAN
     """

     def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'):
@@ -261,10 +260,10 @@ class Upsample(nn.Module):
     def extra_repr(self):
         if self.scale_factor is not None:
-            info = 'scale_factor=' + str(self.scale_factor)
+            info = f'scale_factor={self.scale_factor}'
         else:
-            info = 'size=' + str(self.size)
+            info = f'size={self.size}'
-        info += ', mode=' + self.mode
+        info += f', mode={self.mode}'
         return info
@@ -350,7 +349,7 @@ def act(act_type, inplace=True, neg_slope=0.2, n_prelu=1, beta=1.0):
     elif act_type == 'sigmoid':  # [0, 1] range output
         layer = nn.Sigmoid()
     else:
-        raise NotImplementedError('activation layer [{:s}] is not found'.format(act_type))
+        raise NotImplementedError(f'activation layer [{act_type}] is not found')
     return layer
@@ -372,7 +371,7 @@ def norm(norm_type, nc):
     elif norm_type == 'none':
         def norm_layer(x): return Identity()
     else:
-        raise NotImplementedError('normalization layer [{:s}] is not found'.format(norm_type))
+        raise NotImplementedError(f'normalization layer [{norm_type}] is not found')
     return layer
@@ -388,7 +387,7 @@ def pad(pad_type, padding):
     elif pad_type == 'zero':
         layer = nn.ZeroPad2d(padding)
     else:
-        raise NotImplementedError('padding layer [{:s}] is not implemented'.format(pad_type))
+        raise NotImplementedError(f'padding layer [{pad_type}] is not implemented')
     return layer
@@ -432,15 +431,17 @@ def conv_block(in_nc, out_nc, kernel_size, stride=1, dilation=1, groups=1, bias=
                pad_type='zero', norm_type=None, act_type='relu', mode='CNA', convtype='Conv2D',
                spectral_norm=False):
     """ Conv layer with padding, normalization, activation """
-    assert mode in ['CNA', 'NAC', 'CNAC'], 'Wrong conv mode [{:s}]'.format(mode)
+    assert mode in ['CNA', 'NAC', 'CNAC'], f'Wrong conv mode [{mode}]'
     padding = get_valid_padding(kernel_size, dilation)
     p = pad(pad_type, padding) if pad_type and pad_type != 'zero' else None
     padding = padding if pad_type == 'zero' else 0

     if convtype=='PartialConv2D':
+        from torchvision.ops import PartialConv2d  # this is definitely not going to work, but PartialConv2d doesn't work anyway and this shuts up static analyzer
         c = PartialConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
                           dilation=dilation, bias=bias, groups=groups)
     elif convtype=='DeformConv2D':
+        from torchvision.ops import DeformConv2d  # not tested
         c = DeformConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
                          dilation=dilation, bias=bias, groups=groups)
     elif convtype=='Conv3D':

modules/extensions.py
View File

@@ -1,17 +1,13 @@
 import os
-import sys
-import traceback
-import time
+import threading

-import git
+from modules import shared, errors, cache
+from modules.gitpython_hack import Repo
+from modules.paths_internal import extensions_dir, extensions_builtin_dir, script_path  # noqa: F401

-from modules import shared
-from modules.paths_internal import extensions_dir, extensions_builtin_dir

 extensions = []

-if not os.path.exists(extensions_dir):
-    os.makedirs(extensions_dir)
+os.makedirs(extensions_dir, exist_ok=True)

 def active():
@@ -24,6 +20,9 @@ def active():
 class Extension:
+    lock = threading.Lock()
+    cached_fields = ['remote', 'commit_date', 'branch', 'commit_hash', 'version']
+
     def __init__(self, name, path, enabled=True, is_builtin=False):
         self.name = name
         self.path = path
@@ -31,37 +30,65 @@ class Extension:
         self.status = ''
         self.can_update = False
         self.is_builtin = is_builtin
+        self.commit_hash = ''
+        self.commit_date = None
         self.version = ''
+        self.branch = None
         self.remote = None
         self.have_info_from_repo = False

+    def to_dict(self):
+        return {x: getattr(self, x) for x in self.cached_fields}
+
+    def from_dict(self, d):
+        for field in self.cached_fields:
+            setattr(self, field, d[field])

     def read_info_from_repo(self):
-        if self.have_info_from_repo:
-            return
-
-        self.have_info_from_repo = True
+        if self.is_builtin or self.have_info_from_repo:
+            return
+
+        def read_from_repo():
+            with self.lock:
+                if self.have_info_from_repo:
+                    return
+
+                self.do_read_info_from_repo()
+
+                return self.to_dict()
+
+        try:
+            d = cache.cached_data_for_file('extensions-git', self.name, os.path.join(self.path, ".git"), read_from_repo)
+            self.from_dict(d)
+        except FileNotFoundError:
+            pass
+        self.status = 'unknown' if self.status == '' else self.status

+    def do_read_info_from_repo(self):
         repo = None
         try:
             if os.path.exists(os.path.join(self.path, ".git")):
-                repo = git.Repo(self.path)
+                repo = Repo(self.path)
         except Exception:
-            print(f"Error reading github repository info from {self.path}:", file=sys.stderr)
-            print(traceback.format_exc(), file=sys.stderr)
+            errors.report(f"Error reading github repository info from {self.path}", exc_info=True)

         if repo is None or repo.bare:
             self.remote = None
         else:
             try:
-                self.status = 'unknown'
                 self.remote = next(repo.remote().urls, None)
-                head = repo.head.commit
-                ts = time.asctime(time.gmtime(repo.head.commit.committed_date))
-                self.version = f'{head.hexsha[:8]} ({ts})'
+                commit = repo.head.commit
+                self.commit_date = commit.committed_date
+                if repo.active_branch:
+                    self.branch = repo.active_branch.name
+                self.commit_hash = commit.hexsha
+                self.version = self.commit_hash[:8]
             except Exception:
+                errors.report(f"Failed reading extension data from Git repository ({self.name})", exc_info=True)
                 self.remote = None
+
+        self.have_info_from_repo = True

     def list_files(self, subdir, extension):
         from modules import scripts
@@ -78,22 +105,34 @@ class Extension:
         return res

     def check_updates(self):
-        repo = git.Repo(self.path)
+        repo = Repo(self.path)
         for fetch in repo.remote().fetch(dry_run=True):
             if fetch.flags != fetch.HEAD_UPTODATE:
                 self.can_update = True
-                self.status = "behind"
+                self.status = "new commits"
+                return
+
+        try:
+            origin = repo.rev_parse('origin')
+            if repo.head.commit != origin:
+                self.can_update = True
+                self.status = "behind HEAD"
+                return
+        except Exception:
+            self.can_update = False
+            self.status = "unknown (remote error)"
             return

         self.can_update = False
         self.status = "latest"

-    def fetch_and_reset_hard(self):
-        repo = git.Repo(self.path)
+    def fetch_and_reset_hard(self, commit='origin'):
+        repo = Repo(self.path)
         # Fix: `error: Your local changes to the following files would be overwritten by merge`,
         # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.
         repo.git.fetch(all=True)
-        repo.git.reset('origin', hard=True)
+        repo.git.reset(commit, hard=True)
+        self.have_info_from_repo = False

 def list_extensions():
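A rough sketch of the round-trip the new caching relies on; the extension name and path are made up:

ext = Extension(name="my-extension", path="extensions/my-extension")
ext.read_info_from_repo()    # first call shells out to git and caches to_dict()
d = ext.to_dict()            # {'remote': ..., 'commit_date': ..., 'branch': ..., 'commit_hash': ..., 'version': ...}

restored = Extension(name="my-extension", path="extensions/my-extension")
restored.from_dict(d)        # later runs restore the fields without touching git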

modules/extra_networks.py
View File

@@ -4,19 +4,42 @@ from collections import defaultdict
 from modules import errors

 extra_network_registry = {}
+extra_network_aliases = {}

 def initialize():
     extra_network_registry.clear()
+    extra_network_aliases.clear()

 def register_extra_network(extra_network):
     extra_network_registry[extra_network.name] = extra_network

+def register_extra_network_alias(extra_network, alias):
+    extra_network_aliases[alias] = extra_network
+
+def register_default_extra_networks():
+    from modules.extra_networks_hypernet import ExtraNetworkHypernet
+    register_extra_network(ExtraNetworkHypernet())

 class ExtraNetworkParams:
     def __init__(self, items=None):
         self.items = items or []
+        self.positional = []
+        self.named = {}
+
+        for item in self.items:
+            parts = item.split('=', 2) if isinstance(item, str) else [item]
+            if len(parts) == 2:
+                self.named[parts[0]] = parts[1]
+            else:
+                self.positional.append(item)
+
+    def __eq__(self, other):
+        return self.items == other.items

 class ExtraNetwork:
@@ -65,20 +88,26 @@ def activate(p, extra_network_data):
     """call activate for extra networks in extra_network_data in specified order, then call
     activate for all remaining registered networks with an empty argument list"""

+    activated = []
+
     for extra_network_name, extra_network_args in extra_network_data.items():
         extra_network = extra_network_registry.get(extra_network_name, None)
+
+        if extra_network is None:
+            extra_network = extra_network_aliases.get(extra_network_name, None)
+
         if extra_network is None:
             print(f"Skipping unknown extra network: {extra_network_name}")
             continue

         try:
             extra_network.activate(p, extra_network_args)
+            activated.append(extra_network)
         except Exception as e:
             errors.display(e, f"activating extra network {extra_network_name} with arguments {extra_network_args}")

     for extra_network_name, extra_network in extra_network_registry.items():
-        args = extra_network_data.get(extra_network_name, None)
-        if args is not None:
+        if extra_network in activated:
             continue

         try:
@@ -86,12 +115,15 @@ def activate(p, extra_network_data):
         except Exception as e:
             errors.display(e, f"activating extra network {extra_network_name}")

+    if p.scripts is not None:
+        p.scripts.after_extra_networks_activate(p, batch_number=p.iteration, prompts=p.prompts, seeds=p.seeds, subseeds=p.subseeds, extra_network_data=extra_network_data)

 def deactivate(p, extra_network_data):
     """call deactivate for extra networks in extra_network_data in specified order, then call
     deactivate for all remaining registered networks"""

-    for extra_network_name, extra_network_args in extra_network_data.items():
+    for extra_network_name in extra_network_data:
         extra_network = extra_network_registry.get(extra_network_name, None)
         if extra_network is None:
             continue
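How the new argument parsing behaves, with made-up values:

params = ExtraNetworkParams(items=["mynet", "0.8", "te=0.5"])
params.positional   # ['mynet', '0.8']
params.named        # {'te': '0.5'}

Note the split('=', 2): an item like "a=b=c" splits into three parts, so it stays positional rather than becoming a named argument.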

modules/extra_networks_hypernet.py
View File

@@ -1,4 +1,4 @@
-from modules import extra_networks, shared, extra_networks
+from modules import extra_networks, shared
 from modules.hypernetworks import hypernetwork
@@ -9,14 +9,15 @@ class ExtraNetworkHypernet(extra_networks.ExtraNetwork):
     def activate(self, p, params_list):
         additional = shared.opts.sd_hypernetwork

-        if additional != "" and additional in shared.hypernetworks and len([x for x in params_list if x.items[0] == additional]) == 0:
-            p.all_prompts = [x + f"<hypernet:{additional}:{shared.opts.extra_networks_default_multiplier}>" for x in p.all_prompts]
+        if additional != "None" and additional in shared.hypernetworks and not any(x for x in params_list if x.items[0] == additional):
+            hypernet_prompt_text = f"<hypernet:{additional}:{shared.opts.extra_networks_default_multiplier}>"
+            p.all_prompts = [f"{prompt}{hypernet_prompt_text}" for prompt in p.all_prompts]
             params_list.append(extra_networks.ExtraNetworkParams(items=[additional, shared.opts.extra_networks_default_multiplier]))

         names = []
         multipliers = []
         for params in params_list:
-            assert len(params.items) > 0
+            assert params.items

             names.append(params.items[0])
             multipliers.append(float(params.items[1]) if len(params.items) > 1 else 1.0)

modules/extras.py
View File

@@ -1,6 +1,7 @@
 import os
 import re
 import shutil
+import json

 import torch
@@ -71,9 +72,8 @@ def to_half(tensor, enable):
     return tensor

-def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights):
-    shared.state.begin()
-    shared.state.job = 'model-merge'
+def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights, save_metadata):
+    shared.state.begin(job="model-merge")

     def fail(message):
         shared.state.textinfo = message
@@ -135,14 +135,14 @@ def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_
     result_is_instruct_pix2pix_model = False

     if theta_func2:
-        shared.state.textinfo = f"Loading B"
+        shared.state.textinfo = "Loading B"
         print(f"Loading {secondary_model_info.filename}...")
         theta_1 = sd_models.read_state_dict(secondary_model_info.filename, map_location='cpu')
     else:
         theta_1 = None

     if theta_func1:
-        shared.state.textinfo = f"Loading C"
+        shared.state.textinfo = "Loading C"
         print(f"Loading {tertiary_model_info.filename}...")
         theta_2 = sd_models.read_state_dict(tertiary_model_info.filename, map_location='cpu')
@@ -241,13 +241,58 @@ def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_
     shared.state.textinfo = "Saving"
     print(f"Saving to {output_modelname}...")

+    metadata = None
+
+    if save_metadata:
+        metadata = {"format": "pt"}
+
+        merge_recipe = {
+            "type": "webui", # indicate this model was merged with webui's built-in merger
+            "primary_model_hash": primary_model_info.sha256,
+            "secondary_model_hash": secondary_model_info.sha256 if secondary_model_info else None,
+            "tertiary_model_hash": tertiary_model_info.sha256 if tertiary_model_info else None,
+            "interp_method": interp_method,
+            "multiplier": multiplier,
+            "save_as_half": save_as_half,
+            "custom_name": custom_name,
+            "config_source": config_source,
+            "bake_in_vae": bake_in_vae,
+            "discard_weights": discard_weights,
+            "is_inpainting": result_is_inpainting_model,
+            "is_instruct_pix2pix": result_is_instruct_pix2pix_model
+        }
+        metadata["sd_merge_recipe"] = json.dumps(merge_recipe)
+
+        sd_merge_models = {}
+
+        def add_model_metadata(checkpoint_info):
+            checkpoint_info.calculate_shorthash()
+            sd_merge_models[checkpoint_info.sha256] = {
+                "name": checkpoint_info.name,
+                "legacy_hash": checkpoint_info.hash,
+                "sd_merge_recipe": checkpoint_info.metadata.get("sd_merge_recipe", None)
+            }
+
+            sd_merge_models.update(checkpoint_info.metadata.get("sd_merge_models", {}))
+
+        add_model_metadata(primary_model_info)
+        if secondary_model_info:
+            add_model_metadata(secondary_model_info)
+        if tertiary_model_info:
+            add_model_metadata(tertiary_model_info)
+
+        metadata["sd_merge_models"] = json.dumps(sd_merge_models)
+
     _, extension = os.path.splitext(output_modelname)
     if extension.lower() == ".safetensors":
-        safetensors.torch.save_file(theta_0, output_modelname, metadata={"format": "pt"})
+        safetensors.torch.save_file(theta_0, output_modelname, metadata=metadata)
     else:
         torch.save(theta_0, output_modelname)

     sd_models.list_models()
+    created_model = next((ckpt for ckpt in sd_models.checkpoints_list.values() if ckpt.name == filename), None)
+    if created_model:
+        created_model.calculate_shorthash()

     create_config(output_modelname, config_source, primary_model_info, secondary_model_info, tertiary_model_info)
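The recipe can be read back out of a merged checkpoint with the safetensors metadata API; a sketch, with a placeholder path:

import json
from safetensors import safe_open

with safe_open("merged-model.safetensors", framework="pt") as f:
    meta = f.metadata() or {}

recipe = json.loads(meta["sd_merge_recipe"]) if "sd_merge_recipe" in meta else None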

modules/generation_parameters_copypaste.py
View File

@@ -1,15 +1,12 @@
 import base64
-import html
 import io
-import math
+import json
 import os
 import re
-from pathlib import Path

 import gradio as gr
 from modules.paths import data_path
 from modules import shared, ui_tempdir, script_callbacks
-import tempfile
 from PIL import Image

 re_param_code = r'\s*([\w ]+):\s*("(?:\\"[^,]|\\"|\\|[^\"])+"|[^,]*)(?:,|$)'
@@ -23,14 +20,14 @@ registered_param_bindings = []
 class ParamBinding:
-    def __init__(self, paste_button, tabname, source_text_component=None, source_image_component=None, source_tabname=None, override_settings_component=None, paste_field_names=[]):
+    def __init__(self, paste_button, tabname, source_text_component=None, source_image_component=None, source_tabname=None, override_settings_component=None, paste_field_names=None):
         self.paste_button = paste_button
         self.tabname = tabname
         self.source_text_component = source_text_component
         self.source_image_component = source_image_component
         self.source_tabname = source_tabname
         self.override_settings_component = override_settings_component
-        self.paste_field_names = paste_field_names
+        self.paste_field_names = paste_field_names or []

 def reset():
@@ -38,20 +35,27 @@ def reset():
 def quote(text):
-    if ',' not in str(text):
+    if ',' not in str(text) and '\n' not in str(text) and ':' not in str(text):
         return text

-    text = str(text)
-    text = text.replace('\\', '\\\\')
-    text = text.replace('"', '\\"')
-    return f'"{text}"'
+    return json.dumps(text, ensure_ascii=False)
+
+def unquote(text):
+    if len(text) == 0 or text[0] != '"' or text[-1] != '"':
+        return text
+
+    try:
+        return json.loads(text)
+    except Exception:
+        return text

 def image_from_url_text(filedata):
     if filedata is None:
         return None

-    if type(filedata) == list and len(filedata) > 0 and type(filedata[0]) == dict and filedata[0].get("is_file", False):
+    if type(filedata) == list and filedata and type(filedata[0]) == dict and filedata[0].get("is_file", False):
         filedata = filedata[0]

     if type(filedata) == dict and filedata.get("is_file", False):
@@ -59,6 +63,7 @@ def image_from_url_text(filedata):
         is_in_right_dir = ui_tempdir.check_tmp_file(shared.demo, filename)
         assert is_in_right_dir, 'trying to open image file outside of allowed directories'

+        filename = filename.rsplit('?', 1)[0]
         return Image.open(filename)

     if type(filedata) == list:
@@ -129,6 +134,7 @@ def connect_paste_params_buttons():
                 _js=jsfunc,
                 inputs=[binding.source_image_component],
                 outputs=[destination_image_component, destination_width_component, destination_height_component] if destination_width_component else [destination_image_component],
+                show_progress=False,
             )

         if binding.source_text_component is not None and fields is not None:
@@ -140,6 +146,7 @@ def connect_paste_params_buttons():
                 fn=lambda *x: x,
                 inputs=[field for field, name in paste_fields[binding.source_tabname]["fields"] if name in paste_field_names],
                 outputs=[field for field, name in fields if name in paste_field_names],
+                show_progress=False,
             )

         binding.paste_button.click(
@@ -147,6 +154,7 @@ def connect_paste_params_buttons():
             _js=f"switch_to_{binding.tabname}",
             inputs=None,
             outputs=None,
+            show_progress=False,
         )
@@ -166,31 +174,6 @@ def send_image_and_dimensions(x):
     return img, w, h

-def find_hypernetwork_key(hypernet_name, hypernet_hash=None):
-    """Determines the config parameter name to use for the hypernet based on the parameters in the infotext.
-
-    Example: an infotext provides "Hypernet: ke-ta" and "Hypernet hash: 1234abcd". For the "Hypernet" config
-    parameter this means there should be an entry that looks like "ke-ta-10000(1234abcd)" to set it to.
-
-    If the infotext has no hash, then a hypernet with the same name will be selected instead.
-    """
-    hypernet_name = hypernet_name.lower()
-    if hypernet_hash is not None:
-        # Try to match the hash in the name
-        for hypernet_key in shared.hypernetworks.keys():
-            result = re_hypernet_hash.search(hypernet_key)
-            if result is not None and result[1] == hypernet_hash:
-                return hypernet_key
-    else:
-        # Fall back to a hypernet with the same name
-        for hypernet_key in shared.hypernetworks.keys():
-            if hypernet_key.lower().startswith(hypernet_name):
-                return hypernet_key
-
-    return None

 def restore_old_hires_fix_params(res):
     """for infotexts that specify old First pass size parameter, convert it into
     width, height, and hr scale"""
@@ -247,28 +230,40 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
         lines.append(lastline)
         lastline = ''

-    for i, line in enumerate(lines):
+    for line in lines:
         line = line.strip()
         if line.startswith("Negative prompt:"):
             done_with_prompt = True
             line = line[16:].strip()
         if done_with_prompt:
             negative_prompt += ("" if negative_prompt == "" else "\n") + line
         else:
             prompt += ("" if prompt == "" else "\n") + line

+    if shared.opts.infotext_styles != "Ignore":
+        found_styles, prompt, negative_prompt = shared.prompt_styles.extract_styles_from_prompt(prompt, negative_prompt)
+
+        if shared.opts.infotext_styles == "Apply":
+            res["Styles array"] = found_styles
+        elif shared.opts.infotext_styles == "Apply if any" and found_styles:
+            res["Styles array"] = found_styles
+
     res["Prompt"] = prompt
     res["Negative prompt"] = negative_prompt

     for k, v in re_param.findall(lastline):
-        v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
-        m = re_imagesize.match(v)
-        if m is not None:
-            res[k+"-1"] = m.group(1)
-            res[k+"-2"] = m.group(2)
-        else:
-            res[k] = v
+        try:
+            if v[0] == '"' and v[-1] == '"':
+                v = unquote(v)
+
+            m = re_imagesize.match(v)
+            if m is not None:
+                res[f"{k}-1"] = m.group(1)
+                res[f"{k}-2"] = m.group(2)
+            else:
+                res[k] = v
+        except Exception:
+            print(f"Error parsing \"{k}: {v}\"")

     # Missing CLIP skip means it was set to 1 (the default)
     if "Clip skip" not in res:
@@ -282,20 +277,45 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
         res["Hires resize-1"] = 0
         res["Hires resize-2"] = 0

+    if "Hires sampler" not in res:
+        res["Hires sampler"] = "Use same sampler"
+
+    if "Hires prompt" not in res:
+        res["Hires prompt"] = ""
+
+    if "Hires negative prompt" not in res:
+        res["Hires negative prompt"] = ""
+
     restore_old_hires_fix_params(res)

+    # Missing RNG means the default was set, which is GPU RNG
+    if "RNG" not in res:
+        res["RNG"] = "GPU"
+
+    if "Schedule type" not in res:
+        res["Schedule type"] = "Automatic"
+
+    if "Schedule max sigma" not in res:
+        res["Schedule max sigma"] = 0
+
+    if "Schedule min sigma" not in res:
+        res["Schedule min sigma"] = 0
+
+    if "Schedule rho" not in res:
+        res["Schedule rho"] = 0
+
     return res

 settings_map = {}

 infotext_to_setting_name_mapping = [
     ('Clip skip', 'CLIP_stop_at_last_layers', ),
     ('Conditional mask weight', 'inpainting_mask_weight'),
     ('Model hash', 'sd_model_checkpoint'),
     ('ENSD', 'eta_noise_seed_delta'),
+    ('Schedule type', 'k_sched_type'),
+    ('Schedule max sigma', 'sigma_max'),
+    ('Schedule min sigma', 'sigma_min'),
+    ('Schedule rho', 'rho'),
     ('Noise multiplier', 'initial_noise_multiplier'),
     ('Eta', 'eta_ancestral'),
     ('Eta DDIM', 'eta_ddim'),
@@ -304,6 +324,11 @@ infotext_to_setting_name_mapping = [
     ('UniPC skip type', 'uni_pc_skip_type'),
     ('UniPC order', 'uni_pc_order'),
     ('UniPC lower order final', 'uni_pc_lower_order_final'),
+    ('Token merging ratio', 'token_merging_ratio'),
+    ('Token merging ratio hr', 'token_merging_ratio_hr'),
+    ('RNG', 'randn_source'),
+    ('NGMS', 's_min_uncond'),
+    ('Pad conds', 'pad_cond_uncond'),
 ]
@@ -395,7 +420,7 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
     vals_pairs = [f"{k}: {v}" for k, v in vals.items()]

-    return gr.Dropdown.update(value=vals_pairs, choices=vals_pairs, visible=len(vals_pairs) > 0)
+    return gr.Dropdown.update(value=vals_pairs, choices=vals_pairs, visible=bool(vals_pairs))

     paste_fields = paste_fields + [(override_settings_component, paste_settings)]
@@ -403,12 +428,12 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
     button.click(
         fn=paste_func,
         inputs=[input_comp],
         outputs=[x[0] for x in paste_fields],
+        show_progress=False,
     )
     button.click(
         fn=None,
         _js=f"recalculate_prompts_{tabname}",
         inputs=[],
         outputs=[],
+        show_progress=False,
     )
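Round-trip behavior of the new quote()/unquote() pair:

quote("plain value")                       # unchanged: no comma, colon, or newline
q = quote("a prompt, with: punctuation")   # JSON-encoded: '"a prompt, with: punctuation"'
unquote(q)                                 # 'a prompt, with: punctuation'
unquote("not quoted")                      # unchanged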

modules/gfpgan_model.py
View File

@@ -1,17 +1,15 @@
 import os
-import sys
-import traceback

 import facexlib
 import gfpgan

 import modules.face_restoration
-from modules import paths, shared, devices, modelloader
+from modules import paths, shared, devices, modelloader, errors

 model_dir = "GFPGAN"
 user_path = None
 model_path = os.path.join(paths.models_path, model_dir)
-model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth"
+model_url = "https://ghproxy.com/https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth"
 have_gfpgan = False
 loaded_gfpgan_model = None
@@ -27,7 +25,7 @@ def gfpgann():
         return None

     models = modelloader.load_models(model_path, model_url, user_path, ext_filter="GFPGAN")
-    if len(models) == 1 and "http" in models[0]:
+    if len(models) == 1 and models[0].startswith("http"):
         model_file = models[0]
     elif len(models) != 0:
         latest_file = max(models, key=os.path.getctime)
@@ -72,13 +70,10 @@ gfpgan_constructor = None
 def setup_model(dirname):
-    global model_path
-    if not os.path.exists(model_path):
-        os.makedirs(model_path)
-
     try:
+        os.makedirs(model_path, exist_ok=True)
         from gfpgan import GFPGANer
-        from facexlib import detection, parsing
+        from facexlib import detection, parsing  # noqa: F401
         global user_path
         global have_gfpgan
         global gfpgan_constructor
@@ -112,5 +107,4 @@ def setup_model(dirname):
         shared.face_restorers.append(FaceRestorerGFPGAN())
     except Exception:
-        print("Error setting up GFPGAN:", file=sys.stderr)
-        print(traceback.format_exc(), file=sys.stderr)
+        errors.report("Error setting up GFPGAN", exc_info=True)

modules/gitpython_hack.py Normal file (42 lines)
View File

@@ -0,0 +1,42 @@
+from __future__ import annotations
+
+import io
+import subprocess
+
+import git
+
+class Git(git.Git):
+    """
+    Git subclassed to never use persistent processes.
+    """
+
+    def _get_persistent_cmd(self, attr_name, cmd_name, *args, **kwargs):
+        raise NotImplementedError(f"Refusing to use persistent process: {attr_name} ({cmd_name} {args} {kwargs})")
+
+    def get_object_header(self, ref: str | bytes) -> tuple[str, str, int]:
+        ret = subprocess.check_output(
+            [self.GIT_PYTHON_GIT_EXECUTABLE, "cat-file", "--batch-check"],
+            input=self._prepare_ref(ref),
+            cwd=self._working_dir,
+            timeout=2,
+        )
+        return self._parse_object_header(ret)
+
+    def stream_object_data(self, ref: str) -> tuple[str, str, int, "Git.CatFileContentStream"]:
+        # Not really streaming, per se; this buffers the entire object in memory.
+        # Shouldn't be a problem for our use case, since we're only using this for
+        # object headers (commit objects).
+        ret = subprocess.check_output(
+            [self.GIT_PYTHON_GIT_EXECUTABLE, "cat-file", "--batch"],
+            input=self._prepare_ref(ref),
+            cwd=self._working_dir,
+            timeout=30,
+        )
+        bio = io.BytesIO(ret)
+        hexsha, typename, size = self._parse_object_header(bio.readline())
+        return (hexsha, typename, size, self.CatFileContentStream(size, bio))
+
+class Repo(git.Repo):
+    GitCommandWrapperType = Git
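Usage is unchanged from GitPython; only the process model differs. A sketch with a placeholder path:

from modules.gitpython_hack import Repo

repo = Repo("extensions/some-extension")
print(repo.head.commit.hexsha)   # same API, but each cat-file query runs as a
                                 # one-shot subprocess instead of a persistent
                                 # `git cat-file --batch` child process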

modules/hashes.py
View File

@@ -1,38 +1,11 @@
 import hashlib
-import json
 import os.path

-import filelock

 from modules import shared
-from modules.paths import data_path
+import modules.cache

-cache_filename = os.path.join(data_path, "cache.json")
-cache_data = None
-
-def dump_cache():
-    with filelock.FileLock(cache_filename+".lock"):
-        with open(cache_filename, "w", encoding="utf8") as file:
-            json.dump(cache_data, file, indent=4)
-
-def cache(subsection):
-    global cache_data
-
-    if cache_data is None:
-        with filelock.FileLock(cache_filename+".lock"):
-            if not os.path.isfile(cache_filename):
-                cache_data = {}
-            else:
-                with open(cache_filename, "r", encoding="utf8") as file:
-                    cache_data = json.load(file)
-
-    s = cache_data.get(subsection, {})
-    cache_data[subsection] = s
-
-    return s
+dump_cache = modules.cache.dump_cache
+cache = modules.cache.cache

 def calculate_sha256(filename):
@@ -46,8 +19,8 @@ def calculate_sha256(filename):
     return hash_sha256.hexdigest()

-def sha256_from_cache(filename, title):
-    hashes = cache("hashes")
+def sha256_from_cache(filename, title, use_addnet_hash=False):
+    hashes = cache("hashes-addnet") if use_addnet_hash else cache("hashes")
     ondisk_mtime = os.path.getmtime(filename)

     if title not in hashes:
@@ -62,10 +35,10 @@ def sha256_from_cache(filename, title):
     return cached_sha256

-def sha256(filename, title):
-    hashes = cache("hashes")
+def sha256(filename, title, use_addnet_hash=False):
+    hashes = cache("hashes-addnet") if use_addnet_hash else cache("hashes")

-    sha256_value = sha256_from_cache(filename, title)
+    sha256_value = sha256_from_cache(filename, title, use_addnet_hash)
     if sha256_value is not None:
         return sha256_value
@@ -73,6 +46,10 @@ def sha256(filename, title):
         return None

     print(f"Calculating sha256 for {filename}: ", end='')
-    sha256_value = calculate_sha256(filename)
+    if use_addnet_hash:
+        with open(filename, "rb") as file:
+            sha256_value = addnet_hash_safetensors(file)
+    else:
+        sha256_value = calculate_sha256(filename)
     print(f"{sha256_value}")
@@ -86,6 +63,19 @@ def sha256(filename, title):
     return sha256_value

+def addnet_hash_safetensors(b):
+    """kohya-ss hash for safetensors from https://ghproxy.com/https://github.com/kohya-ss/sd-scripts/blob/main/library/train_util.py"""
+    hash_sha256 = hashlib.sha256()
+    blksize = 1024 * 1024
+
+    b.seek(0)
+    header = b.read(8)
+    n = int.from_bytes(header, "little")
+
+    offset = n + 8
+    b.seek(offset)
+    for chunk in iter(lambda: b.read(blksize), b""):
+        hash_sha256.update(chunk)
+
+    return hash_sha256.hexdigest()
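The offset arithmetic follows the safetensors layout: the first 8 bytes are a little-endian integer giving the JSON header length n, so the hashed tensor data begins at byte n + 8. A worked example with illustrative bytes:

n = int.from_bytes(b"\xe8\x03\x00\x00\x00\x00\x00\x00", "little")   # 1000
offset = n + 8                                                      # data starts at byte 1008

Because only the tensor data is hashed, the hash stays stable when metadata in the header changes.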

modules/hypernetworks/hypernetwork.py
View File

@@ -1,24 +1,22 @@
-import csv
 import datetime
 import glob
 import html
 import os
-import sys
-import traceback
 import inspect
+from contextlib import closing

 import modules.textual_inversion.dataset
 import torch
 import tqdm
 from einops import rearrange, repeat
 from ldm.util import default
-from modules import devices, processing, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint
+from modules import devices, processing, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors
 from modules.textual_inversion import textual_inversion, logging
 from modules.textual_inversion.learn_schedule import LearnRateScheduler
 from torch import einsum
 from torch.nn.init import normal_, xavier_normal_, xavier_uniform_, kaiming_normal_, kaiming_uniform_, zeros_

-from collections import defaultdict, deque
+from collections import deque
 from statistics import stdev, mean
@@ -178,34 +176,34 @@ class Hypernetwork:
     def weights(self):
         res = []

-        for k, layers in self.layers.items():
+        for layers in self.layers.values():
             for layer in layers:
                 res += layer.parameters()

         return res

     def train(self, mode=True):
-        for k, layers in self.layers.items():
+        for layers in self.layers.values():
             for layer in layers:
                 layer.train(mode=mode)
                 for param in layer.parameters():
                     param.requires_grad = mode

     def to(self, device):
-        for k, layers in self.layers.items():
+        for layers in self.layers.values():
             for layer in layers:
                 layer.to(device)

         return self

     def set_multiplier(self, multiplier):
-        for k, layers in self.layers.items():
+        for layers in self.layers.values():
             for layer in layers:
                 layer.multiplier = multiplier

         return self

     def eval(self):
-        for k, layers in self.layers.items():
+        for layers in self.layers.values():
             for layer in layers:
                 layer.eval()
                 for param in layer.parameters():
@@ -326,16 +324,13 @@ def load_hypernetwork(name):
     if path is None:
         return None

-    hypernetwork = Hypernetwork()
-
     try:
+        hypernetwork = Hypernetwork()
         hypernetwork.load(path)
-    except Exception:
-        print(f"Error loading hypernetwork {path}", file=sys.stderr)
-        print(traceback.format_exc(), file=sys.stderr)
-        return None
-
-    return hypernetwork
+        return hypernetwork
+    except Exception:
+        errors.report(f"Error loading hypernetwork {path}", exc_info=True)
+        return None

 def load_hypernetworks(names, multipliers=None):
@@ -359,17 +354,6 @@ def load_hypernetworks(names, multipliers=None):
         shared.loaded_hypernetworks.append(hypernetwork)

-def find_closest_hypernetwork_name(search: str):
-    if not search:
-        return None
-    search = search.lower()
-    applicable = [name for name in shared.hypernetworks if search in name.lower()]
-    if not applicable:
-        return None
-    applicable = sorted(applicable, key=lambda name: len(name))
-    return applicable[0]

 def apply_single_hypernetwork(hypernetwork, context_k, context_v, layer=None):
     hypernetwork_layers = (hypernetwork.layers if hypernetwork is not None else {}).get(context_k.shape[2], None)
@@ -394,7 +378,7 @@ def apply_hypernetworks(hypernetworks, context, layer=None):
     return context_k, context_v

-def attention_CrossAttention_forward(self, x, context=None, mask=None):
+def attention_CrossAttention_forward(self, x, context=None, mask=None, **kwargs):
     h = self.heads

     q = self.to_q(x)
@@ -404,7 +388,7 @@ def attention_CrossAttention_forward(self, x, context=None, mask=None):
     k = self.to_k(context_k)
     v = self.to_v(context_v)

-    q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
+    q, k, v = (rearrange(t, 'b n (h d) -> (b h) n d', h=h) for t in (q, k, v))

     sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
@@ -452,18 +436,6 @@ def statistics(data):
     return total_information, recent_information

-def report_statistics(loss_info:dict):
-    keys = sorted(loss_info.keys(), key=lambda x: sum(loss_info[x]) / len(loss_info[x]))
-    for key in keys:
-        try:
-            print("Loss statistics for file " + key)
-            info, recent = statistics(list(loss_info[key]))
-            print(info)
-            print(recent)
-        except Exception as e:
-            print(e)

 def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None, activation_func=None, weight_init=None, add_layer_norm=False, use_dropout=False, dropout_structure=None):
     # Remove illegal characters from name.
     name = "".join( x for x in name if (x.isalnum() or x in "._- "))
@@ -620,7 +592,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi
     try:
         sd_hijack_checkpoint.add()

-        for i in range((steps-initial_step) * gradient_step):
+        for _ in range((steps-initial_step) * gradient_step):
             if scheduler.finished:
                 break
             if shared.state.interrupted:
@@ -740,6 +712,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi
                 preview_text = p.prompt

-                processed = processing.process_images(p)
-                image = processed.images[0] if len(processed.images) > 0 else None
+                with closing(p):
+                    processed = processing.process_images(p)
+                    image = processed.images[0] if len(processed.images) > 0 else None
@@ -771,12 +744,11 @@ Last saved image: {html.escape(last_saved_image)}<br/>
 </p>
 """
     except Exception:
-        print(traceback.format_exc(), file=sys.stderr)
+        errors.report("Exception in training hypernetwork", exc_info=True)
     finally:
         pbar.leave = False
         pbar.close()
         hypernetwork.eval()
-        #report_statistics(loss_dict)
         sd_hijack_checkpoint.remove()
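contextlib.closing() used in the training hunk simply guarantees .close() runs on exit; a minimal illustration with a made-up class:

from contextlib import closing

class Thing:
    def close(self):
        print("closed")

with closing(Thing()) as t:
    pass   # "closed" is printed when the block exits, even if the body raises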

modules/hypernetworks/ui.py
View File

@@ -1,19 +1,17 @@
 import html
-import os
-import re

 import gradio as gr
 import modules.hypernetworks.hypernetwork
 from modules import devices, sd_hijack, shared

 not_available = ["hardswish", "multiheadattention"]
-keys = list(x for x in modules.hypernetworks.hypernetwork.HypernetworkModule.activation_dict.keys() if x not in not_available)
+keys = [x for x in modules.hypernetworks.hypernetwork.HypernetworkModule.activation_dict if x not in not_available]

 def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None, activation_func=None, weight_init=None, add_layer_norm=False, use_dropout=False, dropout_structure=None):
     filename = modules.hypernetworks.hypernetwork.create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure, activation_func, weight_init, add_layer_norm, use_dropout, dropout_structure)

-    return gr.Dropdown.update(choices=sorted([x for x in shared.hypernetworks.keys()])), f"Created: {filename}", ""
+    return gr.Dropdown.update(choices=sorted(shared.hypernetworks)), f"Created: {filename}", ""

 def train_hypernetwork(*args):
View File

@@ -1,6 +1,6 @@
+from __future__ import annotations

 import datetime
-import sys
-import traceback

 import pytz
 import io
@@ -12,18 +12,27 @@ import re

 import numpy as np
 import piexif
 import piexif.helper
-from PIL import Image, ImageFont, ImageDraw, PngImagePlugin
-from fonts.ttf import Roboto
+from PIL import Image, ImageFont, ImageDraw, ImageColor, PngImagePlugin
 import string
 import json
 import hashlib

 from modules import sd_samplers, shared, script_callbacks, errors
-from modules.shared import opts, cmd_opts
+from modules.paths_internal import roboto_ttf_file
+from modules.shared import opts
+import modules.sd_vae as sd_vae

 LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)


+def get_font(fontsize: int):
+    try:
+        return ImageFont.truetype(opts.font or roboto_ttf_file, fontsize)
+    except Exception:
+        return ImageFont.truetype(roboto_ttf_file, fontsize)


 def image_grid(imgs, batch_size=1, rows=None):
     if rows is None:
         if opts.n_rows > 0:
@@ -132,6 +141,11 @@ class GridAnnotation:

 def draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin=0):
+    color_active = ImageColor.getcolor(opts.grid_text_active_color, 'RGB')
+    color_inactive = ImageColor.getcolor(opts.grid_text_inactive_color, 'RGB')
+    color_background = ImageColor.getcolor(opts.grid_background_color, 'RGB')
+
     def wrap(drawing, text, font, line_length):
         lines = ['']
         for word in text.split():
@@ -142,14 +156,8 @@ def draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin=0):
                 lines.append(word)
         return lines

-    def get_font(fontsize):
-        try:
-            return ImageFont.truetype(opts.font or Roboto, fontsize)
-        except Exception:
-            return ImageFont.truetype(Roboto, fontsize)
-
     def draw_texts(drawing, draw_x, draw_y, lines, initial_fnt, initial_fontsize):
-        for i, line in enumerate(lines):
+        for line in lines:
             fnt = initial_fnt
             fontsize = initial_fontsize
             while drawing.multiline_textsize(line.text, font=fnt)[0] > line.allowed_width and fontsize > 0:
@@ -167,9 +175,6 @@ def draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin=0):
                 fnt = get_font(fontsize)

-    color_active = (0, 0, 0)
-    color_inactive = (153, 153, 153)
-
     pad_left = 0 if sum([sum([len(line.text) for line in lines]) for lines in ver_texts]) == 0 else width * 3 // 4

     cols = im.width // width
@@ -178,7 +183,7 @@ def draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin=0):
     assert cols == len(hor_texts), f'bad number of horizontal texts: {len(hor_texts)}; must be {cols}'
     assert rows == len(ver_texts), f'bad number of vertical texts: {len(ver_texts)}; must be {rows}'

-    calc_img = Image.new("RGB", (1, 1), "white")
+    calc_img = Image.new("RGB", (1, 1), color_background)
     calc_d = ImageDraw.Draw(calc_img)

     for texts, allowed_width in zip(hor_texts + ver_texts, [width] * len(hor_texts) + [pad_left] * len(ver_texts)):
@@ -199,7 +204,7 @@ def draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin=0):
     pad_top = 0 if sum(hor_text_heights) == 0 else max(hor_text_heights) + line_spacing * 2

-    result = Image.new("RGB", (im.width + pad_left + margin * (cols-1), im.height + pad_top + margin * (rows-1)), "white")
+    result = Image.new("RGB", (im.width + pad_left + margin * (cols-1), im.height + pad_top + margin * (rows-1)), color_background)

     for row in range(rows):
         for col in range(cols):
@@ -301,10 +306,12 @@ def resize_image(resize_mode, im, width, height, upscaler_name=None):
         if ratio < src_ratio:
             fill_height = height // 2 - src_h // 2
-            res.paste(resized.resize((width, fill_height), box=(0, 0, width, 0)), box=(0, 0))
-            res.paste(resized.resize((width, fill_height), box=(0, resized.height, width, resized.height)), box=(0, fill_height + src_h))
+            if fill_height > 0:
+                res.paste(resized.resize((width, fill_height), box=(0, 0, width, 0)), box=(0, 0))
+                res.paste(resized.resize((width, fill_height), box=(0, resized.height, width, resized.height)), box=(0, fill_height + src_h))
         elif ratio > src_ratio:
             fill_width = width // 2 - src_w // 2
-            res.paste(resized.resize((fill_width, height), box=(0, 0, 0, height)), box=(0, 0))
-            res.paste(resized.resize((fill_width, height), box=(resized.width, 0, resized.width, height)), box=(fill_width + src_w, 0))
+            if fill_width > 0:
+                res.paste(resized.resize((fill_width, height), box=(0, 0, 0, height)), box=(0, 0))
+                res.paste(resized.resize((fill_width, height), box=(resized.width, 0, resized.width, height)), box=(fill_width + src_w, 0))
@@ -318,6 +325,7 @@ re_nonletters = re.compile(r'[\s' + string.punctuation + ']+')
 re_pattern = re.compile(r"(.*?)(?:\[([^\[\]]+)\]|$)")
 re_pattern_arg = re.compile(r"(.*)<([^>]*)>$")
 max_filename_part_length = 128
+NOTHING_AND_SKIP_PREVIOUS_TEXT = object()


 def sanitize_filename_part(text, replace_spaces=True):
@@ -334,8 +342,20 @@ def sanitize_filename_part(text, replace_spaces=True):

 class FilenameGenerator:
+    def get_vae_filename(self): #get the name of the VAE file.
+        if sd_vae.loaded_vae_file is None:
+            return "NoneType"
+        file_name = os.path.basename(sd_vae.loaded_vae_file)
+        split_file_name = file_name.split('.')
+        if len(split_file_name) > 1 and split_file_name[0] == '':
+            return split_file_name[1] # if the first character of the filename is "." then [1] is obtained.
+        else:
+            return split_file_name[0]
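The [vae_filename] token introduced here strips the extension from the loaded VAE's filename, with a special case for names that begin with a dot. A minimal standalone sketch of that splitting rule (the filenames are made up for illustration):

# Sketch of the VAE-name extraction: take the first dot-separated piece,
# unless the name starts with "." (empty first piece), then take the second.
for file_name in ("vae-ft-mse-840000.safetensors", ".hidden.vae.pt"):
    parts = file_name.split('.')
    print(parts[1] if len(parts) > 1 and parts[0] == '' else parts[0])
# vae-ft-mse-840000
# hidden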
     replacements = {
         'seed': lambda self: self.seed if self.seed is not None else '',
+        'seed_first': lambda self: self.seed if self.p.batch_size == 1 else self.p.all_seeds[0],
+        'seed_last': lambda self: NOTHING_AND_SKIP_PREVIOUS_TEXT if self.p.batch_size == 1 else self.p.all_seeds[-1],
         'steps': lambda self: self.p and self.p.steps,
         'cfg': lambda self: self.p and self.p.cfg_scale,
         'width': lambda self: self.image.width,
@@ -343,7 +363,7 @@ class FilenameGenerator:
         'styles': lambda self: self.p and sanitize_filename_part(", ".join([style for style in self.p.styles if not style == "None"]) or "None", replace_spaces=False),
         'sampler': lambda self: self.p and sanitize_filename_part(self.p.sampler_name, replace_spaces=False),
         'model_hash': lambda self: getattr(self.p, "sd_model_hash", shared.sd_model.sd_model_hash),
-        'model_name': lambda self: sanitize_filename_part(shared.sd_model.sd_checkpoint_info.model_name, replace_spaces=False),
+        'model_name': lambda self: sanitize_filename_part(shared.sd_model.sd_checkpoint_info.name_for_extra, replace_spaces=False),
         'date': lambda self: datetime.datetime.now().strftime('%Y-%m-%d'),
         'datetime': lambda self, *args: self.datetime(*args), # accepts formats: [datetime], [datetime<Format>], [datetime<Format><Time Zone>]
         'job_timestamp': lambda self: getattr(self.p, "job_timestamp", shared.state.job_timestamp),
@@ -352,14 +372,40 @@ class FilenameGenerator:
         'prompt_no_styles': lambda self: self.prompt_no_style(),
         'prompt_spaces': lambda self: sanitize_filename_part(self.prompt, replace_spaces=False),
         'prompt_words': lambda self: self.prompt_words(),
+        'batch_number': lambda self: NOTHING_AND_SKIP_PREVIOUS_TEXT if self.p.batch_size == 1 or self.zip else self.p.batch_index + 1,
+        'batch_size': lambda self: self.p.batch_size,
+        'generation_number': lambda self: NOTHING_AND_SKIP_PREVIOUS_TEXT if (self.p.n_iter == 1 and self.p.batch_size == 1) or self.zip else self.p.iteration * self.p.batch_size + self.p.batch_index + 1,
+        'hasprompt': lambda self, *args: self.hasprompt(*args), # accepts formats:[hasprompt<prompt1|default><prompt2>..]
+        'clip_skip': lambda self: opts.data["CLIP_stop_at_last_layers"],
+        'denoising': lambda self: self.p.denoising_strength if self.p and self.p.denoising_strength else NOTHING_AND_SKIP_PREVIOUS_TEXT,
+        'user': lambda self: self.p.user,
+        'vae_filename': lambda self: self.get_vae_filename(),
+        'none': lambda self: '', # Overrides the default so you can get just the sequence number
     }
     default_time_format = '%Y%m%d%H%M%S'

-    def __init__(self, p, seed, prompt, image):
+    def __init__(self, p, seed, prompt, image, zip=False):
         self.p = p
         self.seed = seed
         self.prompt = prompt
         self.image = image
+        self.zip = zip

+    def hasprompt(self, *args):
+        lower = self.prompt.lower()
+        if self.p is None or self.prompt is None:
+            return None
+        outres = ""
+        for arg in args:
+            if arg != "":
+                division = arg.split("|")
+                expected = division[0].lower()
+                default = division[1] if len(division) > 1 else ""
+                if lower.find(expected) >= 0:
+                    outres = f'{outres}{expected}'
+                else:
+                    outres = outres if default == "" else f'{outres}{default}'
+        return sanitize_filename_part(outres)
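For reference, each argument to the [hasprompt<prompt1|default><prompt2>..] token is an expected|default pair: the expected string is appended when it occurs in the prompt, otherwise the default (if any) is. A minimal sketch of that matching, with a hand-written prompt for illustration:

# "portrait" is found, so it is emitted; "dog" is not and has no default.
prompt = "a portrait of a cat".lower()
out = ""
for arg in ("portrait|landscape", "dog|"):
    expected, _, default = arg.partition("|")
    out += expected if prompt.find(expected) >= 0 else default
print(out)  # portrait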
     def prompt_no_style(self):
         if self.p is None or self.prompt is None:
@@ -367,7 +413,7 @@ class FilenameGenerator:
         prompt_no_style = self.prompt
         for style in shared.prompt_styles.get_style_prompts(self.p.styles):
-            if len(style) > 0:
+            if style:
                 for part in style.split("{prompt}"):
                     prompt_no_style = prompt_no_style.replace(part, "").replace(", ,", ",").strip().strip(',')
@@ -376,7 +422,7 @@ class FilenameGenerator:
         return sanitize_filename_part(prompt_no_style, replace_spaces=False)

     def prompt_words(self):
-        words = [x for x in re_nonletters.split(self.prompt or "") if len(x) > 0]
+        words = [x for x in re_nonletters.split(self.prompt or "") if x]
         if len(words) == 0:
             words = ["empty"]
         return sanitize_filename_part(" ".join(words[0:opts.directories_max_prompt_words]), replace_spaces=False)
@@ -384,16 +430,16 @@ class FilenameGenerator:
     def datetime(self, *args):
         time_datetime = datetime.datetime.now()

-        time_format = args[0] if len(args) > 0 and args[0] != "" else self.default_time_format
+        time_format = args[0] if (args and args[0] != "") else self.default_time_format
         try:
             time_zone = pytz.timezone(args[1]) if len(args) > 1 else None
-        except pytz.exceptions.UnknownTimeZoneError as _:
+        except pytz.exceptions.UnknownTimeZoneError:
             time_zone = None

         time_zone_time = time_datetime.astimezone(time_zone)
         try:
             formatted_time = time_zone_time.strftime(time_format)
-        except (ValueError, TypeError) as _:
+        except (ValueError, TypeError):
             formatted_time = time_zone_time.strftime(self.default_time_format)

         return sanitize_filename_part(formatted_time, replace_spaces=False)
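The [datetime<Format><Time Zone>] token resolves as above: an unknown time zone falls back to local time, and an invalid format string falls back to the default format. A small self-contained sketch of the same pytz/strftime flow (assuming pytz is installed; format_now is an illustrative name, not the module's API):

import datetime
import pytz

def format_now(fmt='%Y%m%d%H%M%S', tz_name=None):
    try:
        tz = pytz.timezone(tz_name) if tz_name else None
    except pytz.exceptions.UnknownTimeZoneError:
        tz = None  # unknown zone: fall back to local time
    zoned = datetime.datetime.now().astimezone(tz)
    try:
        return zoned.strftime(fmt)
    except (ValueError, TypeError):
        return zoned.strftime('%Y%m%d%H%M%S')  # bad format: fall back

print(format_now('%Y-%m-%d', 'Asia/Tokyo'))  # e.g. 2023-07-25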
@@ -403,9 +449,9 @@ class FilenameGenerator:
         for m in re_pattern.finditer(x):
             text, pattern = m.groups()
-            res += text

             if pattern is None:
+                res += text
                 continue

             pattern_args = []
@@ -423,14 +469,15 @@ class FilenameGenerator:
                 replacement = fun(self, *pattern_args)
             except Exception:
                 replacement = None
-                print(f"Error adding [{pattern}] to filename", file=sys.stderr)
-                print(traceback.format_exc(), file=sys.stderr)
+                errors.report(f"Error adding [{pattern}] to filename", exc_info=True)

-            if replacement is not None:
-                res += str(replacement)
+            if replacement == NOTHING_AND_SKIP_PREVIOUS_TEXT:
+                continue
+            elif replacement is not None:
+                res += text + str(replacement)
                 continue

-            res += f'[{pattern}]'
+            res += f'{text}[{pattern}]'

         return res
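The NOTHING_AND_SKIP_PREVIOUS_TEXT sentinel changes how apply() assembles the result: literal text before a token is now emitted only together with a successful replacement, so a token that returns the sentinel also swallows its leading separator. A condensed sketch of that loop (SKIP and table are illustrative stand-ins for the module's sentinel and replacements dict):

import re

SKIP = object()  # stands in for NOTHING_AND_SKIP_PREVIOUS_TEXT
re_pattern = re.compile(r"(.*?)(?:\[([^\[\]]+)\]|$)")
table = {'seed': lambda: 12345, 'batch_number': lambda: SKIP}

res = ""
for m in re_pattern.finditer("[seed]-[batch_number]"):
    text, pattern = m.groups()
    if pattern is None:
        res += text
        continue
    replacement = table[pattern.lower()]()
    if replacement == SKIP:
        continue  # drop the "-" preceding the token as well
    elif replacement is not None:
        res += text + str(replacement)
        continue
    res += f'{text}[{pattern}]'
print(res)  # 12345 (not "12345-") for a single-image batch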
@@ -443,20 +490,66 @@ def get_next_sequence_number(path, basename):
     """
     result = -1
     if basename != '':
-        basename = basename + "-"
+        basename = f"{basename}-"

     prefix_length = len(basename)
     for p in os.listdir(path):
         if p.startswith(basename):
-            l = os.path.splitext(p[prefix_length:])[0].split('-') # splits the filename (removing the basename first if one is defined, so the sequence number is always the first element)
+            parts = os.path.splitext(p[prefix_length:])[0].split('-') # splits the filename (removing the basename first if one is defined, so the sequence number is always the first element)
             try:
-                result = max(int(l[0]), result)
+                result = max(int(parts[0]), result)
             except ValueError:
                 pass

     return result + 1
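With an empty basename, get_next_sequence_number() effectively parses the number before the first '-' of each file name and returns one past the maximum; a toy listing (hand-written here) shows the idea:

names = ["00001-a cat.png", "00002-a dog.png", "notes.txt"]
result = -1
for name in names:
    parts = name.rsplit('.', 1)[0].split('-')
    try:
        result = max(int(parts[0]), result)
    except ValueError:
        pass  # files without a numeric prefix are ignored
print(result + 1)  # 3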
+def save_image_with_geninfo(image, geninfo, filename, extension=None, existing_pnginfo=None, pnginfo_section_name='parameters'):
+    """
+    Saves image to filename, including geninfo as text information for generation info.
+    For PNG images, geninfo is added to existing pnginfo dictionary using the pnginfo_section_name argument as key.
+    For JPG images, there's no dictionary and geninfo just replaces the EXIF description.
+    """
+
+    if extension is None:
+        extension = os.path.splitext(filename)[1]
+
+    image_format = Image.registered_extensions()[extension]
+
+    if extension.lower() == '.png':
+        existing_pnginfo = existing_pnginfo or {}
+        if opts.enable_pnginfo:
+            existing_pnginfo[pnginfo_section_name] = geninfo
+
+        if opts.enable_pnginfo:
+            pnginfo_data = PngImagePlugin.PngInfo()
+            for k, v in (existing_pnginfo or {}).items():
+                pnginfo_data.add_text(k, str(v))
+        else:
+            pnginfo_data = None
+
+        image.save(filename, format=image_format, quality=opts.jpeg_quality, pnginfo=pnginfo_data)
+
+    elif extension.lower() in (".jpg", ".jpeg", ".webp"):
+        if image.mode == 'RGBA':
+            image = image.convert("RGB")
+        elif image.mode == 'I;16':
+            image = image.point(lambda p: p * 0.0038910505836576).convert("RGB" if extension.lower() == ".webp" else "L")
+
+        image.save(filename, format=image_format, quality=opts.jpeg_quality, lossless=opts.webp_lossless)
+
+        if opts.enable_pnginfo and geninfo is not None:
+            exif_bytes = piexif.dump({
+                "Exif": {
+                    piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(geninfo or "", encoding="unicode")
+                },
+            })
+
+            piexif.insert(exif_bytes, filename)
+    else:
+        image.save(filename, format=image_format, quality=opts.jpeg_quality)
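The PNG branch above stores geninfo in a tEXt chunk under pnginfo_section_name, which is what read_info_from_image() later pops back out as 'parameters'. A minimal Pillow-only round trip demonstrating the same mechanism (the file name is illustrative):

from PIL import Image, PngImagePlugin

pnginfo = PngImagePlugin.PngInfo()
pnginfo.add_text('parameters', 'a cat, Steps: 20, Seed: 1')
Image.new('RGB', (8, 8)).save('sample.png', pnginfo=pnginfo)
print(Image.open('sample.png').info.get('parameters'))  # a cat, Steps: 20, Seed: 1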
 def save_image(image, path, basename, seed=None, prompt=None, extension='png', info=None, short_filename=False, no_prompt=False, grid=False, pnginfo_section_name='parameters', p=None, existing_info=None, forced_filename=None, suffix="", save_to_dirs=None):
     """Save an image.
@@ -509,12 +602,12 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
     else:
         file_decoration = opts.samples_filename_pattern or "[seed]-[prompt_spaces]"

-    file_decoration = namegen.apply(file_decoration) + suffix
-
     add_number = opts.save_images_add_number or file_decoration == ''

     if file_decoration != "" and add_number:
-        file_decoration = "-" + file_decoration
+        file_decoration = f"-{file_decoration}"
+
+    file_decoration = namegen.apply(file_decoration) + suffix

     if add_number:
         basecount = get_next_sequence_number(path, basename)
@@ -541,38 +634,13 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
         info = params.pnginfo.get(pnginfo_section_name, None)

     def _atomically_save_image(image_to_save, filename_without_extension, extension):
-        # save image with .tmp extension to avoid race condition when another process detects new image in the directory
-        temp_file_path = filename_without_extension + ".tmp"
-        image_format = Image.registered_extensions()[extension]
-
-        if extension.lower() == '.png':
-            pnginfo_data = PngImagePlugin.PngInfo()
-            if opts.enable_pnginfo:
-                for k, v in params.pnginfo.items():
-                    pnginfo_data.add_text(k, str(v))
-
-            image_to_save.save(temp_file_path, format=image_format, quality=opts.jpeg_quality, pnginfo=pnginfo_data)
-
-        elif extension.lower() in (".jpg", ".jpeg", ".webp"):
-            if image_to_save.mode == 'RGBA':
-                image_to_save = image_to_save.convert("RGB")
-            elif image_to_save.mode == 'I;16':
-                image_to_save = image_to_save.point(lambda p: p * 0.0038910505836576).convert("RGB" if extension.lower() == ".webp" else "L")
-
-            image_to_save.save(temp_file_path, format=image_format, quality=opts.jpeg_quality, lossless=opts.webp_lossless)
-
-            if opts.enable_pnginfo and info is not None:
-                exif_bytes = piexif.dump({
-                    "Exif": {
-                        piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(info or "", encoding="unicode")
-                    },
-                })
-
-                piexif.insert(exif_bytes, temp_file_path)
-        else:
-            image_to_save.save(temp_file_path, format=image_format, quality=opts.jpeg_quality)
-
-        # atomically rename the file with correct extension
+        """
+        save image with .tmp extension to avoid race condition when another process detects new image in the directory
+        """
+        temp_file_path = f"{filename_without_extension}.tmp"
+
+        save_image_with_geninfo(image_to_save, info, temp_file_path, extension, existing_pnginfo=params.pnginfo, pnginfo_section_name=pnginfo_section_name)
+
+        # atomically rename the file with correct extension
         os.replace(temp_file_path, filename_without_extension + extension)

     fullfn_without_extension, extension = os.path.splitext(params.filename)
@@ -588,12 +656,18 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
     oversize = image.width > opts.target_side_length or image.height > opts.target_side_length
     if opts.export_for_4chan and (oversize or os.stat(fullfn).st_size > opts.img_downscale_threshold * 1024 * 1024):
         ratio = image.width / image.height
+        resize_to = None
         if oversize and ratio > 1:
-            image = image.resize((round(opts.target_side_length), round(image.height * opts.target_side_length / image.width)), LANCZOS)
+            resize_to = round(opts.target_side_length), round(image.height * opts.target_side_length / image.width)
         elif oversize:
-            image = image.resize((round(image.width * opts.target_side_length / image.height), round(opts.target_side_length)), LANCZOS)
+            resize_to = round(image.width * opts.target_side_length / image.height), round(opts.target_side_length)
+
+        if resize_to is not None:
+            try:
+                # Resizing image with LANCZOS could throw an exception if e.g. image mode is I;16
+                image = image.resize(resize_to, LANCZOS)
+            except Exception:
+                image = image.resize(resize_to)

         try:
             _atomically_save_image(image, fullfn_without_extension, ".jpg")
         except Exception as e:
@@ -602,7 +676,7 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
     if opts.save_txt and info is not None:
         txt_fullfn = f"{fullfn_without_extension}.txt"
         with open(txt_fullfn, "w", encoding="utf8") as file:
-            file.write(info + "\n")
+            file.write(f"{info}\n")
     else:
         txt_fullfn = None
@@ -611,8 +685,15 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
     return fullfn, txt_fullfn


-def read_info_from_image(image):
-    items = image.info or {}
+IGNORED_INFO_KEYS = {
+    'jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif',
+    'loop', 'background', 'timestamp', 'duration', 'progressive', 'progression',
+    'icc_profile', 'chromaticity', 'photoshop',
+}
+
+
+def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]:
+    items = (image.info or {}).copy()

     geninfo = items.pop('parameters', None)
@@ -628,8 +709,7 @@ def read_info_from_image(image):
         items['exif comment'] = exif_comment
         geninfo = exif_comment

-    for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif',
-                  'loop', 'background', 'timestamp', 'duration']:
+    for field in IGNORED_INFO_KEYS:
         items.pop(field, None)

     if items.get("Software", None) == "NovelAI":
@@ -641,8 +721,7 @@ def read_info_from_image(image):
 Negative prompt: {json_info["uc"]}
 Steps: {json_info["steps"]}, Sampler: {sampler}, CFG scale: {json_info["scale"]}, Seed: {json_info["seed"]}, Size: {image.width}x{image.height}, Clip skip: 2, ENSD: 31337"""
         except Exception:
-            print("Error parsing NovelAI image generation parameters:", file=sys.stderr)
-            print(traceback.format_exc(), file=sys.stderr)
+            errors.report("Error parsing NovelAI image generation parameters", exc_info=True)

     return geninfo, items

modules/img2img.py

@@ -1,31 +1,32 @@
-import math
 import os
-import sys
-import traceback
+from contextlib import closing
+from pathlib import Path

 import numpy as np
-from PIL import Image, ImageOps, ImageFilter, ImageEnhance, ImageChops
+from PIL import Image, ImageOps, ImageFilter, ImageEnhance, ImageChops, UnidentifiedImageError
+import gradio as gr

-from modules import devices, sd_samplers
-from modules.generation_parameters_copypaste import create_override_settings_dict
+from modules import sd_samplers, images as imgutil
+from modules.generation_parameters_copypaste import create_override_settings_dict, parse_generation_parameters
 from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
 from modules.shared import opts, state
+from modules.images import save_image
 import modules.shared as shared
 import modules.processing as processing
 from modules.ui import plaintext_to_html
-import modules.images as images
 import modules.scripts


-def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args):
+def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=False, scale_by=1.0, use_png_info=False, png_info_props=None, png_info_dir=None):
     processing.fix_seed(p)

-    images = shared.listfiles(input_dir)
+    images = list(shared.walk_files(input_dir, allowed_extensions=(".png", ".jpg", ".jpeg", ".webp")))

     is_inpaint_batch = False
     if inpaint_mask_dir:
         inpaint_masks = shared.listfiles(inpaint_mask_dir)
-        is_inpaint_batch = len(inpaint_masks) > 0
+        is_inpaint_batch = bool(inpaint_masks)

     if is_inpaint_batch:
         print(f"\nInpaint batch is enabled. {len(inpaint_masks)} masks found.")
@@ -38,6 +39,14 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args):

     state.job_count = len(images) * p.n_iter

+    # extract "default" params to use in case getting png info fails
+    prompt = p.prompt
+    negative_prompt = p.negative_prompt
+    seed = p.seed
+    cfg_scale = p.cfg_scale
+    sampler_name = p.sampler_name
+    steps = p.steps
+
     for i, image in enumerate(images):
         state.job = f"{i+1} out of {len(images)}"
         if state.skipped:
@@ -46,39 +55,80 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args):
         if state.interrupted:
             break

-        img = Image.open(image)
+        try:
+            img = Image.open(image)
+        except UnidentifiedImageError as e:
+            print(e)
+            continue

         # Use the EXIF orientation of photos taken by smartphones.
         img = ImageOps.exif_transpose(img)

+        if to_scale:
+            p.width = int(img.width * scale_by)
+            p.height = int(img.height * scale_by)
+
         p.init_images = [img] * p.batch_size

+        image_path = Path(image)
         if is_inpaint_batch:
             # try to find corresponding mask for an image using simple filename matching
-            mask_image_path = os.path.join(inpaint_mask_dir, os.path.basename(image))
-            # if not found use first one ("same mask for all images" use-case)
-            if not mask_image_path in inpaint_masks:
+            if len(inpaint_masks) == 1:
                 mask_image_path = inpaint_masks[0]
+            else:
+                # try to find corresponding mask for an image using simple filename matching
+                mask_image_dir = Path(inpaint_mask_dir)
+                masks_found = list(mask_image_dir.glob(f"{image_path.stem}.*"))
+
+                if len(masks_found) == 0:
+                    print(f"Warning: mask is not found for {image_path} in {mask_image_dir}. Skipping it.")
+                    continue
+
+                # it should contain only 1 matching mask
+                # otherwise user has many masks with the same name but different extensions
+                mask_image_path = masks_found[0]
+
             mask_image = Image.open(mask_image_path)
             p.image_mask = mask_image

+        if use_png_info:
+            try:
+                info_img = img
+                if png_info_dir:
+                    info_img_path = os.path.join(png_info_dir, os.path.basename(image))
+                    info_img = Image.open(info_img_path)
+                geninfo, _ = imgutil.read_info_from_image(info_img)
+                parsed_parameters = parse_generation_parameters(geninfo)
+                parsed_parameters = {k: v for k, v in parsed_parameters.items() if k in (png_info_props or {})}
+            except Exception:
+                parsed_parameters = {}
+
+            p.prompt = prompt + (" " + parsed_parameters["Prompt"] if "Prompt" in parsed_parameters else "")
+            p.negative_prompt = negative_prompt + (" " + parsed_parameters["Negative prompt"] if "Negative prompt" in parsed_parameters else "")
+            p.seed = int(parsed_parameters.get("Seed", seed))
+            p.cfg_scale = float(parsed_parameters.get("CFG scale", cfg_scale))
+            p.sampler_name = parsed_parameters.get("Sampler", sampler_name)
+            p.steps = int(parsed_parameters.get("Steps", steps))
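In effect, each batch image can restore selected generation parameters from its own PNG info (or a sibling file in png_info_dir), with the UI values captured before the loop as fallbacks. A toy illustration of that override precedence (the parsed dict is hand-written here):

seed, cfg_scale, steps = 42, 7.0, 20      # defaults captured before the loop
parsed = {'Seed': '1234', 'Steps': '30'}  # as parse_generation_parameters might return
allowed = {'Seed', 'Steps'}               # the png_info_props selection
parsed = {k: v for k, v in parsed.items() if k in allowed}
print(int(parsed.get('Seed', seed)), float(parsed.get('CFG scale', cfg_scale)), int(parsed.get('Steps', steps)))
# 1234 7.0 30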
         proc = modules.scripts.scripts_img2img.run(p, *args)
         if proc is None:
             proc = process_images(p)

         for n, processed_image in enumerate(proc.images):
-            filename = os.path.basename(image)
+            filename = image_path.stem
+            infotext = proc.infotext(p, n)
+            relpath = os.path.dirname(os.path.relpath(image, input_dir))

             if n > 0:
-                left, right = os.path.splitext(filename)
-                filename = f"{left}-{n}{right}"
+                filename += f"-{n}"

             if not save_normally:
-                os.makedirs(output_dir, exist_ok=True)
+                os.makedirs(os.path.join(output_dir, relpath), exist_ok=True)
                 if processed_image.mode == 'RGBA':
                     processed_image = processed_image.convert("RGB")
-                processed_image.save(os.path.join(output_dir, filename))
+                save_image(processed_image, os.path.join(output_dir, relpath), None, extension=opts.samples_format, info=infotext, forced_filename=filename, save_to_dirs=False)


-def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_index: int, mask_blur: int, mask_alpha: float, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, *args):
+def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_index: int, mask_blur: int, mask_alpha: float, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args):
     override_settings = create_override_settings_dict(override_settings_texts)

     is_batch = mode == 5
@@ -92,7 +142,8 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
     elif mode == 2: # inpaint
         image, mask = init_img_with_mask["image"], init_img_with_mask["mask"]
         alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')
-        mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L')
+        mask = mask.convert('L').point(lambda x: 255 if x > 128 else 0, mode='1')
+        mask = ImageChops.lighter(alpha_mask, mask).convert('L')
         image = image.convert("RGB")
     elif mode == 3: # inpaint sketch
         image = inpaint_color_sketch
@@ -114,6 +165,12 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
     if image is not None:
         image = ImageOps.exif_transpose(image)

+    if selected_scale_tab == 1 and not is_batch:
+        assert image, "Can't scale by because no image is selected"
+
+        width = int(image.width * scale_by)
+        height = int(image.height * scale_by)
+
     assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'

     p = StableDiffusionProcessingImg2Img(
@@ -151,19 +208,22 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
         override_settings=override_settings,
     )

-    p.scripts = modules.scripts.scripts_txt2img
+    p.scripts = modules.scripts.scripts_img2img
     p.script_args = args

+    p.user = request.username
+
     if shared.cmd_opts.enable_console_prompts:
         print(f"\nimg2img: {prompt}", file=shared.progress_print_out)

     if mask:
         p.extra_generation_params["Mask blur"] = mask_blur

-    if is_batch:
-        assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"
+    with closing(p):
+        if is_batch:
+            assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"

-        process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args)
+            process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)

-        processed = Processed(p, [], p.seed, "")
-    else:
+            processed = Processed(p, [], p.seed, "")
+        else:
@@ -171,8 +231,6 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
-        if processed is None:
-            processed = process_images(p)
-
-    p.close()
+            if processed is None:
+                processed = process_images(p)

     shared.total_tqdm.clear()

     generation_info_js = processed.js()
@@ -182,4 +240,4 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
     if opts.do_not_show_images:
         processed.images = []

-    return processed.images, generation_info_js, plaintext_to_html(processed.info), plaintext_to_html(processed.comments)
+    return processed.images, generation_info_js, plaintext_to_html(processed.info), plaintext_to_html(processed.comments, classname="comments")

Some files were not shown because too many files have changed in this diff.