Commit Graph

204 Commits

Author SHA1 Message Date
AUTOMATIC
ee10c41e2a Merge remote-tracking branch 'origin/steve3d' 2022-10-12 08:35:52 +03:00
AUTOMATIC1111
2e2d45b281
Merge pull request #2143 from JC-Array/deepdanbooru_pre_process
deepbooru tags for textual inversion preprocessing
2022-10-12 08:35:27 +03:00
AUTOMATIC
6ac2ec2b78 create dir for hypernetworks 2022-10-12 07:01:20 +03:00
supersteve3d
65b973ac4e
Update shared.py
Correct typo to "Unload VAE and CLIP from VRAM when training" in settings tab.
2022-10-12 08:21:52 +08:00
JC_Array
f53f703aeb resolved conflicts, moved settings under interrogate section, settings only show if deepbooru flag is enabled 2022-10-11 18:12:12 -05:00
JC-Array
963d986396
Merge branch 'AUTOMATIC1111:master' into deepdanbooru_pre_process 2022-10-11 17:33:15 -05:00
AUTOMATIC
d4ea5f4d86 add an option to unload models during hypernetwork training to save VRAM 2022-10-11 19:03:08 +03:00
brkirch
c0484f1b98 Add cross-attention optimization from InvokeAI
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
* Add command line option for it
* Make it default when CUDA is unavailable
2022-10-11 17:24:00 +03:00
AUTOMATIC
873efeed49 rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have 2022-10-11 15:51:30 +03:00
JamnedZ
5992564448 Cleaned ngrok integration 2022-10-11 15:38:53 +03:00
AUTOMATIC
530103b586 fixes related to merge 2022-10-11 14:53:02 +03:00
hentailord85ez
5e2627a1a6
Comma backtrack padding (#2192)
Comma backtrack padding
2022-10-11 09:55:28 +03:00
Kenneth
8617396c6d Added slider for deepbooru score threshold in settings 2022-10-11 09:43:16 +03:00
JC-Array
47f5e216da
Merge branch 'deepdanbooru_pre_process' into master 2022-10-10 18:10:49 -05:00
JC_Array
76ef3d75f6 added deepbooru settings (threshold and sort by alpha or likelihood) 2022-10-10 18:01:49 -05:00
AUTOMATIC
f98338faa8 add an option to not add watermark to created images 2022-10-10 23:15:48 +03:00
AUTOMATIC1111
b3d3b335cf
Merge pull request #2131 from ssysm/upstream-master
Add VAE Path Arguments
2022-10-10 20:45:14 +03:00
AUTOMATIC
39919c40dd add eta noise seed delta option 2022-10-10 20:32:44 +03:00
AUTOMATIC
7349088d32 --no-half-vae 2022-10-10 16:16:29 +03:00
ssysm
6fdad291bd Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master 2022-10-09 23:20:39 -04:00
ssysm
cc92dc1f8d add vae path args 2022-10-09 23:17:29 -04:00
Fampai
a14f7bf113 Corrected CLIP Layer Ignore description and updated its range to the max possible 2022-10-09 22:31:23 +03:00
AUTOMATIC
6c383d2e82 show model selection setting on top of page 2022-10-09 22:24:07 +03:00
AUTOMATIC
875ddfeecf added guard for torch.load to prevent loading pickles with unknown content 2022-10-09 17:58:43 +03:00
AUTOMATIC
e6e8cabe0c change up #2056 to make it work how I want it to, plus make xy plot write correct values to images 2022-10-09 14:57:48 +03:00
William Moorehouse
d6d10a37bf Added extended model details to infotext 2022-10-09 14:49:15 +03:00
Nicolas Noullet
1ffeb42d38 Fix typo 2022-10-09 11:10:13 +03:00
Fampai
122d42687b Fix VRAM Issue by only loading in hypernetwork when selected in settings 2022-10-09 11:08:11 +03:00
AUTOMATIC1111
e00b4df7c6
Merge pull request #1752 from Greendayle/dev/deepdanbooru
Added DeepDanbooru interrogator
2022-10-09 10:52:21 +03:00
Aidan Holland
432782163a chore: Fix typos 2022-10-08 22:42:30 +03:00
Fampai
1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 2022-10-08 22:10:37 +03:00
Greendayle
0ec80f0125
Merge branch 'master' into dev/deepdanbooru 2022-10-08 18:28:22 +02:00
AUTOMATIC
3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 2022-10-08 19:22:15 +03:00
Greendayle
01f8cb4447 made deepdanbooru optional, added to readme, automatic download of deepbooru model 2022-10-08 18:02:56 +02:00
AUTOMATIC
dc1117233e simplify xformers options: --xformers to enable and that's it 2022-10-08 17:02:18 +03:00
AUTOMATIC1111
48feae37ff
Merge pull request #1851 from C43H66N12O12S2/flash
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2
ddfa9a9786
add xformers_available shared variable 2022-10-08 16:20:41 +03:00
AUTOMATIC
4999eb2ef9 do not let user choose his own prompt token count limit 2022-10-08 14:25:47 +03:00
Trung Ngo
00117a07ef check specifically for skipped 2022-10-08 13:40:39 +03:00
Trung Ngo
786d9f63aa Add button to skip the current iteration 2022-10-08 13:40:39 +03:00
AUTOMATIC
706d5944a0 let user choose his own prompt token count limit 2022-10-08 13:38:57 +03:00
AUTOMATIC
bad7cb29ce added support for hypernetworks (???) 2022-10-07 10:17:52 +03:00
C43H66N12O12S2
da4ab2707b
Update shared.py 2022-10-07 05:23:06 +03:00
Milly
cf7c784fcc Removed duplicate defined models_path
Use `modules.paths.models_path` instead of `modules.shared.model_path`.
2022-10-06 20:29:12 +03:00
DepFA
fec71e4de2 Default window title progress updates to on 2022-10-06 17:58:52 +03:00
DepFA
be71115b1a Update shared.py 2022-10-06 17:58:52 +03:00
AUTOMATIC
5993df24a1 integrate the new samplers PR 2022-10-06 14:12:52 +03:00
C43H66N12O12S2
3ddf80a9db add variant setting 2022-10-06 13:42:21 +03:00
AUTOMATIC
5f24b7bcf4 option to let users select which samplers they want to hide 2022-10-06 12:08:59 +03:00
DepFA
55400c981b Set gradio-img2img-tool default to 'editor' 2022-10-06 08:46:32 +03:00