AUTOMATIC
f9ac3352cb
change hypernets to use sha256 hashes
2023-01-14 10:25:37 +03:00
AUTOMATIC
a95f135308
change hash to sha256
2023-01-14 09:56:59 +03:00
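The two commits above move hypernetwork hashing to SHA-256. A minimal sketch of hashing a model file with Python's stdlib `hashlib`, reading in chunks so large checkpoints don't need to fit in memory (function name and chunk size are illustrative, not the webui's actual helper):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading 1 MiB at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```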
AUTOMATIC1111
9cd7716753
Merge branch 'master' into tensorboard
2023-01-13 14:57:38 +03:00
Vladimir Mandic
3f43d8a966
set descriptions
2023-01-11 10:28:55 -05:00
aria1th
a4a5475cfa
Variable dropout rate
...
Implements variable dropout rate from #4549
Fixes the hypernetwork multiplier being able to be modified during training; also guards against user error by setting the multiplier to lower values for training.
Changes function name to match the torch.nn.Module standard
Fixes RNG reset issue when generating previews by restoring RNG state
2023-01-10 14:56:57 +09:00
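The "Variable dropout rate" entry above lets dropout probability differ per hypernetwork layer rather than being one global value. The exact scheme from #4549 is not shown in this log; a plausible minimal sketch is a linear ramp from zero at the first layer up to a maximum at the last (function and parameters are assumptions for illustration):

```python
def variable_dropout_rates(num_layers, max_rate):
    """Illustrative per-layer dropout schedule: linearly scale the
    dropout probability from 0.0 (first layer) to max_rate (last layer)."""
    if num_layers <= 1:
        return [0.0] * num_layers
    return [max_rate * i / (num_layers - 1) for i in range(num_layers)]
```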
AUTOMATIC
1fbb6f9ebe
make a dropdown for prompt template selection
2023-01-09 23:35:40 +03:00
dan
72497895b9
Move batchsize check
2023-01-08 02:57:36 +08:00
dan
669fb18d52
Add checkbox for variable training dims
2023-01-08 02:31:40 +08:00
AUTOMATIC
683287d87f
rework saving training params to file #6372
2023-01-06 08:52:06 +03:00
timntorres
b6bab2f052
Include model in log file. Exclude directory.
2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4
Clean up ti, add same behavior to hypernetwork.
2023-01-05 08:14:38 -08:00
AUTOMATIC1111
eeb1de4388
Merge branch 'master' into gradient-clipping
2023-01-04 19:56:35 +03:00
Vladimir Mandic
192ddc04d6
add job info to modules
2023-01-03 10:34:51 -05:00
AUTOMATIC1111
b12de850ae
Merge pull request #5992 from yuvalabou/F541
...
Fix F541: f-string without any placeholders
2022-12-25 09:16:08 +03:00
Vladimir Mandic
5f1dfbbc95
implement train api
2022-12-24 18:02:22 -05:00
Yuval Aboulafia
3bf5591efe
fix F541 f-string without any placeholders
2022-12-24 21:35:29 +02:00
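Flake8's F541 warning, fixed in the commit above, flags f-strings that contain no `{}` placeholders; the fix is simply dropping the needless `f` prefix. A tiny illustration (the string itself is made up):

```python
msg_bad = f"Loading hypernetwork"  # flagged by flake8 as F541
msg_ok = "Loading hypernetwork"    # fix: plain string literal, no f prefix
# Behavior is identical; only the lint warning goes away.
```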
AUTOMATIC1111
c9a2cfdf2a
Merge branch 'master' into racecond_fix
2022-12-03 10:19:51 +03:00
brkirch
4d5f1691dd
Use devices.autocast instead of torch.autocast
2022-11-30 10:33:42 -05:00
flamelaw
1bd57cc979
last_layer_dropout default to False
2022-11-23 20:21:52 +09:00
flamelaw
d2c97fc3fe
fix dropout, implement train/eval mode
2022-11-23 20:00:00 +09:00
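The "fix dropout, implement train/eval mode" entry above gives the hypernetwork the usual train/eval switch so dropout is active only while training. A self-contained sketch of that behavior, mirroring the `torch.nn.Module.train()`/`.eval()` convention in plain Python (not the actual webui class):

```python
import random

class Dropout:
    """Minimal inverted dropout that only drops units in training mode."""
    def __init__(self, p):
        self.p = p
        self.training = True

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

    def __call__(self, xs):
        if not self.training or self.p == 0.0:
            return list(xs)  # eval mode: identity, no scaling needed
        scale = 1.0 / (1.0 - self.p)  # inverted-dropout rescaling
        return [0.0 if random.random() < self.p else x * scale for x in xs]
```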
flamelaw
89d8ecff09
small fixes
2022-11-23 02:49:01 +09:00
flamelaw
5b57f61ba4
fix pin_memory with different latent sampling method
2022-11-21 10:15:46 +09:00
flamelaw
bd68e35de3
Gradient accumulation, autocast fix, new latent sampling method, etc
2022-11-20 12:35:26 +09:00
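The gradient-accumulation commit above lets training emulate a larger batch by stepping the optimizer only every N micro-batches. A numbers-only sketch of the control flow (scalar "gradients" and a scalar parameter stand in for tensors; not the actual training loop):

```python
def train_with_accumulation(grads, accum_steps, lr=0.1, param=0.0):
    """Accumulate micro-batch gradients and apply one averaged
    optimizer step every `accum_steps` micro-batches."""
    acc, updates = 0.0, []
    for i, g in enumerate(grads, start=1):
        acc += g
        if i % accum_steps == 0:
            param -= lr * (acc / accum_steps)  # average, then step
            updates.append(param)
            acc = 0.0  # reset for the next accumulation window
    return param, updates
```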
AUTOMATIC
cdc8020d13
change StableDiffusionProcessing to internally use sampler name instead of sampler index
2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur
cabd4e3b3b
Merge branch 'master' into gradient-clipping
2022-11-07 22:43:38 +07:00
AUTOMATIC
62e3d71aa7
rework the code to not use the walrus operator because Colab's Python 3.7 does not support it
2022-11-05 17:09:42 +03:00
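The walrus operator (`:=`) used above requires Python 3.8, so the code was reworked for Colab's 3.7. A generic before/after sketch of that kind of rework (the loop shown is illustrative, not the actual changed code):

```python
# 3.8+ walrus form, not usable on Python 3.7:
#     while (chunk := stream.read(size)):
#         chunks.append(chunk)
# 3.7-compatible rework with identical behavior:
import io

def read_chunks(stream, size):
    chunks = []
    chunk = stream.read(size)
    while chunk:
        chunks.append(chunk)
        chunk = stream.read(size)
    return chunks
```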
AUTOMATIC1111
cb84a304f0
Merge pull request #4273 from Omegastick/ordered_hypernetworks
...
Sort hypernetworks list
2022-11-05 16:16:18 +03:00
Muhammad Rizqi Nur
bb832d7725
Simplify grad clip
2022-11-05 11:48:38 +07:00
Isaac Poulton
08feb4c364
Sort straight out of the glob
2022-11-04 20:53:11 +07:00
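"Sort straight out of the glob" reflects that `glob.glob()` returns entries in arbitrary, filesystem-dependent order, so the hypernetwork list must be sorted explicitly at the call site. A sketch under that assumption (directory layout, extension, and case-insensitive key are illustrative):

```python
import glob
import os

def list_hypernetworks(dirname):
    """Enumerate .pt files sorted case-insensitively by filename;
    glob order is not guaranteed, so sort explicitly."""
    paths = glob.glob(os.path.join(dirname, "*.pt"))
    return sorted(paths, key=lambda p: os.path.basename(p).lower())
```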
Muhammad Rizqi Nur
3277f90e93
Merge branch 'master' into gradient-clipping
2022-11-04 18:47:28 +07:00
Isaac Poulton
fd62727893
Sort hypernetworks
2022-11-04 18:34:35 +07:00
Fampai
39541d7725
Fixes race condition in training when VAE is unloaded
...
set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
2022-11-04 04:50:22 -04:00
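The race above: the preview path can call into the VAE concurrently with the training loop moving it off the GPU. One generic way to close such a race is to serialize device moves and use behind a single lock, reloading before use instead of crashing. This is a plain-Python sketch of that pattern only; names and the string "device" states are illustrative, not the webui's actual fix:

```python
import threading

class SharedModel:
    """Guard device moves and use of a shared model with one lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self.device = "cuda"

    def unload(self):
        with self._lock:
            self.device = "cpu"   # training loop parks the model on CPU

    def decode(self):
        with self._lock:
            if self.device != "cuda":
                self.device = "cuda"  # reload before use instead of crashing
            return "decoded on " + self.device
```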
aria1th
1ca0bcd3a7
only save if option is enabled
2022-11-04 16:09:19 +09:00
aria1th
f5d394214d
split before declaring file name
2022-11-04 16:04:03 +09:00
aria1th
283249d239
apply
2022-11-04 15:57:17 +09:00
AUTOMATIC1111
4918eb6ce4
Merge branch 'master' into hn-activation
2022-11-04 09:02:15 +03:00
Muhammad Rizqi Nur
d5ea878b2a
Fix merge conflicts
2022-10-31 13:54:40 +07:00
Muhammad Rizqi Nur
4123be632a
Fix merge conflicts
2022-10-31 13:53:22 +07:00
Muhammad Rizqi Nur
cd4d59c0de
Merge master
2022-10-30 18:57:51 +07:00
AUTOMATIC1111
17a2076f72
Merge pull request #3928 from R-N/validate-before-load
...
Optimize training a little
2022-10-30 09:51:36 +03:00
Muhammad Rizqi Nur
3d58510f21
Fix dataset still being loaded even when training will be skipped
2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur
a07f054c86
Add missing info on hypernetwork/embedding model log
...
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513
Also group the saving into one
2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur
ab05a74ead
Revert "Add cleanup after training"
...
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur
3ce2bfdf95
Add cleanup after training
2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur
ab27c111d0
Add input validations before loading dataset for training
2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur
05e2e40537
Merge branch 'master' into gradient-clipping
2022-10-29 15:04:21 +07:00
timntorres
e98f72be33
Merge branch 'AUTOMATIC1111:master' into 3825-save-hypernet-strength-to-info
2022-10-29 00:31:23 -07:00
AUTOMATIC1111
810e6a407d
Merge pull request #3858 from R-N/log-csv
...
Fix log off by 1 #3847
2022-10-29 07:55:20 +03:00
Muhammad Rizqi Nur
9ceef81f77
Fix log off by 1
2022-10-28 20:48:08 +07:00
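A common shape of the off-by-one fixed above: triggering on `step % n == 0` with 0-based steps logs at 0, n, 2n, ... instead of after every n completed steps. A hedged sketch of the corrected 1-based counting, not the actual patch:

```python
def steps_to_log(total_steps, every_n):
    """Return the 1-based step numbers at which a log entry fires,
    counting a step as logged only after it completes."""
    return [step for step in range(1, total_steps + 1) if step % every_n == 0]
```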
Muhammad Rizqi Nur
16451ca573
Learning rate sched syntax support for grad clipping
2022-10-28 17:16:23 +07:00
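The entry above reuses the webui's learn-rate schedule syntax for gradient-clip values, i.e. comma-separated `value:end_step` segments such as `0.005:100, 1e-3`. A minimal parser sketched under that assumption about the syntax (not the webui's actual `LearnRateScheduler`):

```python
def parse_schedule(spec, max_steps):
    """Parse "value:end_step, value:end_step, ..." into (value, end_step)
    pairs; a bare trailing value runs until max_steps."""
    pairs = []
    for part in spec.split(","):
        part = part.strip()
        if ":" in part:
            value, end = part.split(":")
            pairs.append((float(value), int(end)))
        else:
            pairs.append((float(part), max_steps))
    return pairs
```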