When a user uses model_name.png as a preview image, textual_inversion.py still treats it as an embedding and does not handle the resulting error, so Python throws a NoneType error like the following:
```bash
File "D:\Work\Dev\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 155, in load_from_file
name = data.get('name', name)
AttributeError: 'NoneType' object has no attribute 'get'
```
With just a simple `if data:` check, as shown below, there is no error, nothing breaks, and the module now works fine with the user's preview images.
Old code:
```python
data = extract_image_data_embed(embed_image)
name = data.get('name', name)
```
New code:
```python
data = extract_image_data_embed(embed_image)
if data:
    name = data.get('name', name)
else:
    # if data is None, this is not an embedding, just a preview image
    return
```
Also, since the textual inversion module no longer raises errors, extra networks can now set "model_name.png" as the preview image for embeddings.
On PyTorch 2.0, layer_norm on MPS only accepts float32 inputs. This was fixed shortly after 2.0 was finalized, so the workaround can be applied with an exact version match.
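As an illustration only, here is a minimal sketch of such a workaround, assuming the approach is to monkey-patch `torch.nn.functional.layer_norm` and gate it on an exact version check; the `_layer_norm_float32` wrapper below is hypothetical, not the actual patch:
```python
import torch

# Hypothetical sketch: only apply the workaround when running exactly
# PyTorch 2.0.0 on MPS, since the upstream bug was fixed shortly after 2.0.
if torch.__version__.startswith("2.0.0") and torch.backends.mps.is_available():
    _orig_layer_norm = torch.nn.functional.layer_norm

    def _layer_norm_float32(input, normalized_shape, weight=None, bias=None, eps=1e-5):
        # Cast MPS tensors to float32 for layer_norm, then cast the result back.
        if input.device.type == "mps" and input.dtype != torch.float32:
            result = _orig_layer_norm(
                input.float(),
                normalized_shape,
                weight.float() if weight is not None else None,
                bias.float() if bias is not None else None,
                eps,
            )
            return result.to(input.dtype)
        return _orig_layer_norm(input, normalized_shape, weight, bias, eps)

    torch.nn.functional.layer_norm = _layer_norm_float32
```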
The loopback script did not set the masked content mode to "original" after the first loop, so every loop would re-apply the fill or latent mask, essentially resetting progress each loop.
The desired behavior is to use the mask for the first loop, then continue to iterate on the results of the previous loop.
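A minimal sketch of that behavior, assuming the webui's img2img processing object where `inpainting_fill == 1` means "original"; `run_loopback` below is a hypothetical helper, not the script's actual code:
```python
from modules.processing import process_images

def run_loopback(p, loops):
    # First loop uses whatever masked-content mode the user picked
    # (fill, latent noise, ...); every later loop builds on the previous result.
    last = None
    for i in range(loops):
        processed = process_images(p)
        last = processed.images[0]
        p.init_images = [last]        # feed the previous loop's output back in
        if i == 0:
            p.inpainting_fill = 1     # "original": keep masked content from now on
    return last
```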
Why?
One of the internal calls of `load_file_from_url` imports cv2, which locks the cv2 site-package; this can break (and in our case, does break) extensions' installation of some libraries. The base project should limit its imports of unnecessary libraries during the installation phase whenever possible.
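One way to achieve that, sketched here under the assumption that `load_file_from_url` comes from `basicsr.utils.download_util` (which pulls in cv2 transitively), is to defer the import until a download is actually needed; `fetch_model` is a hypothetical helper for illustration:
```python
# Hypothetical sketch: defer the import so cv2 is only loaded when a download
# actually happens, not while extensions are still installing their requirements.
def fetch_model(url: str, model_dir: str) -> str:
    from basicsr.utils.download_util import load_file_from_url  # deferred import
    return load_file_from_url(url=url, model_dir=model_dir, progress=True)
```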
- Improved user experience. You can now pick the denoising strength of the final loop and one of three curves (see the sketch after this list). Previously you picked a multiplier such as 0.98 or 1.03 to define the change in denoising strength for each loop, and you had to do a lot of math in your head to visualize what was happening. The new UX makes it much easier to understand what's going on and to tweak it.
- For batch sizes over 1, intermediate images are no longer returned. For a batch size of 1, intermediate images from each loop continue to be returned. When more than one image is returned, a grid is also generated. Previously, larger jobs returned a mess of many grids and potentially hundreds of images with no organization; to make large jobs usable, only final images are returned.
- Added support for skipping the current image. Fixed interrupt to cleanly end and return images. Previously these would throw.
- Improved tooltip descriptions
- Fixed some edge cases
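To illustrate the curve-based UX, here is a hedged sketch of how a per-loop denoising strength could be interpolated between the initial and final values; `denoising_strength_for_loop` and its curve formulas are illustrative assumptions, not the script's exact implementation:
```python
import math

def denoising_strength_for_loop(i: int, loops: int, initial: float, final: float,
                                curve: str = "Linear") -> float:
    """Interpolate the denoising strength for loop i (0-based).

    The first loop uses the initial strength, the last loop uses the
    user-picked final strength, and the curve shapes the path in between.
    """
    if loops <= 1:
        return final
    t = i / (loops - 1)                   # progress through the loops, 0..1
    if curve == "Aggressive":             # most of the change happens early
        t = math.sin(t * math.pi * 0.5)
    elif curve == "Lazy":                 # most of the change happens late
        t = 1 - math.cos(t * math.pi * 0.5)
    if initial <= 0:
        return final * t                  # degenerate case; avoid division by zero
    # geometric interpolation keeps strengths positive and smooth
    return initial * (final / initial) ** t
```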