File:Demonstration Of Inpainting And Outpainting Using Stable Diffusion (step 4 Of 4).png
This image illustrates how Stable Diffusion can be used to perform both inpainting and outpainting; it is one of four images, each showing a step of the procedure.
- Procedure/Methodology
All artwork was created using a single NVIDIA RTX 3090. The front-end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111.
- First image: Generation via text prompt
An initial 512x768 image was algorithmically generated with Stable Diffusion via txt2img using the following prompts:
Prompt: busty young girl, art style of artgerm and greg rutkowski
Negative prompt: (((deformed))), [blurry], bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing, two heads, four breasts
Settings: Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 4027103558, Size: 512x768
Then, two passes of the SD upscale script using "Real-ESRGAN 4x plus anime 6B" were run within img2img. The first pass used a tile overlap of 64, a denoising strength of 0.3, 50 sampling steps with Euler a, and a CFG scale of 7. The second pass used a tile overlap of 128, a denoising strength of 0.1, 10 sampling steps with Euler a, and a CFG scale of 7. This produced the initial 2048x3072 image to work with. Unfortunately for her (and fortunately for the purposes of this demonstration), the AI neglected to give the woman one of her arms.
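The arithmetic behind these upscale passes can be sketched in a few lines. The 4x factor and the two tile overlaps come from the description above; the 512-pixel tile size is an assumption (the web UI's default), and the tiling formula is illustrative rather than the SD upscale script's exact logic.

```python
import math

def tile_grid(width, height, tile=512, overlap=64):
    """Tiles per axis when covering an image with overlapping tiles.

    Illustrative only: assumes a 512-px tile size and a simple
    ceil-based cover; the web UI's exact tiling may differ.
    """
    step = tile - overlap  # horizontal/vertical stride between tiles
    cols = max(1, math.ceil((width - overlap) / step))
    rows = max(1, math.ceil((height - overlap) / step))
    return cols, rows

# A 4x upscale of the 512x768 base image yields 2048x3072.
w, h = 512 * 4, 768 * 4
print((w, h))                        # (2048, 3072)
print(tile_grid(w, h, overlap=64))   # first pass, overlap 64
print(tile_grid(w, h, overlap=128))  # second pass, overlap 128
```

The larger overlap in the second pass trades extra tiles for smoother seams between them, which suits its lower denoising strength.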
- Second image: Outpainting
Using the "Outpainting mk2" script within img2img, the bottom of the image was extended by 512 pixels (via two passes, each extending 256 pixels), using 100 sampling steps with Euler a, a denoising strength of 0.8, a CFG scale of 7.5, a mask blur of 4, a fall-off exponent of 1.8, and colour variation set to 0.03. The prompts were identical to those used in the first step. This increased the image's dimensions to 2048x3584 and revealed the woman's midriff, belly button and skirt, which were absent from the original AI-generated image.
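The canvas bookkeeping for these two passes is simple enough to sketch directly. This is only the size arithmetic, not the outpainting itself; the helper name is hypothetical.

```python
def extend_down(size, per_pass, passes):
    """New (width, height) after extending the bottom edge
    `passes` times by `per_pass` pixels each.

    Illustrative helper for the canvas arithmetic only; the
    actual pixel content of each strip is generated by the
    Outpainting mk2 script.
    """
    w, h = size
    return (w, h + per_pass * passes)

# Two 256-px passes on the 2048x3072 image:
print(extend_down((2048, 3072), 256, 2))  # (2048, 3584)
```

Splitting the 512-pixel extension into two 256-pixel passes keeps each generated strip close to the model's training resolution, which tends to produce more coherent continuations than one large strip.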
- Third image: Preparation for inpainting
In GIMP, I drew a very shoddy attempt at a human arm using the standard paintbrush. This provided a guide for the AI model to generate a new arm.
- Final image: Inpainting
Using the inpaint feature for img2img, I drew a mask over the arm drawn in the previous step, along with a portion of the shoulder. The following settings were used for all passes:
- Inpaint masked
- Masked content: original
- Inpaint at full resolution, padding at 256 pixels
- Steps: 80, Sampler: Euler a
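The "inpaint at full resolution" option crops a padded region around the mask, inpaints that region at a higher effective resolution, and pastes the result back. A rough sketch of the crop arithmetic, assuming a simple bounding-box-plus-padding rule (the web UI's exact rule may differ, and the example mask coordinates are hypothetical):

```python
def full_res_crop(mask_bbox, padding, image_size):
    """Crop rectangle for 'inpaint at full resolution':
    the mask's bounding box expanded by `padding` on each
    side, clamped to the image bounds.

    Illustrative sketch only; example coordinates are made up.
    """
    x0, y0, x1, y1 = mask_bbox
    w, h = image_size
    return (max(0, x0 - padding), max(0, y0 - padding),
            min(w, x1 + padding), min(h, y1 + padding))

# e.g. a mask roughly over the arm region of the 2048x3584 image,
# with the 256-px padding used above:
print(full_res_crop((600, 1400, 1100, 2200), 256, (2048, 3584)))
```

Generous padding gives the model surrounding context (the sleeve, shoulder and hair here), which helps the inpainted region blend with the rest of the image.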
An initial pass was run using the following prompts:
Prompt: perfect arm, young woman's arm, (((anterior elbow))), (((inside of elbow))), bent arm, slender arm, realistic arm, wrinkled short sleeve of white blouse, woman's shoulder, brown hair on top of sleeve, (((pale skin))), skin on arm, smooth skin, art style of artgerm and greg rutkowski
Negative prompt: (((torn blouse))), (((torn sleeve))), (((deformed))), [blurry], bad anatomy, disfigured, multiple arms, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing
Settings: CFG scale: 17, Denoising strength: 0.6, Seed: 525737653
This created the arm; a second pass was then run to fine-tune deformations and blemishes around the newly generated arm and along the sleeve. After drawing a new mask over the shoulder, the following prompts were used:
Prompt: brown hair on top of sleeve and arm, wrinkled short sleeve of white blouse, young woman's upper arm beside her chest, woman's shoulder, skin under sleeve, art style of artgerm and greg rutkowski
Negative prompt: (((deformed))), [blurry], bad anatomy, disfigured, multiple arms, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing
Settings: CFG scale: 7, Denoising strength: 0.4, Seed: 653575127
This pass produced the final image.
(Reusing this file)
- Output images
As the creator of the output images, I release this image under the licence displayed within the template below.
- Stable Diffusion AI model
The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" so long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the AI model claim no rights over any image outputs generated.
- Addendum on datasets used to train AI neural networks
Licensing
- You are free:
- to share – to copy, distribute and transmit the work
- to remix – to adapt the work
- Under the following conditions:
- attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.