Midjourney has updated its popular image-generation neural network by adding Inpainting (Vary Region). Users can now modify specific regions of an image to get more accurate results. The feature has been one of the most anticipated and was previously introduced by competitors such as DALL-E and Stable Diffusion.
Inpainting lets you fine-tune a generation for a specific task. For example, if a generated image shows five cheerful people, Inpainting can make one of them sad: simply select that person's face and enter the appropriate prompt. Previously, users struggled to convey such an idea to the neural network, because even a slight change in the prompt could radically alter the whole image. The new Inpainting feature removes that limitation.
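Conceptually, inpainting regenerates only the pixels inside the selected mask and keeps the rest of the image untouched. The sketch below illustrates that idea with toy numpy arrays; it is not Midjourney's actual pipeline, and the function name and arrays are purely illustrative.

```python
import numpy as np

def inpaint_blend(original, generated, mask):
    """Combine two images: where mask == 1 (the user's selection),
    take the newly generated pixels; elsewhere keep the original."""
    mask = mask[..., np.newaxis]  # broadcast the 2-D mask over RGB channels
    return np.where(mask == 1, generated, original)

# Toy 2x2 RGB images: the top row is selected for regeneration.
original = np.zeros((2, 2, 3), dtype=np.uint8)        # all black
generated = np.full((2, 2, 3), 255, dtype=np.uint8)   # all white
mask = np.array([[1, 1],
                 [0, 0]])

result = inpaint_blend(original, generated, mask)
# Top row comes from `generated`, bottom row stays as `original`.
```

This is why a small prompt change no longer alters the whole picture: everything outside the mask is carried over unchanged.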
To use the new feature, first enable Remix mode in the neural network's settings: enter the /settings command in a chat with the bot and select the corresponding menu item. Without this mode enabled, Vary (Region) will regenerate the selected zone on its own, without letting the user change the prompt.
Next, create the original image to be modified. Several buttons will appear under the result; select "Vary (Region)".
In the window that opens, manually select the area to be regenerated, either with the usual rectangular selection or with the lasso tool. Then enter your request in the field that appears below, for example "Plane" or "Smile". Note that particularly complex queries that the neural network may not understand in this mode can produce a poorly blended result. After you click "Submit", the bot will return four variants of the new picture, combining the previous generation with the requested changes.
In practice, to achieve more accurate results with minimal artifacts, select an area slightly larger than the object you want to change. For example, if the goal is to add a smile to a sad face, select the character's entire head, not just the mouth. The same applies to inanimate objects.
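The "select a slightly larger area" advice amounts to growing the mask by a margin before regeneration, which gives the model room to blend the new content into its surroundings. In Midjourney you do this by hand with the selection tool; the sketch below just shows the equivalent operation (a simple binary-mask dilation) on a toy numpy grid, with made-up sizes.

```python
import numpy as np

def dilate_mask(mask, margin=1):
    """Grow a binary mask by `margin` pixels in every direction
    by OR-ing together all shifted copies of the mask."""
    h, w = mask.shape
    padded = np.pad(mask, margin)
    out = np.zeros_like(mask)
    for dy in range(-margin, margin + 1):
        for dx in range(-margin, margin + 1):
            out |= padded[margin + dy : margin + dy + h,
                          margin + dx : margin + dx + w]
    return out

mask = np.zeros((5, 5), dtype=int)
mask[2, 2] = 1                       # only the "mouth" pixel is selected
bigger = dilate_mask(mask, margin=1) # selection now covers a 3x3 "head" block
```

A one-pixel margin here turns a single selected pixel into a 3x3 block, mirroring the recommendation to select the whole head rather than just the mouth.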
Another important difference between Inpainting and Adobe Firefly (the generative fill in Photoshop) is that Inpainting struggles with objects in the background. In the generation shown above, about five attempts were made, yet the result still differed noticeably from the original image. Foreground objects, by contrast, are modified with more realism and attention to detail, which is worth keeping in mind when formulating a query to the neural network.
Ailib neural network catalog. All information is taken from public sources.