Stable Diffusion XL 1.0 requires fewer computing resources than its predecessor, making it more efficient and usable on machines with limited capabilities. Stability AI, the creator of the generative model, reports that the updated version performs significantly better at image generation.
Stability AI has officially released a new version of its generative neural network, Stable Diffusion XL 1.0 (SDXL 1.0). The release is open source and available to everyone. The developers say SDXL 1.0 needs less powerful hardware than the previous version 0.9, which may attract more users. SDXL 1.0 is available on GitHub with all configurations and files, and as a web application on the Clipdrop and DreamStudio platforms.
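Because the weights are openly published, the model can also be run locally. The snippet below is a minimal sketch of generating an image with SDXL 1.0 through the Hugging Face diffusers library; it assumes the stabilityai/stable-diffusion-xl-base-1.0 checkpoint and a CUDA-capable GPU, and the prompt and output filename are placeholders.

```python
# Minimal sketch: text-to-image generation with SDXL 1.0 via diffusers.
# Assumes the stabilityai/stable-diffusion-xl-base-1.0 checkpoint and a CUDA GPU.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # half precision keeps VRAM usage modest
)
pipe.to("cuda")

# Placeholder prompt; SDXL 1.0 generates 1024x1024 images by default.
image = pipe(prompt="a lighthouse at sunset, dramatic lighting, detailed").images[0]
image.save("sdxl_output.png")
```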
In an interview with TechCrunch, Stability AI developer Joe Penna said that the new version of the generative neural network features vibrant colors, accurate color reproduction, improved contrast, more detailed shadows, and better handling of lighting. According to the developers, SDXL 1.0 is the most advanced generative neural network on the market: the model has 3.5 billion parameters and can produce 1-megapixel (1024x1024) images in seconds.
The developers note that the Stable Diffusion XL 0.9 model could already generate high-resolution images, but required powerful computers to run. According to SiliconANGLE, the new SDXL 1.0 runs even on modest systems and still produces acceptable results, making the model more accessible than its competitors. Stability AI representatives also report significant improvements in text rendering: even the best generative networks have long struggled to produce images containing legible text captions or logos.
SDXL 1.0 addresses this problem: the model can now render clear, legible text, making generated images with predominantly textual content far more convincing. The release also adds inpainting, which reconstructs damaged or missing elements of an image, and outpainting, which extends an image beyond its original borders by adding new details. The new version can also handle complex instructions consisting of several separate prompts.
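As a rough illustration of the inpainting workflow described above, the sketch below uses the diffusers AutoPipelineForInpainting class with an SDXL checkpoint; the checkpoint name, image and mask files, and the prompt are assumptions for illustration, not part of the original announcement.

```python
# Sketch of inpainting with an SDXL checkpoint via diffusers.
# Checkpoint, file paths, and prompt are placeholders for illustration.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("photo.png")   # original image with a damaged region
mask_image = load_image("mask.png")    # white pixels mark the area to repaint

result = pipe(
    prompt="restore the missing section of the brick wall",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```

Outpainting follows the same pattern: the source image is padded with empty space and the mask covers the new border region, so the model fills in details beyond the original frame.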