In this fast-paced world, and especially after the pandemic, many of us have realised that a pleasant home to retreat to is priceless and a goal worth pursuing.
Whether you are looking for a Scandinavian, minimalist, or glamorous style for your home, it is not easy to imagine how every single object will fit into a space full of different pieces and colours. For that reason, we usually seek professional help to create those amazing 3D images that help us understand what our future home will look like.
However, these 3D images are expensive, and if our initial idea does not look as good as we expected, getting new images takes more time and money, both of which are scarce nowadays.
In this article, I explore the Stable Diffusion model, starting with a brief explanation of what it is, how it is trained, and what is needed to adapt it for inpainting. Finally, I apply it to a 3D image of my future home, changing the kitchen island and cabinets to a different colour and material.
As always, the code is available on GitHub.
What is it?
Stable Diffusion [1] is a generative AI model released in 2022 by the CompVis group that produces photorealistic images from text and image prompts. It was primarily designed to generate images conditioned on text descriptions, but it can also be used for other tasks such as inpainting or video creation.
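As a minimal sketch of how the model is typically used (not the code from this article's application), the Hugging Face `diffusers` library exposes it as a pipeline. The checkpoint name, prompt, and output filename below are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (v1-4, the original CompVis release)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # a GPU is assumed here

# Illustrative prompt: any text description works
prompt = "a bright Scandinavian kitchen with a white marble island"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("kitchen.png")
```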
Its success comes from the Perceptual Image Compression step, which converts a high-dimensional image into a smaller latent space. This compression allows the model to run on resource-constrained machines, making it accessible to everyone, something that was not possible with the previous state-of-the-art models.
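To make the compression concrete, the sketch below (my own illustration, assuming the VAE shipped with a Stable Diffusion checkpoint and a hypothetical `room.png`) encodes a 512×512 RGB image into its latent representation, which is roughly 48 times smaller than the pixel tensor.

```python
import torch
from torchvision import transforms
from diffusers import AutoencoderKL
from PIL import Image

# Load only the VAE component of a Stable Diffusion checkpoint
vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="vae"
)

# Prepare a 512x512 RGB image scaled to [-1, 1], shape (1, 3, 512, 512)
to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
pixels = to_tensor(Image.open("room.png").convert("RGB")).unsqueeze(0)

# Encode into the latent space: shape (1, 4, 64, 64),
# i.e. 16,384 values instead of 786,432 pixel values
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor

print(pixels.shape, "->", latents.shape)
```

The diffusion process then operates on these small latents rather than on full-resolution pixels, which is what keeps the memory and compute requirements manageable.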
How does it learn?