This patent describes a novel system and method for image inpainting in which missing or unwanted regions of an input image are filled in using guidance from a separate "guide image." The core innovation lies in using machine learning models, particularly a style-based generative adversarial network (StyleGAN), to combine visual features from the input image and the guide image in a deep latent space. This approach aims to generate inpainted content that is both consistent with the remaining parts of the input image and endowed with desirable visual characteristics from the guide image, offering greater control and improved quality, especially for large or complex missing regions.
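
To make the latent-space combination concrete, the sketch below shows one way such a pipeline could be wired together in PyTorch. It is a minimal illustration under assumptions, not the patent's actual implementation: the `Encoder` and `Generator` classes are toy stand-ins for a real GAN-inversion encoder and a pretrained StyleGAN generator, and the per-layer blending weights (`alpha`) are an assumed heuristic for deciding which visual characteristics are drawn from the guide image.

```python
import torch
import torch.nn as nn

# Shapes follow common StyleGAN conventions: a W+ latent code with one
# 512-dimensional style vector per generator layer.
NUM_LAYERS, LATENT_DIM = 18, 512

class Encoder(nn.Module):
    """Toy stand-in for a GAN-inversion encoder mapping an image to W+."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, NUM_LAYERS * LATENT_DIM),
        )
    def forward(self, image):
        return self.backbone(image).view(-1, NUM_LAYERS, LATENT_DIM)

class Generator(nn.Module):
    """Toy stand-in for a pretrained StyleGAN synthesis network."""
    def __init__(self, size=256):
        super().__init__()
        self.size = size
        self.proj = nn.Linear(NUM_LAYERS * LATENT_DIM, 3 * size * size)
    def forward(self, w):
        out = torch.tanh(self.proj(w.flatten(1)))
        return out.view(-1, 3, self.size, self.size)

def blend_latents(w_input, w_guide, alpha):
    """Mix per-layer latent codes from the input and guide images.

    In StyleGAN's layered latent space, coarse (early) layers govern
    structure and fine (late) layers govern texture and color, so
    per-layer weights choose which traits come from the guide image.
    """
    return alpha * w_guide + (1.0 - alpha) * w_input

def inpaint(input_image, mask, guide_image, encoder, generator):
    """Fill the masked region of input_image using guide-image features."""
    w_in = encoder(input_image)    # latent code of the damaged image
    w_gd = encoder(guide_image)    # latent code of the guide image
    # Illustrative (assumed) weighting: keep structure from the input,
    # pull appearance from the guide by weighting fine layers toward it.
    alpha = torch.linspace(0.0, 1.0, NUM_LAYERS).view(NUM_LAYERS, 1)
    generated = generator(blend_latents(w_in, w_gd, alpha))
    # Composite: keep known pixels, take generated content in the hole.
    return mask * generated + (1.0 - mask) * input_image

if __name__ == "__main__":
    enc, gen = Encoder(), Generator(size=256)
    img, guide = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
    hole = torch.zeros(1, 1, 256, 256)
    hole[..., 64:192, 64:192] = 1.0   # 1 marks the missing region
    print(inpaint(img, hole, guide, enc, gen).shape)  # (1, 3, 256, 256)
```

The point the sketch captures is that blending happens in the generator's layered latent space rather than in pixel space, which is what allows structure from the input image and appearance from the guide image to be controlled separately before compositing the result back into the known pixels.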