Apple has released an open-source AI model called MLLM-Guided Image Editing (MGIE). It can edit an image based on text instructions provided by a user.
MGIE can make both major and minor changes to images. For example, you can ask it to edit a photo of a pizza to make the dish look healthier, and it will add vegetable toppings such as tomatoes and herbs. You can also ask it to crop, resize, or rotate an image, or to adjust its brightness, contrast, and color balance. The Cupertino-based tech giant has also made MGIE available to try out on Hugging Face Spaces.
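If you would rather experiment programmatically, the sketch below shows one way a Gradio-hosted demo like this can be called from Python using the gradio_client library. The Space identifier, the order of the predict() arguments, and the endpoint name are illustrative assumptions rather than the demo's documented API, so check the Space's "Use via API" panel for the exact call.

```python
# Hypothetical sketch: calling an MGIE-style image-editing demo hosted on
# Hugging Face Spaces via the gradio_client library. The Space ID, argument
# order, and api_name below are assumptions; consult the Space's
# "Use via API" page for the real endpoint and parameters.
from gradio_client import Client, handle_file

# Connect to the (assumed) Space that hosts the demo.
client = Client("username/MGIE-demo")  # placeholder Space ID

# Send a source image plus a natural-language editing instruction.
result = client.predict(
    handle_file("pizza.jpg"),           # image to edit
    "make the pizza look healthier",    # text instruction
    api_name="/predict",                # assumed endpoint name
)

print(result)  # typically a path to the edited image returned by the Space
```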
Apple developed MGIE in collaboration with researchers from the University of California, Santa Barbara. To interpret the user's instructions, the model uses multimodal large language models (MLLMs). There is no word yet on whether the iPhone maker will use this AI model in any of its products. Apple currently lags behind many tech brands, including Google and Microsoft, when it comes to building AI into its products, though the company's CEO, Tim Cook, recently said that it is investing heavily in AI.