Meta’s AI under fire as it’s trained on billions of Facebook & Instagram photos
Meta’s new AI image generator trains on publicly available Instagram and Facebook photos, raising privacy concerns among users.
Not to be outdone by DALL-E and Midjourney, Meta today announced its own standalone AI image generator on the web, called Imagine with Meta. It’s free to use (for now) and generates images based on text prompts.
However, Meta’s image generator hasn’t been well-received by everyone. Privacy concerns have arisen because the AI tool is trained on billions of images from Facebook and Instagram, leading some people to reconsider what they post online.
Meta says it excluded private images
Like many other AI models, Imagine with Meta was trained on publicly available data, including billions of photos posted to Facebook and Instagram.
Although Meta says it excluded private posts shared only with family and friends to respect users’ privacy, the public is still not very happy.
One person voiced their concern on Reddit, stating:
“Funny how Disney has more rights to [pictures] of Mickey Mouse than you have to your own photos.”
While the tool offers unique and creative image generation capabilities, the sheer volume of personal data it is based on raises questions about data security and user control.
Some folks are concerned about whether they should be posting on Facebook and Instagram, with one Reddit user saying:
“This is one of the reasons I haven’t posted a picture of my face to social media in 10+ years.”
Meta’s new image generator is currently available only in the US. It generates four images per prompt, each carrying a visible watermark for increased transparency and traceability.
The company also claims to be adding an invisible watermark to AI-generated images, which will be “resilient to common image manipulations like cropping, resizing, color change (brightness, contrast, etc.), screenshots, image compression, noise, sticker overlays and more.”
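Meta has not published how its invisible watermark works, but a classic approach to robust invisible watermarking is additive spread-spectrum embedding: a keyed, low-amplitude pseudo-random pattern is added to the pixels and later detected by normalized correlation, which is unaffected by brightness/contrast changes and survives mild noise. The sketch below (names, amplitudes, and thresholds are illustrative assumptions, not Meta’s scheme) shows the idea with NumPy:

```python
import numpy as np

# Toy spread-spectrum watermark -- an illustrative sketch only; Meta's
# actual scheme is not public. A keyed pseudo-random pattern is added at
# low amplitude, then detected by normalized correlation.

def embed(image, key, alpha=0.05):
    """Add a keyed pseudo-random pattern at low amplitude (alpha)."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + alpha * pattern

def detect(image, key, threshold=0.08):
    """Check normalized correlation against the keyed pattern."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    centered = image - image.mean()  # drop any brightness offset
    score = (centered * pattern).mean() / (centered.std() * pattern.std())
    return score > threshold

# Stand-in "photo": 64x64 grayscale values in [0, 1].
img = np.random.default_rng(0).uniform(0.0, 1.0, (64, 64))
marked = embed(img, key=42)

print(detect(marked, key=42))              # watermarked image
print(detect(1.3 * marked + 0.1, key=42))  # after brightness/contrast change
print(detect(img, key=42))                 # unmarked original
```

Because the detector uses normalized correlation on a mean-centered image, an affine brightness/contrast change (`1.3 * marked + 0.1`) scales numerator and denominator equally and leaves the score intact, which is one reason this family of watermarks tolerates the kinds of manipulations Meta lists.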