Apple has announced a significant update to its artificial intelligence features, branded as Apple Intelligence, which will now include metadata labels for images created or altered by AI. This initiative aligns Apple with other tech giants like OpenAI, Adobe, Google, and Microsoft in promoting transparency around AI-generated content.
Apple Intelligence, the company's new suite of AI tools, allows users to generate new emojis, edit photos, and create images from simple text prompts or uploaded photos. As part of its commitment to transparency, Apple will embed metadata in each AI-created or altered image, indicating its artificial origin. This measure is designed to help users identify AI-generated images easily.
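Apple has not published the exact metadata fields it writes, so the short sketch below is illustrative only, not Apple's documented format: it uses Python's Pillow library to dump an image's EXIF tags and flag any value that mentions AI tooling, which is roughly how a reader or app could check whether a file carries such an edit marker. The file name and the string check are hypothetical.

```python
from PIL import Image, ExifTags

def describe_provenance(path: str) -> None:
    """Print an image's EXIF tags and flag anything that hints at AI editing."""
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            tag_name = ExifTags.TAGS.get(tag_id, hex(tag_id))
            print(f"{tag_name}: {value}")
            # Hypothetical check: Apple has not disclosed the exact field it
            # writes, so we simply flag values that mention AI tooling.
            if isinstance(value, str) and "ai" in value.lower():
                print("  -> possible AI-generation/edit marker")

if __name__ == "__main__":
    describe_provenance("edited_photo.jpg")  # hypothetical file name
```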
Craig Federighi, Apple's Senior Vice President of Software Engineering, discussed the importance of this initiative on a recent podcast hosted by prominent blogger John Gruber. Federighi emphasized that even simple photo edits, such as removing a background object, will be marked in the metadata to show the image has been altered.
"We make sure to mark up the metadata of the generated image to indicate that it's been altered," said Federighi. He further clarified that Apple's technology is not intended for creating realistic images of people or places, but rather for enhancing user creativity and simplifying photo edits.
This move by Apple follows similar efforts by companies like TikTok, OpenAI, Microsoft, and Adobe, which have introduced digital watermarks to help identify AI-created or manipulated content. These initiatives come in response to growing concerns about the misuse of AI to spread misinformation.
Media and information experts warn that despite these efforts, the challenge of AI-generated misinformation is likely to intensify, especially ahead of the 2024 US presidential election. The term "slop" has been coined to describe the low-quality, often misleading content churned out by AI.
The rise of AI tools has democratized content creation, making it easier for people with minimal technical knowledge to produce text, video, and audio. While this democratization has many benefits, it also heightens the risk that misinformation will spread.
Apple's proactive step in labeling AI-generated images is part of a broader industry effort to maintain the integrity of information and help users discern the authenticity of digital content.