On Thursday, Google Photos announced it is adding labels that indicate when a picture has been altered with artificial intelligence (AI) tools. Google will begin writing this information into the picture's metadata so that anyone can quickly determine whether an image was artificially created or modified. In addition to flagging photos altered with generative AI, Google Photos will also note when an image was produced by combining multiple images using non-generative techniques; the latter applies to Pixel-specific features like Add Me and Best Take.
There is no way to stop generative AI from eroding our trust in photographs, but the tech industry has an obligation to at least be as transparent as possible about the use of these tools. To that end, Google has said that Google Photos will begin identifying when a picture was edited with AI starting next week.
The tech giant is embedding this AI-editing information in image metadata in accordance with technical guidelines from the International Press Telecommunications Council (IPTC). Meta and OpenAI, by contrast, have adopted the Coalition for Content Provenance and Authenticity (C2PA) standard.
Google Photos’ engineering director, John Fisher, stated in a blog post that “photos edited with tools like Magic Editor, Magic Eraser, and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they’ve been edited using generative AI.” He added, “We’re going one step further now, displaying this information in the Photos app alongside details like the file name, location, and backup status.”
The company described its new transparency feature in a blog post. Only photos that have been altered with Google Photos’ own AI tools, including Magic Editor and Magic Eraser, will receive these AI labels. The company hasn’t said whether it will also label photos edited with third-party AI tools, though.
Google will now include this information in the photo file’s metadata each time a user edits an image with AI tools inside the app. The advantage is that the information travels with the file, so it remains in place even if the picture is cropped or further edited. It won’t help if someone simply takes a screenshot of the image, however, since a screenshot creates a new file with fresh Exchangeable Image File Format (EXIF) data. The information appears at the bottom of the photo’s details in an “AI info” section, which includes a “Digital Source Type” field indicating whether generative AI or another technique was used to change the image, along with credits to the tool that was used. The “AI info” section appears on the photo details page both on the web and in the app.
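For readers curious about what this metadata looks like under the hood, here is a minimal sketch of how one might inspect a JPEG for the IPTC “Digital Source Type” property that such tools write into the file’s embedded XMP packet. The property name (Iptc4xmpExt:DigitalSourceType) and the example vocabulary values come from the public IPTC NewsCodes; the file name is hypothetical, and this is an illustrative approach using only the Python standard library, not how Google Photos itself reads the data.

```python
# Sketch: look for the IPTC "Digital Source Type" value inside a JPEG's
# embedded XMP packet. Assumes the editing tool wrote standard
# Iptc4xmpExt:DigitalSourceType metadata; "edited_photo.jpg" is hypothetical.
import re
from pathlib import Path
from typing import Optional


def read_digital_source_type(path: str) -> Optional[str]:
    """Return the IPTC DigitalSourceType URI embedded in the image's XMP, if any."""
    data = Path(path).read_bytes()

    # XMP is stored as a plain XML packet inside the file; locate it by its markers.
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None
    xmp = data[start : end + len(b"</x:xmpmeta>")].decode("utf-8", errors="ignore")

    # The property may appear as an XML attribute or as an element value.
    match = re.search(
        r'Iptc4xmpExt:DigitalSourceType\s*(?:=\s*"([^"]+)"|>\s*([^<]+)<)', xmp
    )
    if not match:
        return None
    return (match.group(1) or match.group(2)).strip()


if __name__ == "__main__":
    uri = read_digital_source_type("edited_photo.jpg")  # hypothetical file
    # Example IPTC vocabulary values (suffixes of the returned URI):
    #   trainedAlgorithmicMedia              -> image generated by AI
    #   compositeWithTrainedAlgorithmicMedia -> composite that includes AI elements
    print(uri or "No DigitalSourceType found in XMP metadata")
```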
The label will also provide specific information about photos that have been heavily modified without generative AI, such as those made with the Best Take or Add Me features on compatible Pixel smartphones.
Furthermore, these designations won’t be restricted to generative AI alone. According to Google, it will also indicate when a “photo” incorporates elements from multiple photographs, as happens when users rely on the Pixel’s Add Me and Best Take functions. That is good to see. However, anyone who deliberately sets out to do so can still strip or bypass this metadata.
On further increasing transparency around AI edits, Fisher noted, “This work is not done, and we’ll continue to gather feedback and evaluate additional solutions.”
Until now, end users have had essentially no way to see the metadata attached to Google’s AI features. When I first started causing trouble with Magic Editor’s Reimagine tool, which lets you add AI-generated objects to an image that were never in the original scene, I was concerned that Google Photos offered no clear label indicating the result was created with AI. Both Google and Samsung let you do this with their own AI tools. Apple, meanwhile, will introduce its first image-generation tools in iOS 18.2 but has said it is deliberately avoiding photorealistic material; according to Apple’s Craig Federighi, the company has been “concerned” about AI raising questions over whether images are “indicative of reality.”
As we’ve discussed several times by now, photo editing is nothing new. However, the latest generation of generative AI technologies allows for the creation of convincing fakery with minimal expertise or effort.