In response to concerns about deepfakes and digital manipulation, Google Photos is reportedly exploring a feature that could flag AI-generated or AI-edited photographs.
Google Photos is reportedly implementing a new feature that will let users verify whether an image was created or modified with artificial intelligence (AI). According to the report, new ID resource tags are being added to the photo and video storage and sharing service that would disclose an image’s AI information and digital source type. Google is presumably developing this capability in an effort to curb the spread of deepfakes, though it is not yet clear how the information will be displayed to users.
Deepfakes have emerged in recent years as a new form of digital manipulation: images, videos, audio clips, and similar media that are either altered through various methods or digitally created with AI in order to spread misinformation or deceive people. Actor Amitabh Bachchan, for example, recently sued a business owner over deepfake video advertisements in which the star appeared to endorse the company’s products.
According to a report by Android Authority, the Google Photos app will soon include tags that indicate whether a picture in a user’s gallery was AI-generated or digitally altered, in an effort to stop the proliferation of deepfakes. The capability was spotted in version 7.3, the most recent release of the app, but it is not yet active, so even users running the latest version cannot see it.
The publication discovered new XML code strings referring to this development within the app’s layout files. These are ID resources: identifiers linked to a particular resource or element inside the application. One of them reportedly contained the string “ai_info,” which is thought to refer to information added to a photograph’s metadata. If an image was created with an AI tool that complies with transparency protocols, this field would be expected to be labelled accordingly.
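For readers unfamiliar with Android teardowns, here is a purely illustrative Kotlin sketch of how an ID resource like the reported “ai_info” is typically referenced in app code. Google’s actual implementation has not been published; the layout name, view type, and label text below are all assumptions.

```kotlin
// Purely illustrative: referencing an ID resource such as "ai_info".
// Only the identifier name comes from the APK teardown; everything
// else here (layout, view type, label text) is hypothetical.
import android.app.Activity
import android.os.Bundle
import android.widget.TextView

class PhotoDetailsActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_photo_details) // hypothetical layout

        // R.id.ai_info resolves to the element declared in the layout XML;
        // the app could populate it from the image's metadata.
        val aiInfoLabel = findViewById<TextView>(R.id.ai_info)
        aiInfoLabel.text = "Made with Google AI" // hypothetical label text
    }
}
```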
As of right now, however, it is unclear how Google intends to present this data. Embedding it in the Exchangeable Image File Format (EXIF) data contained within the image would make it harder to tamper with, but the drawback is that users would have to open the metadata page to view the information. Alternatively, the app could add an on-image badge to identify AI photographs, as Meta does on Instagram.
Google already applies a “Made with Google AI” credit tag to photographs created with Google Gemini, which can be seen in the image’s EXIF data. Images altered with Google Photos’ AI-powered Magic Editor feature are labelled “AI-Generated with Google.”
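As a minimal sketch of what reading such a credit could look like, the Kotlin snippet below uses the androidx ExifInterface library to scan a few common EXIF text fields for the labels mentioned above. The report does not say which field Google actually writes the credit to, so the fields checked here are assumptions.

```kotlin
// A minimal sketch, not Google's implementation: look through a few
// common EXIF text fields for an AI-provenance credit string.
// Which field Google actually uses is an assumption here.
import androidx.exifinterface.media.ExifInterface

fun findAiCredit(imagePath: String): String? {
    val exif = ExifInterface(imagePath)
    val candidateTags = listOf(
        ExifInterface.TAG_SOFTWARE,          // often names the creating tool
        ExifInterface.TAG_IMAGE_DESCRIPTION, // free-form description field
        ExifInterface.TAG_COPYRIGHT,         // sometimes carries credit text
    )
    return candidateTags
        .mapNotNull { tag -> exif.getAttribute(tag) }
        .firstOrNull { value ->
            value.contains("Made with Google AI") ||
            value.contains("AI-Generated with Google")
        }
}
```

A lookup like this is exactly the friction the report points out: metadata-only labels require the user (or an app) to go digging, whereas an on-image badge would be visible at a glance.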