Adobe has trained an AI to detect photoshopped images

Adobe has trained an AI to detect when images have been photoshopped, which could help in the fight against deepfakes.

The software giant partnered with researchers from the University of California on the AI. A convolutional neural network (CNN) was trained to spot changes made to images using Photoshop’s Face-Aware Liquify feature.
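To make the idea concrete, here is a toy sketch (not Adobe's model, whose architecture and training data are not detailed here) of the core building block of a CNN: a convolution layer whose kernel responds to local pixel inconsistencies. Warping tools like Liquify leave subtle local artifacts; a trained network learns kernels that fire on them. In this illustration the kernel is a hand-picked Laplacian high-pass filter rather than a learned one.

```python
# Toy illustration of a convolution layer flagging a local inconsistency.
# The kernel, patch values, and "warped pixel" are all hypothetical.

def conv2d(image, kernel):
    """Valid-mode 2D convolution over a list-of-lists grayscale image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """Standard CNN non-linearity: keep positive responses only."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A smooth 5x5 patch with one pixel bumped out of place,
# standing in for the kind of artifact warping can introduce.
patch = [[10.0] * 5 for _ in range(5)]
patch[2][2] = 40.0

# Laplacian kernel: responds to pixels that differ from their neighbours.
laplacian = [[0, -1,  0],
             [-1, 4, -1],
             [0, -1,  0]]

activation = relu(conv2d(patch, laplacian))
# The response peaks at the tampered pixel and is zero elsewhere.
```

A real detector stacks many such layers with learned kernels and trains them on pairs of original and Liquify-edited faces, but the principle is the same: local filters that light up where pixels no longer fit their surroundings.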

Face-Aware Liquify is a feature designed to adjust and exaggerate facial features; Photoshop automatically detects the face so users can easily tweak individual features as required.

[Image: an example of Face-Aware Liquify in action. © Adobe]

Features like Face-Aware Liquify are relatively harmless when used for purposes like making someone appear happier in a photo or ad. However, such features can also be exploited – for example, making a political opponent appear to express emotions like anger or disgust in a bid to sway voter opinion.

“This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations,” Adobe wrote in a blog post on Friday.

Last week, AI News reported that a deepfake video of Facebook CEO Mark Zuckerberg had gone viral. Zuckerberg appeared to say: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

The deepfake wasn’t intended to be malicious but was rather part of a commissioned art installation by Canny AI designed to highlight the dangers of such fake videos. Other videos on display featured President Trump and Kim Kardashian, individuals with huge amounts of influence.

A month prior to the release of the Zuckerberg deepfake, a video of House Speaker Nancy Pelosi, doctored to make her appear intoxicated, was being spread on Facebook. Facebook was criticised for refusing to remove the video.

Pelosi later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

In another example of the growing dangers of deepfakes, a spy was caught using an AI-generated profile picture on LinkedIn to connect with unsuspecting targets.

Humans are pretty much hardwired to believe what they see with their own eyes; that's what makes deepfakes so dangerous. AIs like the one created by Adobe and the team at the University of California will be vital in helping to counter the deepfake threat.
