RStudio AI Blog: Deepfake detection challenge from R

A couple of months ago, Amazon, Facebook, Microsoft, and other contributors launched a challenge: telling apart real videos and AI-generated ("fake") ones. We show how to approach this challenge from R.

Turgut Abdullayev, QSS Analytics - Aug. 18, 2020


Working with video datasets, particularly with respect to detecting AI-generated fake objects, is very challenging due to the need for proper frame selection and face detection. To approach this challenge from R, one can make use of the capabilities offered by OpenCV, magick, and keras.

Our approach consists of the following consecutive steps:

  • read all the videos
  • capture and extract images from the videos
  • detect faces from the extracted images
  • crop the faces
  • build an image classification model with Keras
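The final step, the Keras classifier, could be sketched roughly as below. This is a minimal illustration, not the architecture used in the post; the 224 x 224 input size and the layer sizes are hypothetical choices.

```r
library(keras)

# Minimal binary classifier sketch: fake (1) vs. real (0) faces,
# assuming cropped face images resized to a hypothetical 224 x 224.
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(224, 224, 3)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)
```

Training would then proceed with `fit()` on batches of cropped face images labeled real or fake.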

Let’s quickly introduce the non-deep-learning libraries we’re using. OpenCV is a computer vision library that, among many other things, ships pre-trained classifiers for face detection.
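From R, OpenCV's face detector can be reached through the rOpenSci opencv package. The sketch below is an assumption about a typical workflow; the file names are hypothetical.

```r
library(opencv)

# Hypothetical frame extracted from a video.
img <- ocv_read("frame_001.jpg")

# ocv_face() runs OpenCV's built-in face detection
# and returns the image with detected faces marked.
faces <- ocv_face(img)
ocv_write(faces, "faces_001.jpg")
```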

magick, on the other hand, is an open-source image-processing library that will help us read videos and extract useful material from them:

  • Read video files
  • Extract images per second from the video
  • Crop the faces from the images
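These steps can be sketched with magick as follows. The video path and the crop geometry are hypothetical; in practice the crop coordinates would come from the face-detection step.

```r
library(magick)

# Read a (hypothetical) video, extracting one frame per second.
frames <- image_read_video("video_0001.mp4", fps = 1)

# Crop a 224x224 region from the first frame; the +100+50 offset
# stands in for coordinates returned by a face detector.
face <- image_crop(frames[1], "224x224+100+50")
image_write(face, "face_0001.jpg")
```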

Before we go into a detailed explanation, readers should know that there is no need to copy-paste code chunks: at the end of the post you will find a link to a Google Colaboratory notebook with GPU acceleration. This kernel allows everyone to run and reproduce the same results.

