Image & Video Forensics

Multimedia data is an integral component of our daily communication. Smartphones and other low-cost consumer devices allow users to view, capture, and share images, video, and audio in real time and with minimal effort. Captured multimedia content is used for communication, entertainment, or record-keeping.
From a security perspective, these widespread use cases raise questions about the authenticity and origin of controversial content. Our group develops algorithmic tools to answer common questions about multimedia content:

  • has an image or video been edited, and if so, how, where, and to what extent?
  • is it possible to link an image or video to a particular user?
  • what information can be forensically retrieved from degraded multimedia data
    or physical documents?
  • how can multimedia content be actively armored to support post-hoc analysis
    of manipulations or attribution?

An overview of our past and present lines of research:

  • Detection of DeepFakes and Face2Face-manipulated videos
  • Estimation of the direction of incident light
  • Video analysis with noise residuals
  • Estimation of the color of the illuminant
  • Format-based manipulation detection
  • Copy-move forgery detection

Recent publications:

IH&MMSec 2019: Image Forensics from Chroma Subsampling of High-Quality JPEG Images
The JPEG compression format provides a rich source of forensic traces that include quantization artifacts, fingerprints of the container format, and numerical particularities of JPEG compressors. Such a diverse set of cues serves as the basis for a forensic examiner to determine the origin and authenticity of an image. In this work, we present a novel artifact that can be used to fingerprint the JPEG compression library. The artifact arises from chroma subsampling in one of the most popular JPEG implementations. Due to integer rounding, every second column of the compressed chroma channel appears on average slightly brighter than its neighboring columns, which is why we call the artifact a chroma wrinkle. We theoretically derive the chroma wrinkle footprint in DCT domain, and use this footprint to detect chroma wrinkles.
paper
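The rounding mechanism behind the chroma wrinkle can be illustrated with a short numpy sketch. The alternating-bias integer averaging below is modeled after libjpeg's 2x2 chroma downsampler; the image content is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
chroma = rng.integers(0, 256, size=(512, 512), dtype=np.int64)

# libjpeg-style 2x2 chroma downsampling (illustrative sketch): each output
# sample averages a 2x2 block in integer arithmetic, with a bias that
# alternates between 1 and 2 along the row before the right-shift.
blocks = (chroma[0::2, 0::2] + chroma[0::2, 1::2]
          + chroma[1::2, 0::2] + chroma[1::2, 1::2])
bias = np.where(np.arange(blocks.shape[1]) % 2 == 0, 1, 2)
down = (blocks + bias) >> 2

# Isolate the rounding contribution: columns that received bias 2 come out
# on average about 0.25 intensity levels brighter than bias-1 columns.
resid = down - blocks / 4.0
wrinkle = resid[:, 1::2].mean() - resid[:, 0::2].mean()
print(round(wrinkle, 2))  # ~0.25: every second column is slightly brighter
```

The alternating bias is meant to cancel rounding error on average, but it does so by distributing the error unevenly between neighboring columns, which is exactly the periodic trace a forensic detector can pick up.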

DFRWS EU 2019: Forensic Source Identification using JPEG Image Headers: The Case of Smartphones
A common problem in forensic investigations is the identification of the source of a multimedia item, i.e., determining the make, model, or individual device that recorded the media content. In this work, we focus on JPEG header information, which involves metadata like EXIF tags and JPEG parameters. While traditional digital cameras typically had a fixed software stack, which made it straightforward to fingerprint a device, modern mobile devices vary considerably in their software stack over time. We perform a large-scale study of JPEG header information from Apple smartphones to investigate the effect of this development on the possibility to perform source identification. Our analysis shows that identification of the concrete hardware is much harder for smartphones than it is for traditional cameras. However, it is quite feasible to identify the operating system version and selected apps from the software stack.
paper
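The core idea can be sketched in a few lines: a header fingerprint is the set and order of header fields a writer emits. The field names and values below are hypothetical stand-ins; the study itself parses EXIF tags and JPEG compression parameters from real files:

```python
# Sketch of header-based source identification. The headers here are
# hypothetical dictionaries; a real analysis would parse them from files.
def fingerprint(header):
    # The order of fields matters: each JPEG writer emits its tags in a
    # characteristic order, so we keep items in insertion order.
    return tuple(header.items())

cam_a = {"Make": "Apple", "Software": "iOS 12.1",
         "Subsampling": "4:2:0", "QuantTables": "std-75"}
cam_b = {"Make": "Apple", "Software": "iOS 12.1",
         "Subsampling": "4:2:0", "QuantTables": "std-75"}
edited = {"Software": "SomePhotoApp 3.2", "Make": "Apple",
          "Subsampling": "4:4:4", "QuantTables": "custom"}

print(fingerprint(cam_a) == fingerprint(cam_b))   # True: same software stack
print(fingerprint(cam_a) == fingerprint(edited))  # False: the app leaves a mark
```

This also illustrates the paper's caveat: the fingerprint tracks the software stack, not the hardware, so two devices on the same OS version are indistinguishable at this level.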

IVFWS @ IEEE WACV 2019: Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations
High-quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. As a consequence, we ask how difficult it is to expose artificial faces from current generators. To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easy to explain even to non-technical experts. The methods are easy to implement and offer capabilities for rapid adjustment to new manipulation types with little data available. Despite their simplicity, the methods are able to achieve AUC values of up to 0.866.
paper

IEEE ICASSP 2019: Towards Learned Color Representations for Image Splicing Detection
Since the rise of social media, it is an ongoing challenge to devise forensic approaches that are highly robust to common processing operations such as JPEG recompression and downsampling. In this work, we make a first step towards a novel type of cue for image splicing. It is based on the color formation of an image: we assume that the color formation is a joint fingerprint for the camera hardware, the software settings, and the depicted scene. As such, it can be used to locate spliced patches that originally stem from other images. To this end, we train a two-stage classifier on the full set of colors from a Macbeth color chart, and compare two patches for their color consistency. Our preliminary results on a challenging dataset of downsampled images of identical scenes indicate that the color distribution can be a useful forensic tool that is highly resistant to JPEG compression.
paper

GI Sicherheit 2018: Towards Forensic Exploitation of 3-D Lighting Environments in Practice
A well-known physics-based approach in image forensics is to validate the distribution of incident light on objects of interest. Inconsistent lighting environments are considered an indication of image splicing. However, one drawback of this approach is that it is quite challenging to use in practice. In this work, we propose several practical improvements. First, we propose a new way of comparing lighting environments. Second, we present a factorization of the overall error into its individual contributions, which shows that the biggest source of error is incorrect geometric fits. Third, we propose a confidence score that is trained on the results of an actual implementation. The confidence score makes it possible to define an implementation- and problem-specific threshold for the consistency of two lighting environments.
paper

IEEE ICIP 2017: Residual-Based Forensic Comparison of Video Sequences
Video content can be acquired with off-the-shelf hardware and is thus increasingly used to record events. With the growing role of video data in communicating to large audiences, we need tools to ensure the authenticity of video content. However, until now, only a few methods exist to forensically analyze videos. In this work, we propose a method for statistically comparing two video sequences. Per sequence, intra- and inter-frame residuals are computed. Optical flow is used to compensate for motion artifacts in inter-frame residuals. We use one sequence to build a statistical model and compare it to the second sequence. From a forensic perspective, the proposed method enables two applications. First, manipulations can be accurately localized if both sequences are subsequences of the same video. Second, source cameras can be distinguished if both sequences stem from different videos.
paper
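The model-and-compare step can be sketched with synthetic data. The 4-neighbour average below is a crude stand-in for the denoising filter, and Gaussian sensor noise of differing strength stands in for two device-specific noise characteristics; both are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.add.outer(np.linspace(0, 255, 64), np.linspace(0, 255, 64)) / 2

def make_seq(noise_std, n=8):
    # Synthetic "camera": a fixed smooth scene plus sensor noise whose
    # strength models a device-specific noise characteristic.
    return [scene + rng.normal(0, noise_std, scene.shape) for _ in range(n)]

def residuals(frames):
    # Intra-frame residual: frame minus a 4-neighbour average (a crude
    # denoiser); borders are cropped to avoid wrap-around artifacts.
    out = []
    for f in frames:
        den = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        out.append((f - den)[1:-1, 1:-1].ravel())
    return np.concatenate(out)

ref = residuals(make_seq(2.0))               # sequence 1 builds the model
mu, sigma = ref.mean(), ref.std()

def score(frames):
    # Average Gaussian log-likelihood of a sequence under the model
    r = residuals(frames)
    return -0.5 * np.mean(((r - mu) / sigma) ** 2) - np.log(sigma)

same = score(make_seq(2.0))                  # same "camera"
other = score(make_seq(6.0))                 # different noise characteristic
print(same > other)  # True: residual statistics separate the two sources
```

Even this toy version shows the second application from the abstract: sequences from the same source fit the reference model noticeably better than sequences from a differently behaving sensor.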

TyrWS 2017: Illumination Analysis in Physics-based Image Forensics: A Joint Discussion of Illumination Direction and Color
Illumination direction and color are two physics-based forensic cues that are based on the same underlying model. In this work, we discuss these methods in the light of their joint physical model, with a particular focus on the limitations and a qualitative study of failure cases of these methods. Our goal is to provide directions for future research to further reduce the list of constraints that these methods require in order to work. We hope that this eventually broadens the applicability of physics-based methods and spreads their main advantage, namely their stringent models for deviations from the expected image formation.
paper

MTAP 2016: Handling Multiple Materials for Exposure of Digital Forgeries using 2-D Lighting Environments
The distribution of incident light is an important physics-based cue for exposing image manipulations. If an image has been composed from multiple sources, it is likely that the illumination environments of the spliced objects differ. Johnson and Farid introduced a proof-of-principle algorithm for a forensic comparison of lighting environments. However, this baseline approach suffers from relatively strict assumptions that limit its practical applicability. In this work, we address one of the biggest limitations, namely the need to compute a lighting environment from patches of homogeneous material. To compute a lighting environment from multiple-color surfaces, we propose a method that we call "intrinsic contour estimation" (ICE). ICE is able to integrate reflectances from multiple materials into one lighting environment, as long as surfaces of different materials share at least two similar normal vectors. We validate the proposed method in a controlled ground-truth experiment on two datasets, with light from three different directions. These experiments show that using ICE can improve the median estimation error by almost 50%, and the mean error by almost 30%.
paper
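The basic fitting step that ICE builds on can be sketched with a strongly simplified 2-D lighting model: the intensity observed at a contour point with outward normal angle t is modeled as a*cos(t) + b*sin(t) + c, where (a, b) encode the dominant light direction and c an ambient term. This three-coefficient model and all constants are illustrative, not the paper's exact formulation:

```python
import numpy as np

# Simplified 2-D lighting-environment fit (illustrative): recover the
# dominant light direction from contour normals and observed intensities.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)          # normal angles along contours
true = np.array([0.8, 0.3, 0.5])            # light terms a, b and ambient c
I = (true[0] * np.cos(t) + true[1] * np.sin(t) + true[2]
     + rng.normal(0, 0.01, t.size))         # observations with slight noise

# Linear least-squares fit of the three lighting coefficients
A = np.column_stack([np.cos(t), np.sin(t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, I, rcond=None)
angle = np.degrees(np.arctan2(coef[1], coef[0]))
print(round(angle, 1))  # close to 20.6 degrees, the true light direction
```

ICE's contribution is upstream of this fit: by relating contours across materials that share normals, it supplies a consistent set of (t, I) samples even when the object is not made of a single homogeneous material.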

IEEE TIFS 2013: Exposing Digital Image Forgeries by Illumination Color Classification
In this paper, we investigate image splicing. We propose a forgery detection method that exploits subtle inconsistencies in the color of the illumination of images. Our approach is machine-learning-based and requires minimal user interaction. The technique is applicable to images containing two or more people and requires no expert interaction for the tampering decision. To achieve this, we incorporate information from physics-based and statistics-based illuminant estimators on image regions of similar material. From these illuminant estimates, we extract texture- and edge-based features which are then provided to a machine-learning approach for automatic decision-making.
paper
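One of the statistical estimators such methods build on is gray-world: the mean colour of a region, normalised, approximates the illuminant colour, so regions lit by different sources disagree. The rendering model and illuminant values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def render(illuminant, n=5000):
    # Toy diagonal rendering model: random reflectances times illuminant
    reflect = rng.uniform(0, 1, size=(n, 3))
    return reflect * illuminant

def gray_world(region):
    # Gray-world illuminant estimate: the normalised mean region colour
    est = region.mean(axis=0)
    return est / np.linalg.norm(est)

def angle(a, b):
    # Angular error in degrees between two unit-length illuminant estimates
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

daylight = np.array([0.9, 1.0, 0.8])        # hypothetical illuminant colours
tungsten = np.array([1.0, 0.7, 0.4])

e1 = gray_world(render(daylight))           # two regions of the host image
e2 = gray_world(render(daylight))
e3 = gray_world(render(tungsten))           # a region spliced in from elsewhere

print(angle(e1, e2) < angle(e1, e3))  # True: the spliced region stands out
```

The paper goes well beyond this raw comparison, feeding texture- and edge-based features of the illuminant maps into a classifier, but the underlying inconsistency cue is the same.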

IEEE TIFS 2012: An Evaluation of Popular Copy-Move Forgery Detection Approaches
A copy-move forgery is created by copying and pasting content within the same image, and potentially postprocessing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed focusing on different types of postprocessed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various postprocessing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms in a common pipeline. In this paper, we examined the 15 most prominent feature sets. We analyzed the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset, and a software framework for systematic image manipulation. Experiments show that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and ZERNIKE features perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions.
paper
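The common pipeline the evaluation is built on can be condensed into a toy example: extract blockwise features, match by lexicographic sorting, and count shift vectors of matched pairs. Raw pixel values stand in here for the feature sets compared in the paper, and the forgery is a plain, unprocessed copy:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
img[40:56, 40:56] = img[8:24, 8:24]          # simulate a copy-move forgery

# Step 1: blockwise feature extraction (raw pixels as a stand-in feature)
B, step = 8, 4
feats, pos = [], []
for y in range(0, 64 - B + 1, step):
    for x in range(0, 64 - B + 1, step):
        feats.append(img[y:y+B, x:x+B].ravel())
        pos.append((y, x))
feats, pos = np.array(feats), np.array(pos)

# Step 2: matching via lexicographic sorting; duplicates become adjacent
order = np.lexsort(feats.T[::-1])

# Step 3: count the shift vectors of matched pairs
shifts = {}
for a, b in zip(order[:-1], order[1:]):
    if np.array_equal(feats[a], feats[b]):
        d = tuple(int(v) for v in np.abs(pos[a] - pos[b]))
        shifts[d] = shifts.get(d, 0) + 1

best = max(shifts, key=shifts.get)
print(best, shifts[best])  # (32, 32) 9 -- many pairs share one shift vector
```

Robust feature sets replace the raw pixels in step 1 so that matching survives noise, compression, and geometric transforms; the surrounding pipeline stays the same, which is what makes the per-feature comparison in the paper possible.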

DAGM and OAGM Pattern Recognition Symposium 2012: Automated Image Forgery Detection through Classification of JPEG Ghosts
We present a method for automating the detection of so-called JPEG ghosts. JPEG ghosts can be used for discriminating single and double JPEG compression, which is a common cue for image manipulation detection. The JPEG ghost scheme is particularly well-suited for non-technical experts, but the manual search for such ghosts can be both tedious and error-prone. In this paper, we propose a method that automatically and efficiently discriminates single- and double-compressed regions based on the JPEG ghost principle. Experiments show that the detection results are highly competitive with state-of-the-art methods, for both aligned and shifted JPEG grids in double JPEG compression.
paper
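The ghost principle itself is easy to demonstrate in miniature. Plain scalar quantization stands in below for JPEG's DCT-coefficient quantization: re-quantizing with the original step size is lossless, so the difference curve dips at the quality of the first compression. The step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.uniform(0, 255, size=10000)     # stand-in for DCT coefficients

def quantize(x, q):
    # Uniform quantization with step q, the core lossy step of JPEG
    return np.round(x / q) * q

q0 = 8                                       # step of the first compression
once = quantize(coeffs, q0)

# Recompress at a range of candidate steps and measure the difference
diffs = {q: float(np.mean((quantize(once, q) - once) ** 2))
         for q in range(2, 17)}

# Divisors of q0 are also lossless, so report the largest minimizing step
ghost = max(q for q in diffs if diffs[q] == min(diffs.values()))
print(ghost)  # 8: the ghost minimum reveals the original quantization step
```

The paper's contribution is to turn the manual inspection of such difference curves, computed per region and per candidate quality, into an automatic classification.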

IEEE WIFS 2010: On Rotation Invariance in Copy-Move Forgery Detection
The goal of copy-move forgery detection is to find duplicated regions within the same image. Copy-move detection algorithms operate roughly as follows: extract blockwise feature vectors, find similar feature vectors, and select feature pairs that share highly similar shift vectors. This selection plays an important role in the suppression of false matches. However, when the copied region is additionally rotated or scaled, shift vectors are no longer the most appropriate selection technique. In this paper, we present a rotation-invariant selection method, which we call Same Affine Transformation Selection (SATS). It shares the benefits of the shift vectors at only a slightly increased computational cost. As a byproduct, the proposed method explicitly recovers the parameters of the affine transformation applied to the copied region.
paper
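The idea behind SATS can be illustrated with synthetic correspondences: under rotation, matched pairs no longer share one shift vector, but they do share one affine transform, which can be recovered from the matches by least squares. The point set, rotation angle, and translation are illustrative, and the actual SATS grouping strategy is more involved than this direct fit:

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 2))      # matched points in the original
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([50.0, 10.0])     # the copied, rotated region

# Shift vectors disagree once the copy is rotated ...
shifts = dst - src
print(np.allclose(shifts, shifts[0]))        # False

# ... but a single affine transform explains every pair: solve dst = [src 1] @ M
A = np.hstack([src, np.ones((20, 1))])
M, *_ = np.linalg.lstsq(A, dst, rcond=None)
print(np.allclose(M[:2].T, R))               # True, and M recovers the rotation
```

This also shows the byproduct mentioned in the abstract: the fitted matrix M directly contains the rotation and translation applied to the copied region.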

Information Hiding 2010: Scene Illumination as an Indicator of Image Manipulation
We propose illumination color as a new indicator for the assessment of image authenticity. Many images exhibit a combination of multiple illuminants (flash photography, mixture of indoor and outdoor lighting, etc.). In the proposed method, the user selects illuminated areas for further investigation. The illuminant colors are locally estimated, effectively decomposing the scene into a map of differently illuminated regions. Inconsistencies in such a map suggest possible image tampering. Our method is physics-based, which implies that the outcome of the estimation can be further constrained if additional knowledge on the scene is available. Experiments show that these illumination maps provide a useful and very general forensic tool for the analysis of color images.
paper

GI Sicherheit 2010: A Study on Features for the Detection of Copy-Move Forgeries
One of the most popular image forensics methods is the detection of copy-move forgeries. In the past years, more than 15 different algorithms have been proposed for copy-move forgery detection. So far, the efficacy of these approaches has barely been examined. In this paper, we: a) present a common pipeline for copy-move forgery detection, b) perform a comparative study on 10 proposed copy-move features and c) introduce a new benchmark database for copy-move forgery detection. Experiments show that the recently proposed Fourier-Mellin features perform outstandingly if no geometric transformations are applied to the copied region. Furthermore, our experiments strongly support the use of kd-trees for the matching of similar blocks instead of lexicographic sorting.
paper