Presentation Attack Detection (Liveness / PAD)

Techniques and tests that detect spoofed biometric samples (e.g., masks, replays, synthetics) to ensure the sample is from a live, consenting subject.

Overview

Presentation Attack Detection (PAD) protects biometric systems from spoofs such as printed photos, silicone fingerprints, recorded voices, or AI-generated samples. It’s a cross-cutting layer used with face, voice, fingerprint, iris and other modalities.

How it works

  1. Capture: Sensor or camera acquires the sample.
  2. Signal analysis: Algorithms look for cues inconsistent with live traits (e.g., texture, reflectance, micro-motions, audio artifacts).
  3. Decision & score: The PAD subsystem outputs a score or decision (bona fide vs attack).
  4. Policy: Systems combine the PAD result with biometric matching and business rules to accept, deny, or request step-up verification (a minimal policy sketch follows this list).
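
For concreteness, here is a minimal sketch of the scoring and policy steps above, written in Python. The score range, threshold values, grey zone, and the decide/PolicyDecision names are illustrative assumptions, not any product's API.

    # Minimal sketch of a PAD-aware decision policy (assumed scores in [0, 1],
    # where higher means "more likely bona fide"). Thresholds are illustrative.
    from dataclasses import dataclass


    @dataclass
    class PolicyDecision:
        action: str  # "accept", "deny", or "step_up"
        reason: str


    def decide(pad_score: float, match_score: float,
               pad_threshold: float = 0.80,
               pad_grey_zone: float = 0.60,
               match_threshold: float = 0.90) -> PolicyDecision:
        """Combine PAD and matcher outputs under a simple business-rule policy."""
        if pad_score < pad_grey_zone:
            # Strong evidence of a presentation attack: reject outright.
            return PolicyDecision("deny", "likely presentation attack")
        if pad_score < pad_threshold:
            # Ambiguous liveness evidence: ask for step-up verification,
            # e.g. an active challenge or a different modality.
            return PolicyDecision("step_up", "inconclusive PAD score")
        if match_score < match_threshold:
            return PolicyDecision("deny", "biometric match below threshold")
        return PolicyDecision("accept", "bona fide and matched")


    print(decide(pad_score=0.95, match_score=0.97))  # accept
    print(decide(pad_score=0.70, match_score=0.97))  # step_up

The step-up branch reflects a common pattern: rather than denying outright on an inconclusive liveness score, the system asks for additional evidence before deciding.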

Common use cases

  • Remote onboarding / selfie match
  • Contactless border checks
  • KYC and high-risk transactions
  • Access control and workforce auth

Strengths and limitations

Strengths: Mitigates common spoofs such as prints, replays, and masks; complements rather than replaces biometric matching; can be evaluated with standardized metrics (ISO/IEC 30107-3).
Limitations: Attack types are diverse and keep evolving; new synthetic media can outpace detectors; performance varies with capture environment; strict thresholds increase false rejections of genuine users.
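
To make the threshold trade-off concrete, the short sketch below sweeps a hypothetical PAD threshold over made-up score lists: stricter settings accept fewer attacks but reject more genuine users. All numbers are illustrative assumptions, not measured data.

    # Illustrative only: stricter PAD thresholds reject more attacks but also
    # more genuine users. Scores are invented for demonstration.
    bona_fide_scores = [0.95, 0.91, 0.88, 0.82, 0.76, 0.70]  # live subjects
    attack_scores = [0.65, 0.55, 0.48, 0.40, 0.30, 0.20]     # spoof attempts

    for threshold in (0.5, 0.7, 0.9):
        attacks_accepted = sum(s >= threshold for s in attack_scores)
        genuine_rejected = sum(s < threshold for s in bona_fide_scores)
        print(f"threshold={threshold:.1f}: "
              f"{attacks_accepted}/{len(attack_scores)} attacks accepted, "
              f"{genuine_rejected}/{len(bona_fide_scores)} genuine users rejected")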

Key terms

  • APCER/BPCER: Core PAD error metrics from ISO/IEC 30107-3: the Attack Presentation Classification Error Rate (proportion of attack presentations accepted as bona fide) and the Bona Fide Presentation Classification Error Rate (proportion of bona fide presentations rejected as attacks).
  • PAI (Presentation Attack Instrument): The artifact used to mount the attack, e.g., a printed photo, mask, or replayed recording.
  • Attack potential: Effort/resources required to mount an attack.
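
As a rough illustration of how APCER and BPCER are measured, the sketch below follows the ISO/IEC 30107-3 definitions: APCER is the proportion of attack presentations classified as bona fide, and BPCER is the proportion of bona fide presentations classified as attacks. The standard reports APCER per PAI species; this sketch collapses everything into a single attack set, and the toy data are assumptions.

    def apcer(attack_classified_bona_fide: list) -> float:
        """APCER: fraction of attack presentations wrongly accepted as bona fide.
        Each element is True if the PAD subsystem labelled that attack 'bona fide'."""
        return sum(attack_classified_bona_fide) / len(attack_classified_bona_fide)


    def bpcer(bona_fide_classified_attack: list) -> float:
        """BPCER: fraction of bona fide presentations wrongly rejected as attacks.
        Each element is True if the PAD subsystem labelled that sample 'attack'."""
        return sum(bona_fide_classified_attack) / len(bona_fide_classified_attack)


    # Toy evaluation outcomes (assumed for illustration).
    attack_outcomes = [False, False, True, False]            # 1 of 4 attacks slipped through
    bona_fide_outcomes = [False, True, False, False, False]  # 1 of 5 genuine users rejected

    print(f"APCER = {apcer(attack_outcomes):.2f}")     # 0.25
    print(f"BPCER = {bpcer(bona_fide_outcomes):.2f}")  # 0.20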

References

  • ISO/IEC 30107-3, Information technology - Biometric presentation attack detection - Part 3: Testing and reporting.