
Paired-Sampling Contrastive Framework for Joint Physical-Digital Face Attack Detection

2508.14980v1

Title#

Paired-Sampling Contrastive Framework for Joint Physical-Digital Face Attack Detection

Abstract#

Modern face recognition systems remain vulnerable to spoofing attempts, including both physical presentation attacks and digital forgeries. Traditionally, these two attack vectors have been handled by separate models, each targeting its own artifacts and modalities. However, maintaining distinct detectors increases system complexity and inference latency and leaves systems exposed to combined attack vectors. We propose the Paired-Sampling Contrastive Framework, a unified training approach that leverages automatically matched pairs of genuine and attack selfies to learn modality-agnostic liveness cues. Evaluated on the 6th Face Anti-Spoofing Challenge Unified Physical-Digital Attack Detection benchmark, our method achieves an average classification error rate (ACER) of 2.10 percent, outperforming prior solutions. The framework is lightweight (4.46 GFLOPs) and trains in under one hour, making it practical for real-world deployment. Code and pretrained models are available at https://github.com/xPONYx/iccv2025_deepfake_challenge.
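The abstract describes training on automatically matched pairs of genuine and attack selfies so that the model learns liveness cues shared across physical and digital attacks, but it does not give the objective itself. A minimal sketch of one plausible formulation, a margin-based contrastive loss that pushes each genuine/attack embedding pair apart, might look like the following (the function name, margin value, and use of L2-normalized embeddings are all assumptions, not details from the paper):

```python
import numpy as np

def paired_contrastive_loss(z_genuine: np.ndarray,
                            z_attack: np.ndarray,
                            margin: float = 1.0) -> float:
    """Margin-based contrastive loss over matched genuine/attack pairs.

    z_genuine, z_attack: (N, D) embedding batches where row i of each
    array comes from the same automatically matched selfie pair.
    Pairs are dissimilar by construction (one live, one spoof), so the
    loss penalizes pairs whose embeddings are closer than the margin.
    """
    # L2-normalize embeddings so distances are scale-invariant
    g = z_genuine / np.linalg.norm(z_genuine, axis=1, keepdims=True)
    a = z_attack / np.linalg.norm(z_attack, axis=1, keepdims=True)
    # Per-pair Euclidean distance between the two normalized embeddings
    d = np.linalg.norm(g - a, axis=1)
    # Hinge: zero loss once a pair is separated by at least `margin`
    return float(np.mean(np.maximum(0.0, margin - d) ** 2))
```

As a sanity check, identical embeddings (distance 0) incur the full squared-margin penalty, while orthogonal unit embeddings (distance √2 > 1) incur none. In a real pipeline this term would be computed on encoder outputs and combined with an ordinary live/spoof classification loss.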

PDF Download#

View Chinese PDF - 2508.14980v1
