Title: Effective and Robust Physical-World Attacks on Deep Learning Face Recognition Systems
Abstract: Deep neural networks (DNNs) are increasingly applied in face recognition (FR) systems. However, recent studies demonstrate that DNNs are susceptible to adversarial examples, which can mislead DNN-based FR systems in the physical world. Current attacks either generate perturbations effective only in the digital world or depend on specialized equipment to produce perturbations that lack robustness in dynamic physical environments. In this paper, we introduce FaceAdv, a physical-world attack employing adversarial stickers to deceive FR systems. FaceAdv comprises a sticker generator and a convertor: the generator crafts stickers of various shapes, while the convertor digitally applies these stickers to human faces and provides feedback to enhance the generator's effectiveness. We conduct extensive evaluations to assess FaceAdv's effectiveness against three typical FR systems (ArcFace, CosFace, and FaceNet). Results demonstrate that FaceAdv significantly improves the success rates of both dodging and impersonating attacks compared to a state-of-the-art attack. Additionally, we perform comprehensive evaluations to confirm FaceAdv's robustness.
Authors: Hao Yu (National University of Defense Technology)
Title: Debiasing Machine Unlearning with Counterfactual Examples
Abstract: The right to be forgotten seeks to safeguard individuals from the enduring effects of their historical actions by implementing machine unlearning techniques. These techniques facilitate the deletion of previously acquired knowledge without requiring model retraining. However, they often overlook the biases introduced during the unlearning process. These biases emerge from two main sources: (1) data-level bias, resulting from uneven data removal, and (2) algorithm-level bias, which contaminates the remaining dataset and thereby degrades model accuracy. In this work, we take a causal perspective on the machine unlearning process and propose methods to mitigate biases at both the data and algorithmic levels. In addition, we guide the forgetting procedure by leveraging counterfactual examples, as they maintain semantic data consistency without hurting performance on the remaining dataset. Experiments show that our method outperforms existing machine unlearning baselines.
Authors: Ziheng Chen (Walmart Global Tech); Jin Huang (Stony Brook University); Xinyi Li (University of Wisconsin); Lalitesh Morishetti (Walmart Global Tech); Kaushiki Nag (Walmart Global Tech); Jun Zhuang (Boise State University); Fabrizio Silvestri (Sapienza University of Rome); Gabriele Tolomei (Sapienza University of Rome)
Title: Explainable AI in Request-for-Quote
Abstract: In the contemporary financial landscape, accurately predicting the probability of filling a Request-For-Quote (RFQ) is crucial for improving market efficiency for less liquid asset classes. This paper explores the application of explainable AI (XAI) models to forecast the likelihood of RFQ fulfillment. By leveraging advanced algorithms, including Logistic Regression, Random Forest, XGBoost, and Bayesian Neural Tree, we improve the accuracy of RFQ fill rate predictions and generate the most efficient quote price for market makers. XAI serves as a robust and transparent tool for market participants to navigate the complexities of RFQs with greater precision.
Authors: Qiqin Zhou (Cornell University)
Title: DifCluE: Generating Counterfactual Explanations with Diffusion Autoencoders and modal clustering
Abstract: Generating multiple counterfactual explanations for different modes within a class presents a significant challenge, as these modes are distinct yet converge under the same classification. Diffusion probabilistic models (DPMs) have demonstrated a strong ability to capture the underlying modes of data distributions. In this paper, we harness the power of a Diffusion Autoencoder to generate multiple distinct counterfactual explanations. By clustering in the latent space, we uncover the directions corresponding to the different modes within a class, enabling the generation of diverse and meaningful counterfactuals. We introduce a novel methodology, DifCluE, which consistently identifies these modes and produces more reliable counterfactual explanations. Our experimental results demonstrate that DifCluE outperforms the current state-of-the-art in generating multiple counterfactual explanations, offering a significant advancement in model interpretability.
Authors: Amit Sangroya (TCS Research); Suparshva Jain (TCS Research); Lovekesh Vig (Jawaharlal Nehru University)
Title: Info-CELS: Informative Saliency Map Guided Counterfactual Explanation for Time Series Classification
Abstract: As machine learning models become more prevalent, the importance of making their decisions understandable to humans has risen sharply. This has sparked the development of Explainable Artificial Intelligence (XAI), a field dedicated to ensuring that AI systems are transparent and trustworthy by providing clear, human-friendly explanations for their decisions. Recently, a novel counterfactual explanation model for time series classification, CELS, was introduced. CELS learns a saliency map for the instance of interest and generates a counterfactual explanation guided by that saliency map. While CELS represents the first attempt to exploit learned saliency maps both to provide intuitive explanations for the decisions of time series classifiers and to explore post hoc counterfactual explanations, it sacrifices validity in order to ensure high proximity and sparsity. In this paper, we present an enhanced approach that builds upon CELS to address this low-validity issue. Our proposed method removes mask normalization to provide more informative and valid counterfactual explanations. Through extensive experimentation on datasets from various domains, we demonstrate that our approach outperforms the CELS model, achieving higher validity and producing more informative explanations.
Authors: Peiyu Li (Utah State University); Omar Bahri (Utah State University); Soukaina Filali Boubrahimi (Utah State University); Shah Muhammad Hamdi (Utah State University)
Title: Power Learning: Differentially private embeddings for collaborative learning with tabular data
Abstract: Traditional collaborative learning approaches are based on sharing model weights between clients and a server. However, schemes based on sharing activations offer advantages in resource efficiency. Several differentially private methods have been developed for weight sharing, but no such private mechanisms yet exist for sharing activations. We propose Power-Learning, which learns a privacy-encoding network whose final activations are equipped with formal differential privacy guarantees. These privatized activations are then shared with a more powerful server, which learns a post-processing that yields higher accuracy on machine learning tasks. We show that our co-design of collaborative and private learning requires only one round of privatized communication from the resource-constrained client to the well-equipped server, while requiring less compute on the client than traditional methods.
Authors: Kaustubh Ponkshe (Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)); Praneeth Vepakomma (Massachusetts Institute of Technology (MIT), Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI))