INTRODUCTION

The way research and business manage and utilize knowledge is undergoing a significant transformation, driven by Artificial Intelligence (AI). Deep learning and machine learning are emerging as powerful tools for optimizing knowledge management systems, enabling more informed and productive development. AI offers a compelling solution for organizations struggling with information overload and inefficient knowledge transfer, and AI models can significantly improve how data is managed and utilized. Imagine an AI-powered system that streamlines onboarding processes, provides precise answers to varied queries, and even captures the valuable tacit knowledge (implicit skills and expertise) that often resides only with individual experts. AI bridges the gap between explicit knowledge (easily documented information) and tacit knowledge, fostering a more comprehensive and accessible knowledge base. However, such AI systems call for trustworthy and responsible solutions that guard against potential misuse and malfunction. In this workshop, we aim to gather researchers and engineers from academia and industry to discuss the latest advances in trustworthy and responsible AI solutions for information and knowledge management systems.

SUBMISSION

Authors are invited to submit original, full-length research papers that have not been previously published, accepted for publication, or submitted for consideration to any other forum. Full-length papers should satisfy the standard requirements of top-tier international research conferences.

Manuscripts should be submitted to the CIKM 2024 EasyChair site in PDF format, using the 2-column ACM sigconf template; see https://www.acm.org/publications/proceedings-template. Full papers may not exceed 9 pages plus unlimited references: the 9-page limit covers all paper content, so any appendix must fit within it, and it is also fine to have no appendix and use all 9 pages for the main text. The review of manuscripts will be double-blind, and submissions that are not properly anonymized will be desk-rejected without review.

Papers that include text generated by a large language model (LLM), such as ChatGPT, are prohibited unless the generated text is presented as part of the paper's experimental analysis. AI tools may be used to edit and polish the authors' own writing, such as using LLMs for light editing of their text (e.g., automated grammar checks, word autocorrect, and other editing of author-written text), but text produced entirely by generative AI models is not allowed.

All papers should be submitted via the EasyChair website: https://easychair.org/my/conference?conf=traicikm2024. At least one author of each accepted paper must register and present the work on-site in Boise, Idaho, USA, as scheduled in the conference program.

Selected high-quality papers will be recommended, after further extension and revision, for publication in high-impact international journals such as IEEE Transactions titles.

We welcome submissions on all aspects of trustworthy and responsible AI for information and knowledge management systems (IKMS), including but not limited to:
  • Theoretical understanding of trustworthy machine learning, such as trustworthy graph learning, trustworthy federated learning, and so on
  • Trustworthy AI-supported knowledge management
  • Trustworthy and responsible AI for search and recommendation
  • Misinformation detection
  • AI ethics and its impacts on knowledge management
  • Reflective applications/demos of trustworthy ML for knowledge management

IMPORTANT DATES

(All deadlines are at 11:59 pm in the Anywhere on Earth timezone.)
Paper submission deadline: August 18th, 2024
Notification to Authors: September 13th, 2024
Conference date: October 25th, 2024

WORKSHOP PROGRAM

INVITED SPEAKERS

Keynote Speaker: Prof. Beiyu Lin

Beiyu Lin is an Assistant Professor in Computer Science at the University of Oklahoma. She constructs computational models based on ambient data to identify people's routine behavior patterns and assess their behavior changes; she also develops algorithms to apply these findings to diverse areas, such as personalized healthcare. The impact of her multi-disciplinary research collaborations has been reported in multiple publications. Beiyu has received several honors and awards, including the Best Applied Data Science Paper Award at the SIAM International Conference on Data Mining 2021 and the People's Choice Award in the Young Professional Poster Competition, IEEE Rising Stars, 2022. Beiyu has also been actively serving the community, and she would like to help the next generation of students become leaders in data mining and education.

ACCEPTED PAPER

Title: Effective and Robust Physical-World Attacks on Deep Learning Face Recognition Systems

Abstract: Deep neural networks (DNNs) are increasingly applied in face recognition (FR) systems. However, recent studies demonstrate that DNNs are susceptible to adversarial examples, which can mislead DNN-based FR systems in the physical world. Current attacks either generate perturbations effective only in the digital world or depend on specialized equipment to produce perturbations that lack robustness in dynamic physical environments. In this paper, we introduce FaceAdv, a physical-world attack employing adversarial stickers to deceive FR systems. FaceAdv comprises a sticker generator and a convertor: the generator crafts stickers of various shapes, while the convertor digitally applies these stickers to human faces and provides feedback to enhance the generator's effectiveness. We conduct extensive evaluations to assess FaceAdv's effectiveness against three typical FR systems (ArcFace, CosFace, and FaceNet). Results demonstrate that FaceAdv significantly improves the success rates of both dodging and impersonating attacks compared to a state-of-the-art attack. Additionally, we perform comprehensive evaluations to confirm FaceAdv's robustness.
Authors: Hao Yu (National University of Defense Technology)

Title: Debiasing Machine Unlearning with Counterfactual Examples

Abstract: The right to be forgotten seeks to safeguard individuals from the enduring effects of their historical actions by implementing machine unlearning techniques. These techniques facilitate the deletion of previously acquired knowledge without requiring model retraining. However, they often overlook the biases introduced during the unlearning process. These biases emerge from two main sources: (1) data-level bias, resulting from uneven data removal, and (2) algorithm-level bias, which leads to the contamination of the remaining dataset, thereby degrading model accuracy. In this work, we take a causal perspective on the machine unlearning process and propose methods to mitigate biases at both the data and algorithmic levels. In addition, we guide the forgetting procedure by leveraging counterfactual examples, as they maintain semantic data consistency without hurting performance on the remaining dataset. Experiments show that our method outperforms existing machine unlearning baselines.
Authors: Ziheng Chen (Walmart Global Tech); Jin Huang (Stony Brook University); Xinyi Li (University of Wisconsin); Lalitesh Morishetti (Walmart Global Tech); Kaushiki Nag (Walmart Global Tech); Jun Zhuang (Boise State University); Fabrizio Silvestri (Sapienza University of Rome); Gabriele Tolomei (Sapienza University of Rome)

Title: Explainable AI in Request-for-Quote

Abstract: In the contemporary financial landscape, accurately predicting the probability of filling a Request-For-Quote (RFQ) is crucial for improving market efficiency for less liquid asset classes. This paper explores the application of explainable AI (XAI) models to forecast the likelihood of RFQ fulfillment. By leveraging advanced algorithms including Logistic Regression, Random Forest, XGBoost and Bayesian Neural Tree, we are able to improve the accuracy of RFQ fill rate predictions and generate the most efficient quote price for market makers. XAI serves as a robust and transparent tool for market participants to navigate the complexities of RFQs with greater precision.
Authors: Qiqin Zhou (Cornell University)

Title: DifCluE: Generating Counterfactual Explanations with Diffusion Autoencoders and modal clustering

Abstract: Generating multiple counterfactual explanations for different modes within a class presents a significant challenge, as these modes are distinct yet converge under the same classification. Diffusion probabilistic models (DPMs) have demonstrated a strong ability to capture the underlying modes of data distributions. In this paper, we harness the power of a Diffusion Autoencoder to generate multiple distinct counterfactual explanations. By clustering in the latent space, we uncover the directions corresponding to the different modes within a class, enabling the generation of diverse and meaningful counterfactuals. We introduce a novel methodology, DifCluE, which consistently identifies these modes and produces more reliable counterfactual explanations. Our experimental results demonstrate that DifCluE outperforms the current state-of-the-art in generating multiple counterfactual explanations, offering a significant advancement in model interpretability.
Authors: Amit Sangroya (TCS Research); Suparshva Jain (TCS Research); Lovekesh Vig (Jawaharlal Nehru University)

Title: Info-CELS: Informative Saliency Map Guided Counterfactual Explanation for Time Series Classification

Abstract: As machine learning models become more prevalent, the importance of making their decisions understandable to humans has risen sharply. This has sparked the development of Explainable Artificial Intelligence (XAI), a field dedicated to ensuring that AI systems are transparent and trustworthy by providing clear, human-friendly explanations for their decisions. Recently, a novel counterfactual explanation model for time series classification, CELS, has been introduced. CELS learns a saliency map for an instance of interest and generates a counterfactual explanation guided by the learned saliency map. While CELS represents the first attempt to exploit learned saliency maps not only to provide intuitive explanations for the decisions made by a time series classifier but also to explore post hoc counterfactual explanations, it sacrifices validity in order to ensure high proximity and sparsity. In this paper, we present an enhanced approach that builds upon CELS to address this low-validity issue. Our proposed method removes mask normalization to provide more informative and valid counterfactual explanations. Through extensive experimentation on datasets from various domains, we demonstrate that our approach outperforms the CELS model, achieving higher validity and producing more informative explanations.
Authors: Peiyu Li (Utah State University); Omar Bahri (Utah State University); Soukaina Filali Boubrahimi (Utah State University); Shah Muhammad Hamdi (Utah State University)

Title: Power Learning: Differentially private embeddings for collaborative learning with tabular data

Abstract: Traditional collaborative learning approaches are based on sharing model weights between clients and a server. However, schemes based on sharing activations offer advantages in resource efficiency. Several differentially private methods for weight sharing have been developed, but no such private mechanisms exist so far for sharing activations. We propose Power-Learning to learn a privacy encoding network such that the final activations it generates are equipped with formal differential privacy guarantees. These privatized activations are then shared with a more powerful server, which learns a post-processing that yields higher accuracy on machine learning tasks. We show that our co-design of collaborative and private learning requires only one round of privatized communication from the resource-constrained client to the well-equipped server, while demanding less compute on the client than traditional methods.
Authors: Kaustubh Ponkshe (Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)); Praneeth Vepakomma (Massachusetts Institute of Technology (MIT), Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI))

ORGANIZATION COMMITTEES

Ye Zhang

University of Pittsburgh

Panfeng Li

University of Michigan

Zikai Lin

University of Michigan, Ann Arbor

Ziyao Liu

Nanyang Technological University

Shenghai Zhong

Beihang University

Zhiyu An

University of California, Merced

Yuning Chen

University of California, Merced

Jingxiao Tian

University of California San Diego

Jiawen Wen

The University of Sydney

Yan Sun

Georgia Institute of Technology

Jinyang Li

University of Michigan

Ziheng Chen

Stony Brook University

Zeyu Cao

Stony Brook University

Jianbing Dong

Nvidia

Jun Wu

Georgia Institute of Technology

Nan Yang

The University of Sydney

Xuesong Ye

Georgia Institute of Technology

Zhixin Lai

Cornell University

Yiqiao Li

University of Technology Sydney

Kexin Wu

Independent Researcher

Yi Liu

Monash University

Kangrui Ruan

Columbia University

Guang Yang

North Carolina State University

Qingwen Zeng

The University of Sydney


WEB CHAIR

Jiawen Wen

The University of Sydney

CONTACT US

Point of Contact:

Huaming Chen, The University of Sydney

Email: huaming.chen@sydney.edu.au