INTRODUCTION

Emerging as a pivotal technology underpinning a wide range of societal activities, such as autonomous transportation and health care, trustworthy and responsible machine learning systems (TRMLS) have become a focus for researchers worldwide. One main goal is to investigate the principles and constraints that must be satisfied for TRMLS to be applied in practice by a broad spectrum of researchers and practitioners. This workshop will focus on the theories, principles, and experiences of developing trustworthy and responsible machine learning systems. It will be the first attempt to gather researchers from a wide range of disciplines who are interested in the emerging and interdisciplinary field of trustworthy and responsible machine learning. The workshop will highlight recent related work and foster an unprecedented opportunity to bridge research gaps across machine learning, security, fairness, privacy, and related topics. It will also reflect on the foundations (theory and application) of trustworthy machine learning and lay out a positive vision for future collaboration and research activities.

SUBMISSION

For the poster session, we invite extended abstracts of at most 6 pages (including references) describing relevant work that is unpublished, recently published, or presented at the main conference, allowing participants to share research ideas related to trustworthy and responsible AI. Abstracts should follow the IJCNN format (cf. the main conference author guidelines). Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Submissions will be reviewed by invited reviewers, and the program committee will select some of them for oral presentation. Concurrent submissions are allowed, but it is the authors' responsibility to verify compliance with other venues' policies. Accepted papers will have the opportunity to be published online as a volume of the CEUR Workshop Proceedings, a well-known outlet for workshop proceedings; authors will be consulted about inclusion before this step.

All papers should be submitted via the CMT website: https://cmt3.research.microsoft.com/TRESAI2023/.

Topics should be related to trustworthy and responsible AI, including but not limited to:

  • Mathematical foundations of trustworthy and responsible AI (e.g., learning techniques, causality, information theory)
  • Trustworthy and responsible AI metrics and their interconnections
  • Theoretical understanding of trustworthy and responsible AI
  • Trustworthy and responsible AI in the real world
  • Trustworthy and responsible AI engineering practice
  • Trustworthy and responsible AI for large foundation models
  • Auditability and accountability
  • Trustworthy human-computer interactions
  • Social acceptance of trustworthy and responsible AI
  • Fairness and bias
  • Risk assessment and risk-aware decision making

IMPORTANT DATES (Anywhere on Earth)

Paper submission deadline: June 1st, 2023
Notification to Authors: June 14th, 2023
Camera-ready Deadline: June 16th, 2023
Workshop date: June 18th, 2023

WORKSHOP PROGRAM

Title: Multi-choice Explanations: A New Cooperative Game Structure for XAI

Abstract: Cooperative game theorists propose the following attractive process: (1) capture the abstract value of each possible coalition of individuals, (2) write down some principles, or axioms, on how to distribute the value (e.g., allocate importance to features or parameters), and then (3) find a set of allocations that satisfy the principles. The Shapley value has received much attention, but it is just one solution concept, satisfying one set of principles, in one class of games. It is popular among game theorists because the axioms, and the class of TU-games, are reasonable in game theory. In AI and ML, we should choose carefully what is reasonable for our own purposes.
In this paper, we highlight solution concepts in the class of multi-choice games (MC-games). These are model-agnostic, and unique with respect to their own sets of axioms, just like the Shapley value. This paper offers a general algorithm for constructing any MC-game framework with polynomial time complexity in the number of parameter levels, and an application of this algorithm that is transparent and can be readily generalised to local explanation frameworks such as SHapley Additive exPlanations (SHAP).
Authors: Daniel Fryer (The University of Queensland); Hien Nguyen (University of Queensland & School of Mathematics and Physics); David Lowing (Université Paris-Saclay); Inga Strumke (NTNU)
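
For readers unfamiliar with the classical solution concept the abstract starts from, here is a minimal sketch that computes exact Shapley values for a toy TU-game by enumerating coalitions (background illustration only, not code from the paper; the glove game below is a standard textbook example):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions.

    players: list of player identifiers.
    value:   function mapping a frozenset coalition S to its worth v(S).
    """
    n = len(players)
    shapley = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Weight = |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(S | {i}) - value(S))
        shapley[i] = phi
    return shapley

# Toy TU-game: a glove game where players 1 and 2 hold left gloves
# and player 3 holds a right glove; a pair of gloves is worth 1.
def v(S):
    return min(len(S & {1, 2}), len(S & {3}))

print(shapley_values([1, 2, 3], v))  # approx {1: 0.167, 2: 0.167, 3: 0.667}
```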

Title: Reliable Emotion Recognition in Conversation: Quantifying and Communicating Uncertainty

Abstract: Emotion recognition in textual conversation (ERTC) is crucial for developing advanced conversational systems that can understand and support users' emotional needs. Despite significant progress in ERTC using deep learning techniques, the subjective nature of emotion and the lack of emotional cues in textual conversations pose challenges in building highly accurate systems. In this paper, we propose an uncertainty-aware approach to ERTC, employing an approximate Bayesian inference method to address the inherent uncertainty in emotion classification. We provide confidence metrics for individual predictions using conformal prediction and the standard error of the mean (SEM), enabling downstream tasks to make informed decisions based on the level of confidence of each prediction. Our approach aims to enhance the reliability and trustworthiness of ERTC systems, paving the way for their wider adoption in real-world applications.
Authors: Samad Roohi (La Trobe University); Richard Skarbez (La Trobe University); Hien Nguyen (University of Queensland & School of Mathematics and Physics)
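
As background for the confidence metrics mentioned in the abstract, here is a minimal split conformal prediction sketch for a generic classifier (an illustration of the general technique, not the paper's implementation; the probabilities below are randomly generated placeholders standing in for a model's softmax outputs):

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n_cal, n_classes) predicted probabilities on a calibration set.
    cal_labels: (n_cal,) true labels for the calibration set.
    test_probs: (n_test, n_classes) predicted probabilities on test points.
    Returns a boolean (n_test, n_classes) matrix: True = class in the set.
    Sets contain the true label with probability >= 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with finite-sample correction.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Include every class whose score does not exceed the threshold.
    return (1.0 - test_probs) <= q

# Hypothetical usage with a 3-class emotion model:
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = rng.integers(0, 3, size=200)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1))
```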

Title: An Intelligent Recommendation Method based on Multi-Interest Network and Responsible Deep Learning

Abstract: Recommender systems have become popular in many Internet communities, as they help users discover interesting items based on their historical behaviors. However, with the explosive growth of data-intensive tasks and online information, cybersecurity risks are growing, and conventional collaborative recommendation algorithms may not meet users' security requirements. Besides, the sparsity and cold-start issues also hinder the performance of conventional recommendation methods. Recently, deep learning has been shown to outperform traditional modelling techniques and can be employed in recommender systems (RSs) to improve user behavior prediction. In light of these challenges and observations, an intelligent recommendation method based on a multi-interest network and responsible deep learning is proposed. It utilizes multi-source behavior information to improve prediction performance, where multi-view preference embeddings, including self-embeddings, interaction-aware embeddings, and neighbor-based embeddings, are combined to model users' interests at a finer granularity. Specifically, two factorization techniques, matrix factorization (MF) and tensor factorization (TF), are applied to mine local and global interactions between users and items for self-embedding learning. Moreover, interaction with context and neighbor-based interest are also considered to improve the modeling of user preferences. In neighbor-based embedding learning, a responsible search scheme is adopted to enable fast similarity search and support privacy preservation. Finally, a DNN-based prediction mechanism is adopted for embedding aggregation and final prediction. Extensive experiments on real-world datasets show that our proposal achieves decent prediction performance while addressing security concerns, compared with state-of-the-art baselines.
Authors: Shunmei Meng (Nanjing University of Science and Technology)*; Xiao Liu (Nanjing University of Science and Technology); Xuyun Zhang (Macquarie University)
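
For reference, the matrix factorization component mentioned in the abstract can be illustrated with a minimal SGD-based sketch (a generic textbook baseline, not the paper's multi-interest model; the rating matrix and hyperparameters are arbitrary):

```python
import numpy as np

def matrix_factorization(R, k=8, lr=0.01, reg=0.02, epochs=200, seed=0):
    """Factorize a rating matrix R ~= U @ V.T by SGD on observed entries.

    R: (n_users, n_items) array with 0 marking unobserved entries.
    Returns user factors U and item factors V (the "self-embeddings").
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    users, items = np.nonzero(R)  # indices of observed ratings
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - U[u] @ V[i]
            # Gradient step with L2 regularization on both factor vectors.
            grad_u = err * V[i] - reg * U[u]
            grad_v = err * U[u] - reg * V[i]
            U[u] += lr * grad_u
            V[i] += lr * grad_v
    return U, V

# Tiny example: 4 users x 5 items, 0 = unrated.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 1],
              [1, 1, 0, 5, 0],
              [0, 1, 5, 4, 0]], dtype=float)
U, V = matrix_factorization(R)
print(np.round(U @ V.T, 1))  # dense matrix of predicted ratings
```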

Title: Enhancing Federated Learning Robustness in Adversarial Environment Through Clustering Non-IID Features

Abstract: Federated Learning (FL) enables many clients to train a joint model without sharing the raw data. While many Byzantine-robust FL methods have been proposed, FL remains vulnerable to security attacks such as poisoning attacks and evasion attacks due to its distributed adversarial environment. Additionally, real-world training data used in FL are usually Non-Independent and Identically Distributed (Non-IID), which further weakens the robustness of existing FL methods (such as Krum, Median, Trimmed-Mean, etc.), making it possible for a global model in FL to be broken in extreme Non-IID scenarios. In this work, we mitigate the aforementioned weaknesses of existing FL methods in Non-IID and adversarial scenarios by proposing a new FL framework called Mini Federated Learning (Mini-FL). Mini-FL follows the general FL approach but considers the Non-IID sources of FL and aggregates the gradients by groups. Specifically, Mini-FL first performs unsupervised learning on the received gradients to define the grouping policy. Then, the server divides the received gradients into different groups according to this policy and performs Byzantine-robust aggregation within each group. Finally, the server calculates the weighted mean of the gradients from each group to update the global model. Owing to its strong generality, Mini-FL can utilize most existing Byzantine-robust methods. We demonstrate that Mini-FL effectively enhances FL robustness and achieves greater global accuracy than existing FL methods under security attacks and in Non-IID settings.
Authors: Yanli Li (The University of Sydney)
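
The group-then-aggregate pipeline the abstract describes can be sketched roughly as follows, assuming k-means for the unsupervised grouping step and the coordinate-wise median as the per-group Byzantine-robust rule (both are placeholders; the paper defines its own grouping policy and supports other robust aggregators):

```python
import numpy as np
from sklearn.cluster import KMeans

def grouped_robust_aggregate(grads, n_groups=2, seed=0):
    """Group client gradients, aggregate robustly per group, then combine.

    grads: (n_clients, dim) array of flattened client gradients.
    Step 1: unsupervised grouping (here: k-means) defines the grouping policy.
    Step 2: coordinate-wise median per group gives a robust group estimate.
    Step 3: size-weighted mean over group estimates updates the global model.
    """
    labels = KMeans(n_clusters=n_groups, n_init=10,
                    random_state=seed).fit_predict(grads)
    estimates, sizes = [], []
    for g in range(n_groups):
        members = grads[labels == g]
        if len(members):
            estimates.append(np.median(members, axis=0))
            sizes.append(len(members))
    return np.average(np.stack(estimates), axis=0, weights=sizes)

# Toy run: two Non-IID client populations plus two attackers whose inflated
# gradients land in group A, where the per-group median suppresses them.
rng = np.random.default_rng(1)
group_a = rng.normal(1.0, 0.1, size=(9, 4))
group_b = rng.normal(-1.0, 0.1, size=(9, 4))
attackers = np.full((2, 4), 3.0)
grads = np.vstack([group_a, group_b, attackers])
print("naive mean:    ", grads.mean(axis=0))  # pulled toward the attackers
print("grouped robust:", grouped_robust_aggregate(grads))
```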

ORGANIZING COMMITTEES

Volunteers / Student Organizers

CONTACT US

Point of Contact:

Huaming Chen

The University of Sydney

Email: huaming.chen@sydney.edu.au