INTRODUCTION

The Trustworthy Machine Learning towards Advanced Vision Systems workshop aims to bring together researchers and engineers from academia and industry to discuss the latest advances in trustworthy machine learning for advanced vision systems. This half-day workshop will feature regular paper presentations, invited speakers, and technical talks covering the current state of the art, as well as the limitations and future directions of trustworthy machine learning in computer vision, which offers a practical strategy for safeguarding vision systems against adversaries and vulnerabilities from multiple angles.

SUBMISSION

With the advances of deep learning techniques, many computer vision tasks now reach super-human performance. However, adversarial machine learning research demonstrates that such vision systems are not yet as robust as human vision. As a new gamut of technologies, trustworthy machine learning covers the study of both the intended capabilities and the malicious behaviors of machine learning models in adversarial scenarios. The potential vulnerability of ML models to malicious attacks can have severe consequences for safety-critical systems. One of the best-known attack vectors is imperceptible perturbations applied to input images or videos. Without being alarmist, researchers in machine learning and computer vision have a responsibility to preempt attacks and build safeguards, especially when a task is critical for information security or human lives (e.g., autonomous driving systems). We need to deepen our understanding of machine learning in adversarial environments.

While the negative implications of this nascent technology for computer vision have been widely discussed, its positive opportunities remain largely unexplored. The positive impacts of trustworthy machine learning are not limited to boosting the robustness of ML models; they cut across several other domains, including privacy protection, reliability and safety testing, model understanding, and improving generalisation performance in computer vision from different perspectives.

Given both the positive and negative aspects of computer vision systems, steering trustworthy machine learning in the right direction requires a framework that embraces the positives. This workshop aims to bring together researchers and practitioners from a variety of communities (e.g., computer vision, machine learning, and computer security) to synthesize promising ideas and research directions, and to foster and strengthen cross-community collaborations on both theoretical studies and practical applications for advanced vision systems.

All papers should be submitted via the EasyChair website: https://easychair.org/conferences/?conf=tmlavs24.

At least one author of each accepted paper must register to present the work on-site in Hanoi, Vietnam, as scheduled in the conference program.

Accepted papers will be included in the companion proceedings of ACCV 2024. Selected high-quality papers will be recommended, after further extension and revision, for publication in high-impact international journals such as IEEE Transactions venues.

We welcome submissions on all aspects of trustworthy ML for computer vision systems, including but not limited to:
  • Theoretical understanding of trustworthy ML in computer vision systems
  • Adversarial/poisoning attacks against computer vision tasks
  • Adversarial defenses to improve computer vision system robustness
  • Methods of detecting/rejecting adversarial examples in computer vision tasks
  • Benchmarks to reliably evaluate defense strategies
  • Adversarial ML in the real world
  • Representative applications/demos of trustworthy ML for computer vision tasks

IMPORTANT DATES

**(All deadlines are at 11:59 pm in the Anywhere on Earth timezone.)**
Paper submission deadline: August 28th, 2024
Notification to Authors: September 20th, 2024
Registration: TBD
Conference date: October 11th, 2024

WORKSHOP PROGRAM

To be updated

INVITED SPEAKERS

To be updated

ORGANIZATION COMMITTEES

PROGRAM COMMITTEES

To be updated

WEB CHAIR

Jiawen Wen

The University of Sydney

Linghan Huang

The University of Sydney

CONTACT US

Point of Contact:

Huaming Chen, The University of Sydney

Email: huaming.chen@sydney.edu.au