Distributed compUting, optimizAtion, and Learning (DUAL) group at USyd

Research

Our research focuses on cutting-edge areas in artificial intelligence, machine learning, and distributed systems. We investigate novel approaches to improve AI efficiency, fairness, and interpretability, with particular emphasis on large language models, federated learning, and edge computing.

Our team explores both theoretical foundations and practical applications, contributing to the advancement of intelligent systems that can operate effectively in real-world, resource-constrained environments.

Highlighted

Distributionally Robust Wireless Semantic Communication with Large AI Models
Long Tan Le, Senura Hansaja Wanasekara, Zerun Niu, Yansong Shi, Nguyen H. Tran, …, Walid Saad, Dusit Niyato, Zhu Han, Choong Seon Hong, H. Vincent Poor
arXiv  ·  01 Jan 2024  ·  doi:10.48550/ARXIV.2506.03167
A distributionally robust approach for wireless semantic communication with large AI models that addresses uncertainty in wireless channels and semantic information transmission, enhancing reliability in AI-powered communication systems.

All

2024

Lossy Compression of Multi-Channel EEG and PPG Signals Based on Golomb-Rice Coding with Parameter Estimation
Senura Hansaja Wanasekara, Han Huy Dung, Ngoc Hung Nguyen, Van-Dinh Nguyen
2024 International Conference on Advanced Technologies for Communications (ATC)  ·  17 Oct 2024  ·  doi:10.1109/atc63255.2024.10908307
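
A minimal, illustrative sketch of the Golomb-Rice coding with parameter estimation named in the title above, in Python. This is a generic version of the technique, not the paper's scheme; the zigzag residual mapping and the log2-of-mean Rice-parameter estimate are assumptions made only for this example.

    import math

    def zigzag(x):
        # Map signed prediction residuals to non-negative integers:
        # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
        return 2 * x if x >= 0 else -2 * x - 1

    def estimate_rice_parameter(values):
        # Common heuristic: pick k from the mean magnitude of the mapped samples.
        mean = sum(values) / max(len(values), 1)
        return max(0, int(math.floor(math.log2(mean + 1))))

    def rice_encode(value, k):
        q = value >> k
        bits = "1" * q + "0"                                   # unary-coded quotient, 0-terminated
        if k > 0:
            bits += format(value & ((1 << k) - 1), f"0{k}b")   # k-bit remainder
        return bits

    # Example: residuals of one EEG/PPG channel after a simple predictor.
    residuals = [3, -2, 0, 5, -1, 2, -4, 1]
    mapped = [zigzag(r) for r in residuals]
    k = estimate_rice_parameter(mapped)
    bitstream = "".join(rice_encode(v, k) for v in mapped)
    print(k, bitstream)
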
Federated PCA on Grassmann Manifold for IoT Anomaly Detection
Tung-Anh Nguyen, Long Tan Le, Tuan Dung Nguyen, Wei Bao, Suranga Seneviratne, Choong Seon Hong, Nguyen H. Tran
arXiv  ·  10 Jul 2024  ·  arxiv:2407.07421
A federated principal component analysis framework on the Grassmann manifold for detecting anomalies in IoT networks. Clients learn low-dimensional representations of normal network behavior from their local traffic and share only subspace information, so anomalies can be flagged across devices while raw data remain on-device.
iREPO: implicit Reward Pairwise Difference based Empirical Preference Optimization
Long Tan Le, Han Shu, Tung-Anh Nguyen, Choong Seon Hong, Nguyen H. Tran
arXiv  ·  24 May 2024  ·  arxiv:2405.15230
iREPO is an empirical preference optimization framework for aligning large language models with human expectations. It regresses implicit reward pairwise differences onto empirically estimated preference labels over pairs of responses, turning alignment into a regression problem on preference data.
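
A rough Python sketch of the "regression on preference pairs" idea. Everything below is an assumption made for illustration: the implicit reward is modeled as a linear score over features (in practice it would come from policy vs. reference log-probabilities), and the preference labels are simulated.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pairs, dim = 256, 8

    # Feature representations of the chosen and rejected response in each pair.
    phi_chosen = rng.normal(size=(n_pairs, dim))
    phi_rejected = rng.normal(size=(n_pairs, dim))

    # Empirical preference labels, e.g. the fraction of annotators preferring
    # the "chosen" response (simulated here).
    pref = rng.uniform(0.5, 1.0, size=n_pairs)

    # Regress the implicit reward pairwise difference onto a target derived from
    # the empirical preferences (a logit transform is one natural choice).
    target = np.log(pref / (1.0 - pref + 1e-8))
    diff = phi_chosen - phi_rejected        # reward difference is linear in the weights
    w, *_ = np.linalg.lstsq(diff, target, rcond=None)

    print("fitted reward weights:", np.round(w, 3))
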
Federated Deep Equilibrium Learning: Harnessing Compact Global Representations to Enhance Personalization
Long Tan Le, Tuan Dung Nguyen, Tung-Anh Nguyen, Choong Seon Hong, Suranga Seneviratne, Wei Bao, Nguyen H. Tran
arXiv  ·  22 May 2024  ·  arxiv:2309.15659
A federated deep equilibrium learning approach that harnesses compact global representations to enhance personalization in federated settings. The method addresses the trade-off between personalization and global knowledge sharing through equilibrium point modeling.
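
For intuition, a minimal Python sketch of the deep-equilibrium idea: a single layer map is iterated to a fixed point z* = f(z*, x), and that equilibrium serves as the compact representation. The tanh map and its scaling are placeholder assumptions, not the paper's model.

    import numpy as np

    def layer(z, x, W, U):
        # One implicit "layer"; kept roughly contractive so iteration converges.
        return np.tanh(W @ z + U @ x)

    def equilibrium(x, W, U, iters=100, tol=1e-6):
        # Solve z = layer(z, x) by plain fixed-point iteration.
        z = np.zeros(W.shape[0])
        for _ in range(iters):
            z_next = layer(z, x, W, U)
            if np.linalg.norm(z_next - z) < tol:
                break
            z = z_next
        return z

    rng = np.random.default_rng(0)
    d_z, d_x = 16, 8
    W = 0.3 * rng.normal(size=(d_z, d_z)) / np.sqrt(d_z)   # small weights -> contractive map
    U = rng.normal(size=(d_z, d_x)) / np.sqrt(d_x)
    z_star = equilibrium(rng.normal(size=d_x), W, U)
    print("equilibrium representation:", np.round(z_star[:4], 3))
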
Distributionally Robust Federated Learning for Mobile Edge Networks
Long Tan Le, Tung-Anh Nguyen, Tuan-Dung Nguyen, Nguyen H. Tran, Nguyen Binh Truong, Phuong L. Vo, Bui Thanh Hung, Tuan Anh Le
Mobile Networks and Applications  ·  01 Jan 2024  ·  doi:10.1007/s11036-024-02316-w
A distributionally robust federated learning framework designed for mobile edge networks that addresses data heterogeneity and distribution shifts in edge computing environments. The method enhances robustness and generalization capabilities in resource-constrained mobile edge settings.
Distributionally Robust Wireless Semantic Communication with Large AI Models
Long Tan Le, Senura Hansaja Wanasekara, Zerun Niu, Yansong Shi, Nguyen H. Tran, …, Walid Saad, Dusit Niyato, Zhu Han, Choong Seon Hong, H. Vincent Poor
arXiv  ·  01 Jan 2024  ·  doi:10.48550/ARXIV.2506.03167
A distributionally robust approach for wireless semantic communication with large AI models that addresses uncertainty in wireless channels and semantic information transmission, enhancing reliability in AI-powered communication systems.
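
A toy Python sketch of the setting: a semantic message is compressed, sent over a noisy channel whose condition is uncertain, and a robust design is judged by its worst-case reconstruction error. The linear encoder/decoder, the AWGN channel, and the discrete set of candidate noise levels are all assumptions for illustration, not the paper's system.

    import numpy as np

    rng = np.random.default_rng(0)
    d_msg, d_code = 32, 8

    # Assumed linear "semantic" encoder/decoder pair (random projection + pseudo-inverse).
    E = rng.normal(size=(d_code, d_msg)) / np.sqrt(d_msg)
    D = np.linalg.pinv(E)

    def transmit(msg, noise_std):
        # Encode, pass through an AWGN channel, then decode.
        received = E @ msg + noise_std * rng.normal(size=d_code)
        return D @ received

    # Channel uncertainty: the true noise level is only known to lie in this set.
    candidate_noise = [0.01, 0.05, 0.2]

    msg = rng.normal(size=d_msg)
    errors = [np.linalg.norm(transmit(msg, s) - msg) for s in candidate_noise]
    print("worst-case reconstruction error:", round(max(errors), 3))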

2023

Federated PCA on Grassmann Manifold for Anomaly Detection in IoT Networks
Tung-Anh Nguyen, Jiayu He, Long Tan Le, Wei Bao, Nguyen H. Tran
arXiv  ·  01 Jan 2023  ·  arxiv:2212.12121
A federated principal component analysis (PCA) framework on the Grassmann manifold for IoT anomaly detection. The method uses privacy-preserving federated learning to aggregate features of normal network behavior across IoT devices, with ADMM-based gradient learning on Grassmann manifolds for fast training and low detection latency.
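
A heavily simplified Python sketch of federated subspace aggregation: each client computes a local principal subspace, and the server combines them via an extrinsic mean of their projection matrices. This replaces the paper's ADMM-based learning with plain averaging, and the client data are synthetic; both are assumptions made only for the sketch.

    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, n_samples, dim, k = 5, 200, 10, 3

    def local_subspace(X, k):
        # Top-k principal subspace of a client's local (normal) traffic features.
        X = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:k].T                              # dim x k orthonormal basis

    # Synthetic "normal behavior" data sharing a common low-dimensional structure.
    basis_true = np.linalg.qr(rng.normal(size=(dim, k)))[0]
    clients = [rng.normal(size=(n_samples, k)) @ basis_true.T
               + 0.1 * rng.normal(size=(n_samples, dim)) for _ in range(n_clients)]

    # Server: average the projectors U U^T and keep the top-k eigenvectors
    # (an extrinsic mean of points on the Grassmann manifold).
    P_mean = sum(U @ U.T for U in (local_subspace(X, k) for X in clients)) / n_clients
    U_global = np.linalg.eigh(P_mean)[1][:, -k:]

    # Anomaly score: reconstruction error outside the aggregated subspace.
    x = rng.normal(size=dim)
    score = np.linalg.norm(x - U_global @ (U_global.T @ x))
    print("anomaly score:", round(float(score), 3))
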
A New Look and Convergence Rate of Federated Multitask Learning With Laplacian Regularization
Canh T. Dinh, Tung T. Vu, Nguyen H. Tran, Minh N. Dao, Hongyu Zhang
IEEE Transactions on Neural Networks and Learning Systems  ·  01 Jan 2023  ·  doi:10.1109/TNNLS.2022.3224252
A federated multi-task learning framework using Laplacian regularization that unifies conventional federated learning, personalized federated learning, and federated multi-task learning. It proposes the FedU algorithm and its decentralized variant dFedU, both with proven convergence rates.
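
The Laplacian-regularized objective at the heart of this formulation can be written (up to notation) as

    \min_{w_1,\dots,w_N} \; \sum_{k=1}^{N} F_k(w_k) \;+\; \frac{\eta}{2} \sum_{k=1}^{N}\sum_{l=1}^{N} a_{kl}\,\lVert w_k - w_l\rVert^2,

where F_k is client k's local loss, a_{kl} encodes how strongly tasks k and l are related, and η controls how much client models are pulled toward each other: η → 0 recovers independent local training, while a large η over a fully connected graph pushes the client models toward a common model, approaching conventional federated learning.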

2022

On the Generalization of Wasserstein Robust Federated Learning
Tung-Anh Nguyen, Tuan Dung Nguyen, Long Tan Le, Canh T. Dinh, Nguyen H. Tran
arXiv  ·  02 Jun 2022  ·  arxiv:2206.01432
A Wasserstein distributionally robust optimization approach (WAFL) for federated learning that addresses non-i.i.d. data distribution problems. The method reformulates robust federated learning as empirical surrogate risk minimization with convergence guarantees, showing better generalization than FedAvg under distribution shifts.
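
In sketch form, the robust objective replaces the usual empirical risk with a worst case over a Wasserstein ball (the radius ρ and notation here are illustrative):

    \min_{w} \; \sup_{Q:\, W(Q,\hat{P}) \le \rho} \; \mathbb{E}_{\xi \sim Q}\big[\ell(w;\xi)\big],

where \hat{P} is the empirical distribution built from clients' data, W is the Wasserstein distance, and ρ bounds the distribution shift the model is guarded against; ρ = 0 recovers standard empirical risk minimization as in FedAvg-style training. WAFL tackles this min-max problem through the empirical surrogate risk reformulation described in the summary above.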

2020

Personalized Federated Learning with Moreau Envelopes
Canh T. Dinh, Nguyen H. Tran, Tuan Dung Nguyen
arXiv  ·  16 Jun 2020  ·  arxiv:2006.08848
A personalized federated learning algorithm (pFedMe) that uses Moreau envelopes as clients' regularized loss functions, addressing statistical heterogeneity by letting each client learn a personalized model while still benefiting from global knowledge.
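
The key construction is the Moreau envelope of each client's loss: client k keeps a personalized model θ_k that is regularized toward a shared reference w (up to notation),

    F_k(w) = \min_{\theta_k} \Big\{ f_k(\theta_k) + \frac{\lambda}{2}\,\lVert \theta_k - w \rVert^2 \Big\},
    \qquad
    \min_{w} \; \frac{1}{N} \sum_{k=1}^{N} F_k(w),

so θ_k adapts to client k's data while the proximal term keeps it anchored to the global model w, with λ trading off personalization against shared knowledge.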

2019

Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation
Canh T. Dinh, Nguyen H. Tran, Minh N. H. Nguyen, Choong Seon Hong, Wei Bao, Albert Y. Zomaya, Vincent Gramoli
arXiv  ·  29 Oct 2019  ·  arxiv:1910.13067
A comprehensive study of federated learning over wireless networks, providing convergence analysis and resource allocation optimization for mobile user equipment (UEs) with heterogeneous data and physical resources.
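
Roughly, the resource-allocation side of such a study trades the energy UEs spend on local computation and transmission against the time a training round takes. A schematic form of that trade-off (the objective and symbols here are simplified assumptions, not taken verbatim from the paper) is

    \min_{f,\,p} \; \sum_{k} E_k(f_k, p_k) \;+\; \kappa\, T(f, p),

where f and p collect the UEs' CPU frequencies and transmit powers, E_k is UE k's energy cost per global round, T is the round completion time, and κ weights training time against energy consumption.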