Dr. Nguyễn Tùng Anh’s research lies at the intersection of federated learning, TinyML, small language models, and optimization for distributed AI. He focuses on communication- and computation-efficient learning and on convex and distributed optimization, with a strong emphasis on privacy-preserving deployment in resource-constrained settings.
He has contributed to federated PCA on Grassmann manifolds for anomaly detection in IoT networks, robust federated learning under distribution shifts, and spatio-temporal data modeling for large-scale systems.
His work has appeared in venues such as IEEE/ACM Transactions on Networking, IEEE INFOCOM, and SIAM SDM, addressing challenges in efficient model personalization, large-scale multivariate time-series anomaly detection, and federated deep equilibrium learning. He has collaborated with researchers from The University of Sydney, Korea University, and other international partners, and his research has been cited more than 60 times since 2020.