Program
Workshop Chairs
Keynote Speaker
The Future of Large Language Models (and AI) is Federated
ABSTRACT: As established scaling laws indicate, future performance improvements of LLMs depend on the amount of compute and data we can leverage. Where will we get the compute and data needed to drive the continued advances in LLMs that the world has now grown to expect? I believe all roads lead to federated learning. Federated and decentralized approaches to machine learning will be how the strongest LLMs (and foundation models more generally) are trained in the relatively near future, and in time, we will see federated learning as one of the core enablers of the entire AI revolution. In this talk, I will describe why the future of AI will be federated, and present early solutions developed by Flower and CaMLSys that address the underlying technical challenges as the world shifts from a centralized data-center mindset to decentralized alternatives that can facilitate the continued scaling of AI capabilities.
Bio: Nic Lane (http://niclane.org) is a full Professor in the Department of Computer Science and Technology, holds a Royal Academy of Engineering Chair in De-centralized AI, and is a Fellow of St. John's College, at the University of Cambridge. Nic also leads the Cambridge Machine Learning Systems Lab (CaMLSys -- http://mlsys.cst.cam.ac.uk/). Alongside his academic appointments, he is the co-founder and Chief Scientific Officer of Flower Labs (https://flower.dev/), a venture-backed AI company (YCW23) behind the Flower framework. Nic has received multiple best paper awards, including ACM/IEEE IPSN 2017 and two from ACM UbiComp (2012 and 2015). In 2018 and 2019, he (and his co-authors) received the ACM SenSys Test-of-Time award and ACM SIGMOBILE Test-of-Time award for pioneering research, performed during his PhD, that devised machine learning algorithms used today on devices like smartphones. Nic was the 2020 ACM SIGMOBILE Rockstar award winner for his contributions to “the understanding of how resource-constrained mobile devices can robustly understand, reason and react to complex user behaviors and environments through new paradigms in learning algorithms and system design.”
Tutorial Speaker

Enabling Resource-Efficient Mobile and Embedded Systems with Cross-Level Optimization
Panel Topic
Panel Chair

Report Title
Panelist
Oral Presentation Session Chair
Three oral presentations of around 6 minutes each
Session Chair
1. Paper: Deep Learning Inference on Heterogeneous Mobile Processors: Potentials and Pitfalls

2. Paper: Robust Control of Quadruped Robots using Reinforcement Learning and Depth Completion Network

3. Paper: Enhancing Physical-Layer Key Generation Accuracy through Deep Learning-Based Hardware Calibration

4. Poster: AdaOper: Energy-efficient and Responsive Concurrent DNN Inference on Mobile Devices

5. Demo: Implementation and Benchmark of Magnetic Tracking on Mobile Platforms

Call For Papers
The emerging field of artificial intelligence of things (AIoT, AI+IoT) is driven by the widespread use of intelligent mobile & embedded infrastructures and the impressive success of deep learning (DL). With the deployment of DL on diverse intelligent embedded infrastructures featuring rich sensors but weak DL computing capabilities, a wide range of AIoT applications has become possible. However, DL models are notoriously resource-intensive. Existing research strives to realize near-/real-time inference on AIoT live data and low-cost training on AIoT datasets atop resource-scarce infrastructures. Accordingly, the accuracy and responsiveness of DL models are bounded by resource availability.

To this end, algorithm-system co-design, which jointly optimizes resource-friendly DL models and model-adaptive system scheduling, improves runtime resource availability and thus pushes the performance boundary set by optimization at any standalone level. The cross-level optimization landscape spans multiple granularities, including sensor data acquisition, DL model compression, computation graphs, operators, memory scheduling, and hardware instructions, in both on-device and distributed networked paradigms. It adaptively scales the cross-level AIoT system from on-device to distributed schemes to achieve a better performance-resource efficiency trade-off. Distributed AIoT devices collaborate on demand for two purposes: on-demand sensing-source association and on-demand computing-resource aggregation. Furthermore, given the dynamic nature of the AIoT context, which includes heterogeneous hardware, agnostic sensing data, varying user-specified performance demands, and resource constraints, this workshop also explores context-aware inter-/intra-device automatic cross-level adaptation.

This workshop serves as an excellent platform for researchers, system developers, and practitioners to explore the design, development, deployment, and operational challenges of adaptive AIoT systems. Contributions in theoretical and practical aspects, including embedded and edge AI systems, on-device machine learning, and methodological issues in AIoT systems, are highly encouraged.


The focus areas include, but are not limited to:

✔ Mobile connectivity and communication paradigms
✔ Novel AIoT applications
✔ Techniques and systems for novel human-machine interactions and experiences
✔ Intelligent data processing in AIoT systems
✔ Adaptive perception and computing in resource-constrained environments
✔ Distributed multi-modal data fusion
✔ Real-time computing and data privacy in AIoT
✔ Personalized AI model evolutions on IoT devices
✔ Computational robustness in dynamic AIoT systems
✔ Energy-efficient designs and strategies in AIoT
✔ AI-based edge resource scheduling
✔ Lightweight LLMs in AIoT
✔ Lightweight Vision Transformer models in AIoT


Workshop papers will be included in the MobiSys proceedings and posted in the ACM Digital Library. The workshop will run as a hybrid event in the time zone of the conference venue (i.e., JST).
Location
Tokyo, Japan
Important Dates
Submission deadline:
April 12, 2024

Notification of acceptance:
April 19, 2024

Camera-ready deadline:
May 1, 2024

Workshop date:
June 7, 2024
Organizing Committee
Workshop Chairs
Technical Program Committee

▪️Fawad Ahmad, Rochester Institute of Technology
▪️Marco Levorato, University of California, Irvine
▪️Po-Han Huang, Meta
▪️Kwame-Lante Wright, Applied Intuition
▪️Kittipat Apicharttrisorn, Nokia Bell Labs
▪️Dong Ma, Delft University of Technology
▪️Shijia Pan, UC Merced
▪️Wan Du, UC Merced
▪️Zimu Zhou, City University of Hong Kong
▪️Edith Ngai, The University of Hong Kong
▪️Guohao Lan, Delft University of Technology
▪️Zhenyu Yan, The Chinese University of Hong Kong
▪️Jia Liu, Nanjing University
▪️Xingda Wei, Shanghai Jiao Tong University
▪️Xiuzhen Guo, Zhejiang University
Submission Instructions
Submissions to this workshop will be reviewed by the workshop TPC to ensure the quality standards of ACM MobiSys workshops. AdaAIoTSys follows a single-blind review process, and all submissions must be original work not under review at any other workshop, conference, or journal.

Latex Template
Submissions must be submitted as a 2-page PDF (as a poster) or a 6-page PDF (as a short paper) in the double-column MobiSys format.

Submission Site
Papers can be submitted at: https://adaaiotsys24.hotcrp.com/.

For questions and further information, please contact:
Dr. Sicong Liu, [email protected]
Dr. Hang Qiu, [email protected]
Main Conference Location
Toranomon Hills, Minato City
