In recent years, breakthroughs in deep learning have transformed how sensor data (e.g., images, audio, and even accelerometer and GPS readings) can be interpreted to extract the high-level information needed by bleeding-edge sensor-driven systems like smartphone apps, wearable devices, and driverless cars. Today, state-of-the-art computational models that, for example, recognize a face, track user emotions, or monitor physical activities are increasingly based on deep learning principles and algorithms. Unfortunately, deep models typically place severe demands on local device resources, which conventionally limits their adoption within mobile and embedded platforms. As a result, in far too many cases existing systems process sensor data with machine learning methods that were superseded by deep learning years ago.
Because the robustness and quality of sensory perception and reasoning are so critical to mobile computing, it is essential for this community to begin the careful study of two core technical questions. First, how should deep learning principles and algorithms be applied to the sensor inference problems that are central to this class of computing? This includes applications of learning familiar from other domains (such as image and audio processing) as well as those more uniquely tied to wearable and mobile systems (e.g., activity recognition and distributed federated learning). Second, what is required for current -- and future -- deep learning innovations to be either simplified or efficiently integrated into a variety of resource-constrained mobile systems? This spans from efficiency-boosting techniques for existing models, up to the design of resource-efficient deep architectures, and down to novel hardware design for mobile processors that optimizes the deployment of deep learning workloads. At heart, this MobiCom 2020 co-located workshop aims to consider these two broad themes; this year we will also focus on the emerging area of Federated Learning with associated talks and research papers (illustrative sketches of federated averaging and low-precision quantization follow the topic list below). More specific topics of interest include, but are not limited to:
- Resource-efficient Federated Learning
- Compression of Deep Model Architectures
- Neural-based Approaches for Modeling User Activities and Behavior
- Quantized and Low-precision Neural Networks (including Binary Networks)
- Mobile Vision supported by Convolutional and Deep Networks
- Optimizing Commodity Processors (GPUs, DSPs, etc.) for Deep Models
- Audio Analysis and Understanding through Recurrent and Deep Architectures
- Hardware Accelerators for Deep Neural Networks
- Distributed Deep Model Training Approaches
- Applications of Deep Neural Networks with Real-time Requirements
- Deep Models of Speech and Dialog Interaction on Mobile Devices
- Partitioned Networks for Improved Cloud- and Processor-Offloading
- OS Support for Resource Management at Inference Time
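
To give a flavor of this year's federated learning focus, below is a minimal sketch of federated averaging (FedAvg) on a toy linear-regression task. It is purely illustrative: the constants and the synthetic per-client data are our own assumptions, not anything prescribed by the workshop.

```python
import numpy as np

# Illustrative FedAvg sketch: each client runs a few local SGD steps on its
# private data, then the server averages the resulting models.
rng = np.random.default_rng(0)
n_clients, n_features, lr, rounds, local_steps = 4, 8, 0.1, 20, 5

# Hypothetical synthetic per-client datasets drawn from one ground-truth model.
w_true = rng.normal(size=n_features)
client_data = []
for _ in range(n_clients):
    X = rng.normal(size=(32, n_features))
    y = X @ w_true + 0.01 * rng.normal(size=32)
    client_data.append((X, y))

w_global = np.zeros(n_features)
for _ in range(rounds):
    local_models = []
    for X, y in client_data:
        w = w_global.copy()
        for _ in range(local_steps):  # local training stays on-device
            grad = 2.0 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        local_models.append(w)  # only model updates leave the device
    w_global = np.mean(local_models, axis=0)  # server-side averaging

print("distance to ground truth:", np.linalg.norm(w_global - w_true))
```

The key property the sketch captures is that raw sensor data never leaves the device; only model parameters are communicated and aggregated.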
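Similarly, for the quantized and low-precision networks topic, here is a minimal sketch of int8 post-training quantization, assuming symmetric per-tensor scaling; this is one common way to shrink a model's memory footprint roughly 4x relative to float32. The function names are hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: one float scale for the tensor."""
    scale = max(float(np.abs(w).max()) / 127.0, 1e-12)  # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Toy weight tensor standing in for one layer of a deep model.
w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(dequantize_int8(q, scale) - w).max())
print(f"float32 -> int8 (4x smaller), max abs error {err:.4f}")
```

Production schemes typically go further (per-channel scales, calibration data, quantization-aware training), but the storage/accuracy trade-off shown here is the core idea.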
Keynote Speakers
- Christos Bouganis, Imperial College London
- Koen Helwegen, Plumerai Research
- Adrià Gascón, Google
Important Dates
- Paper Submission Deadline: June 5th - 11:59PM AOE (extended from May 15th)
- Author Notification: June 29th
- WiP and Demo Deadline: May 30th - 11:59PM AOE
- Workshop Event: Friday, 25th Sept 2020