Helping Mobile Apps Bootstrap with Fewer Users
- Xuan Bao,
- Aman Kansal,
- Romit Roy Choudhury,
- Victor Bahl,
- David Chu,
- Alec Wolman
The 14th International Conference on Ubiquitous Computing (UbiComp 2012) | Published by ACM
A growing number of mobile apps are exploiting smartphone sensors to infer user behavior, activity, or context. For instance, an app may infer from the accelerometer that its user is inside a car; another app may infer that the user is at a dance party from sound, light, and motion information gathered from multiple users. Regardless of what is being inferred, these apps require training (i.e., the raw sensor data need to be initially labeled with the ground truth, such as “driving” or “dance party”). Obtaining labeled data for new mobile sensing apps is proving to be a “chicken and egg” problem. Users who install such apps are usually not willing to help with labeling – they demand immediate service. Without a reasonable amount of labeling, the apps are not able to perform inference, and are not worth installing. This paper aims to address this problem, helping mobile apps bootstrap with just a few users. Our core intuition is that even though each user may be different, users may exhibit similar patterns on certain sensing dimensions some of the time. For instance, different users may walk and drive at different speeds, but certain speeds will indicate driving for all users. These common patterns can be used as “seeds” to model a new user and label her data on all other dimensions. We prototype a technique that automatically extracts such commonalities, seeds models for new users with them, and learns a personalized inference model for each user. We evaluate the proposed technique through example apps and real-world data.
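To make the seeding intuition concrete, the sketch below illustrates one way such bootstrapping could work: user-independent “seed” rules (e.g., speeds no pedestrian reaches imply driving) pseudo-label a new user's unlabeled sensor samples, and a personalized classifier is then trained on the confidently labeled subset. The feature names, thresholds, and the choice of a scikit-learn random forest are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical seed rules: sensing patterns that indicate an activity for *all*
# users, regardless of personal variation (e.g., high speed implies "driving").
def seed_label(sample):
    """Return a pseudo-label for a sample, or None if the sample is ambiguous."""
    speed, accel_var, sound_level = sample
    if speed > 10.0:                      # no one walks this fast -> "driving"
        return "driving"
    if speed < 0.5 and accel_var < 0.01:  # near-zero motion for everyone
        return "stationary"
    return None                           # ambiguous region: leave unlabeled

def bootstrap_personal_model(unlabeled_samples):
    """Seed a personalized model from a new user's unlabeled sensor data."""
    X, y = [], []
    for s in unlabeled_samples:
        label = seed_label(s)
        if label is not None:             # keep only confidently seeded samples
            X.append(s)
            y.append(label)
    # Train on the seeded subset; the model can then label the user's remaining,
    # ambiguous samples using all sensing dimensions jointly.
    return RandomForestClassifier(n_estimators=50).fit(np.array(X), y)

# Usage (hypothetical file name): label the rest of the new user's data
# without asking for any manual annotation.
# samples = np.loadtxt("new_user_sensor_log.csv", delimiter=",")
# model = bootstrap_personal_model(samples)
# predictions = model.predict(samples)
```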