GlimpseData: Towards Continuous Vision-Based Personal Analytics
- Seungyeop Han,
- Rajalakshmi Nandakumar,
- Matthai Philipose,
- Arvind Krishnamurthy,
- David Wetherall
Workshop on Physical Analytics
Published by ACM - Association for Computing Machinery
Emerging wearable devices provide a new opportunity for mobile context-aware applications to use continuous audio/video sensing data as primitive inputs. Due to the high-datarate and compute-intensive nature of these inputs, it is important to design frameworks and applications to be efficient. We present the GlimpseData framework to collect and analyze data for studying continuous high-datarate mobile perception. As a case study, we show that low-powered sensors can be used as a filter to avoid sensing and processing video for face detection. Our relatively simple mechanism avoids processing roughly 60% of video frames while missing only 10% of frames containing faces.
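The gating idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual mechanism: the sensor choices (ambient light and motion magnitude), the threshold values, and the function names are all hypothetical, chosen only to show how cheap sensor readings could decide whether a frame is worth running an expensive face detector on.

```python
# Hypothetical sketch of sensor-gated frame filtering: low-power sensor
# readings decide whether to run the costly face detector on a frame.
# Sensors and thresholds below are illustrative assumptions, not values
# from the paper.

def should_process_frame(light_lux, motion_magnitude,
                         min_lux=10.0, max_motion=2.5):
    """Return True if the frame's sensor context makes face
    detection plausible."""
    if light_lux < min_lux:           # too dark: faces unlikely detectable
        return False
    if motion_magnitude > max_motion:  # heavy motion: frame likely blurred
        return False
    return True

def filter_frames(frames):
    """Yield only frames whose sensor context passes the gate,
    so the face detector runs on a fraction of the stream."""
    for frame in frames:
        if should_process_frame(frame["lux"], frame["motion"]):
            yield frame
```

In a pipeline like this, the face detector would consume only the frames yielded by `filter_frames`, which is how skipping a majority of frames at a small recall cost becomes possible.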