Building Effective Representations for Sketch Recognition
- Jun Guo,
- Changhu Wang,
- Hongyang Chao
The Twenty-Ninth AAAI Conference on Artificial Intelligence
With the growing popularity of touch-screen devices, understanding a user's hand-drawn sketch has become an increasingly important research topic in artificial intelligence and computer vision. However, unlike natural images, hand-drawn sketches are often highly abstract, with sparse visual information and large intra-class variance, which makes the recognition problem more challenging. In this work, we study how to build effective representations for sketch recognition. First, to capture salient patterns at different scales and spatial arrangements, a Gabor-based low-level representation is proposed. Then, based on this representation, a Hybrid Multilayer Sparse Coding (HMSC) model is proposed to learn mid-level representations that discover more complex patterns in a sketch. An improved dictionary learning algorithm is also leveraged in HMSC to reduce overfitting to common but trivial patterns. Extensive experiments show that the proposed representations are highly discriminative and lead to large improvements over the state of the art.
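
The abstract does not specify the low-level feature pipeline. As a rough illustration only, the sketch below shows one plausible Gabor-based descriptor: filter a grayscale sketch with a bank of Gabor kernels at several scales and orientations, then average-pool the response magnitudes over a coarse spatial grid. The function name, filter sizes, and grid layout are illustrative assumptions, not the authors' settings.

```python
# Hypothetical Gabor-based low-level descriptor for a sketch image.
# Assumptions: OpenCV available; parameters chosen for illustration only.
import cv2
import numpy as np

def gabor_features(image, scales=(7, 11, 15), n_orientations=8, grid=4):
    """image: 2-D grayscale sketch array. Returns a fixed-length descriptor."""
    image = np.float32(image)
    features = []
    for ksize in scales:                       # filter sizes act as scales
        for k in range(n_orientations):        # evenly spaced orientations
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel(
                (ksize, ksize), sigma=ksize / 4.0, theta=theta,
                lambd=ksize / 2.0, gamma=0.5, psi=0.0)
            response = np.abs(cv2.filter2D(image, cv2.CV_32F, kernel))
            # Pool over a grid x grid layout to retain coarse spatial
            # arrangement of the responses.
            h, w = response.shape
            for i in range(grid):
                for j in range(grid):
                    cell = response[i * h // grid:(i + 1) * h // grid,
                                    j * w // grid:(j + 1) * w // grid]
                    features.append(cell.mean())
    return np.asarray(features, dtype=np.float32)
```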
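For the mid-level representation, the HMSC model and its improved dictionary learning algorithm are the paper's contributions and are not reproduced here. Purely to illustrate the encode step that one sparse-coding layer would perform, the sketch below uses scikit-learn's standard dictionary learning on stand-in data; the data shapes, atom count, and max-pooling step are assumptions for demonstration.

```python
# Illustrative single sparse-coding layer (standard dictionary learning,
# not the paper's improved HMSC algorithm). Each low-level descriptor is
# approximated as a sparse combination of learned dictionary atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning

# X: one row per local descriptor (e.g. Gabor features from many patches);
# random data here stands in for a real training set.
rng = np.random.RandomState(0)
X = rng.randn(200, 128).astype(np.float32)

dico = DictionaryLearning(n_components=64, alpha=1.0,
                          transform_algorithm="lasso_lars",
                          max_iter=20, random_state=0)
codes = dico.fit_transform(X)      # sparse codes, shape (200, 64)

# Max-pooling the codes of all descriptors from one sketch yields a single
# fixed-length mid-level representation for the classifier.
sketch_code = codes.max(axis=0)
```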