Multi-modal Extreme Classification
- Anshul Mittal
- Kunal Dahiya
- Shreya Malani
- Janani Ramaswamy
- Seba Kuruvilla
- Jitendra Ajmera
- Keng-hao Chang
- Sumeet Agarwal
- Purushottam Kar
- Manik Varma
This paper develops the MUFIN technique for extreme classification (XC) tasks with millions of labels, where datapoints and labels are endowed with visual and textual descriptors. Applications of MUFIN to product-to-product recommendation and bid query prediction over several millions of products are presented. Contemporary multi-modal methods frequently rely on purely embedding-based approaches. On the other hand, XC methods utilize classifier architectures to offer accuracies superior to those of embedding-only methods, but mostly focus on text-based categorization tasks. MUFIN bridges this gap by reformulating multi-modal categorization as an XC problem with several millions of labels. This presents the twin challenges of developing multi-modal architectures that can offer embeddings expressive enough to allow accurate categorization over millions of labels, and of designing training and inference routines that scale logarithmically in the number of labels. MUFIN develops an architecture based on cross-modal attention and trains it in a modular fashion using pre-training and positive and negative mining. A novel product-to-product recommendation dataset, MM-AmazonTitles-300K, containing over 300K products was curated from publicly available Alpha XR listings, with each product endowed with a title and multiple images. On the MM-AmazonTitles-300K and Polyvore datasets, as well as a dataset with over 4 million labels curated from click logs of the Bing search engine, MUFIN offered at least 3% higher accuracy than leading text-based, image-based and multi-modal techniques.
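The abstract refers to an architecture that uses cross-modal attention to fuse a product's textual and visual descriptors into a single embedding. The PyTorch sketch below illustrates that general idea only; it is not the authors' implementation, and the module name, dimensions, and fusion details (text attending over image descriptors with a residual connection) are assumptions.

```python
# Minimal sketch of cross-modal attention fusing one text descriptor with
# multiple image descriptors per item (illustrative only, not MUFIN itself).
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    def __init__(self, d: int, n_heads: int = 4):
        super().__init__()
        # Text embedding acts as the query; image embeddings act as keys/values.
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=n_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, text_emb: torch.Tensor, image_embs: torch.Tensor) -> torch.Tensor:
        # text_emb:   (batch, 1, d) -- one text descriptor per datapoint/label
        # image_embs: (batch, m, d) -- m image descriptors per datapoint/label
        fused, _ = self.attn(query=text_emb, key=image_embs, value=image_embs)
        # Residual connection + layer norm yields a single fused embedding.
        return self.norm(text_emb + fused).squeeze(1)


# Usage: fuse a batch of 2 items, each with a text vector and 3 image vectors.
fuser = CrossModalAttention(d=64)
text = torch.randn(2, 1, 64)
images = torch.randn(2, 3, 64)
print(fuser(text, images).shape)  # torch.Size([2, 64])
```

In an XC setting, such fused embeddings would then feed per-label classifiers, with a label shortlist (e.g., via approximate nearest-neighbour search) keeping training and inference sub-linear in the number of labels, in line with the logarithmic scaling the abstract describes.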