Competition-based user expertise score estimation
- Jing Liu,
- Young-In Song,
- Chin-Yew Lin
Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval
Published by ACM - Association for Computing Machinery
In this paper, we consider the problem of estimating the relative expertise scores of users in community question answering (CQA) services. Previous approaches typically utilize only the explicit question-answering relationship between askers and answerers and apply link analysis to address this problem, ignoring the implicit pairwise comparisons between users that are implied by best answer selection. Given a question answering thread, the expertise score of the best answerer is likely higher than that of the asker and of all other non-best answerers. The goal of this paper is to exploit such pairwise comparisons, inferred from best answer selections, to estimate the relative expertise scores of users. Formally, we treat each pairwise comparison between two users as a two-player competition with one winner and one loser. Two competition models are proposed to estimate user expertise from these pairwise comparisons. Using the NTCIR-8 CQA task data with 3 million questions and introducing evaluation metrics based on answer quality prediction, the experimental results show that the pairwise-comparison-based competition models significantly outperform link analysis based approaches (PageRank and HITS) and pointwise approaches (number of best answers and best answer ratio) for estimating the expertise of active users. Furthermore, the pairwise-comparison-based competition models are shown to have better discriminative power than the other methods. We also find that answer quality (best answer selection) is an important factor in estimating user expertise.
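The abstract does not specify the two competition models in detail, but the core idea — deriving two-player competitions from a resolved thread, where the best answerer "beats" the asker and every non-best answerer — can be illustrated with a minimal sketch. The Elo-style rating update below is only an assumed stand-in for the paper's models; the function names, the starting score of 1500, and the `k` factor are illustrative choices, not from the paper.

```python
from collections import defaultdict

def competition_update(ratings, winner, loser, k=32.0):
    """Update expertise scores after one two-player competition.

    Elo-style update (an illustrative assumption, not the paper's model):
    the winner's expected outcome is computed from the current rating gap,
    and both ratings shift toward the observed result.
    """
    r_w, r_l = ratings[winner], ratings[loser]
    expected_win = 1.0 / (1.0 + 10.0 ** ((r_l - r_w) / 400.0))
    ratings[winner] = r_w + k * (1.0 - expected_win)
    ratings[loser] = r_l - k * (1.0 - expected_win)

def process_thread(ratings, asker, best_answerer, other_answerers):
    """Derive pairwise comparisons from one CQA thread: the best
    answerer wins against the asker and each non-best answerer."""
    for loser in [asker] + list(other_answerers):
        competition_update(ratings, best_answerer, loser)

# All users start from a common baseline score.
ratings = defaultdict(lambda: 1500.0)
process_thread(ratings, asker="u1", best_answerer="u2",
               other_answerers=["u3", "u4"])
print(ratings["u2"] > ratings["u1"])  # the best answerer's score rises
```

Iterating such updates over a large collection of resolved threads yields relative expertise scores, with users who repeatedly provide best answers against strong opponents rising fastest.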
© ACM. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version can be found at http://dl.acm.org.