Enhanced Learning to Rank using Cluster-loss Adjustment

2019 
Most Learning to Rank (LTR) algorithms, such as Ranking SVM, RankNet, LambdaRank, and LambdaMART, use only relevance label judgments as ground truth for training. But in common scenarios such as ranking information cards (Google Now and other personal assistants), mobile notifications, or Netflix recommendations, additional information can be captured from user behavior and how the user interacts with the retrieved items. Within the relevance labels, there may be distinct sets whose information (i.e., cluster information) can be derived implicitly from user interaction (positive, negative, neutral, etc.) or from explicit user feedback ('Do not show again', 'I like this suggestion', etc.). This additional information provides significant knowledge for training a ranking algorithm with a two-dimensional output variable. This paper proposes a novel method that uses the relevance label together with cluster information to better train ranking models. Results on a user-trial Notification Ranking dataset and on standard datasets such as LETOR 4.0, MSLR-WEB10K, and Yahoo LTR further support this claim.
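The abstract does not give the exact loss, but the idea of combining a relevance label with feedback-derived cluster information can be sketched as a pairwise RankNet-style loss in which pairs drawn from different user-feedback clusters are weighted more heavily. The function name, the `alpha` weight, and the weighting scheme below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def cluster_adjusted_pairwise_loss(scores, relevance, clusters, alpha=0.5):
    """Illustrative pairwise logistic (RankNet-style) loss with a
    hypothetical cluster adjustment: pairs whose items fall in
    different user-feedback clusters (e.g. explicit positive vs.
    negative feedback) get an extra weight alpha, so mis-ordering
    them is penalised more.

    scores    : model scores for the items of one query
    relevance : graded relevance labels
    clusters  : cluster id per item, derived from user feedback
    alpha     : extra weight for cross-cluster pairs (assumed)
    """
    total = 0.0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if relevance[i] > relevance[j]:  # item i should rank above item j
                weight = 1.0 + alpha * (clusters[i] != clusters[j])
                # numerically stable log(1 + exp(-(s_i - s_j)))
                total += weight * np.log1p(np.exp(-(scores[i] - scores[j])))
    return total
```

Under this sketch, a mis-ordered pair that crosses cluster boundaries contributes a strictly larger loss than the same pair within one cluster, which is one simple way to fold the two-dimensional (relevance, cluster) signal into training.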