A non-parametric solution to the multi-armed bandit problem with covariates

2021 
Abstract In recent years, the multi-armed bandit problem has regained popularity, especially in the case with covariates, owing to new applications in customized services such as personalized medicine. To deal with the bandit problem with covariates, a policy called binned subsample mean comparison, which decomposes the original problem into a collection of classic bandit problems, is introduced. The regret growth rate is studied in a setting where the reward of each arm depends on observable covariates. When rewards follow an exponential family, the regret of the proposed method is shown to achieve a nearly optimal growth rate. Simulations show that the proposed policy performs competitively with other policies.
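The abstract only sketches the policy, so the following is a minimal illustrative sketch, not the paper's exact algorithm: the covariate space [0, 1] is split into equal-width bins, and a simplified subsample mean comparison rule (in the spirit of Chan's non-parametric bandit policy) is run independently inside each bin. The function names, the fixed number of bins, and the rule of pulling one candidate per round are all assumptions; the paper presumably specifies how bin widths scale with the horizon to obtain the near-optimal regret rate.

```python
import numpy as np

def run_binned_ssmc(contexts, pull_arm, n_arms, n_bins=5, seed=0):
    """Illustrative binned subsample-mean-comparison bandit (simplified).

    contexts : 1-D array of covariates in [0, 1], one per round.
    pull_arm : callable (arm, context) -> observed reward.
    The covariate space is cut into n_bins equal-width bins, and an
    independent subsample-mean-comparison policy runs inside each bin.
    """
    rng = np.random.default_rng(seed)
    # Per-bin reward histories: rewards[b][a] lists rewards of arm a in bin b.
    rewards = [[[] for _ in range(n_arms)] for _ in range(n_bins)]
    choices = np.empty(len(contexts), dtype=int)

    for t, x in enumerate(contexts):
        b = min(int(x * n_bins), n_bins - 1)   # bin index of covariate x
        hist = rewards[b]
        counts = [len(h) for h in hist]

        if min(counts) == 0:                   # initialize: each arm once per bin
            arm = int(np.argmin(counts))
        else:
            leader = int(np.argmax(counts))    # most-sampled arm in this bin
            candidates = []
            for k in range(n_arms):
                if k == leader:
                    continue
                n_k = counts[k]
                mean_k = np.mean(hist[k])
                lead = np.asarray(hist[leader])
                # Challenge rule: pull arm k if some contiguous subsample of
                # the leader's history, of the same size n_k, has a mean no
                # larger than arm k's sample mean.
                sub_means = np.convolve(lead, np.ones(n_k) / n_k, mode="valid")
                if np.any(sub_means <= mean_k):
                    candidates.append(k)
            # Simplification: pull one challenger at random (the original
            # policy proceeds in rounds and may pull every challenger).
            arm = int(rng.choice(candidates)) if candidates else leader

        reward = pull_arm(arm, x)
        hist[arm].append(reward)
        choices[t] = arm
    return choices


# Toy usage (hypothetical reward model): arm 0 is better for small x,
# arm 1 for large x, so the best arm changes with the covariate.
rng = np.random.default_rng(1)
ctx = rng.random(5000)
pull = lambda a, x: float(rng.random() < (0.7 - 0.4 * x if a == 0 else 0.3 + 0.4 * x))
choices = run_binned_ssmc(ctx, pull, n_arms=2)
```

Because comparisons use only subsample means rather than a parametric index, no knowledge of the reward distribution is required within each bin, which is what makes the overall policy non-parametric.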