Approximation algorithms for orienteering and discounted-reward TSP

2003 
In this paper, we give the first constant-factor approximation algorithm for the rooted orienteering problem, as well as for a new problem that we call the Discounted-Reward TSP, motivated by robot navigation. In both problems, we are given a graph with lengths on edges and prizes (rewards) on nodes, and a start node s. In the orienteering problem, the goal is to find a path that maximizes the reward collected, subject to a hard limit on the total length of the path. In the Discounted-Reward TSP, instead of a length limit we are given a discount factor γ, and the goal is to maximize the total discounted reward collected, where the reward for a node reached at time t is discounted by γ^t. This is similar to the objective considered in Markov decision processes (MDPs), except that we only receive a reward the first time a node is visited. We also consider tree and multiple-path variants of these problems and provide approximations for those as well. Although the unrooted orienteering problem, where there is no fixed start node s, has been known to be approximable using algorithms for related problems such as k-TSP (in which the amount of reward to be collected is fixed and the total length is approximately minimized), ours is the first approximation for the rooted version, solving an open problem posed by B. Awerbuch et al. (1999) and E.M. Arkin (1998).
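To make the two objectives concrete, the following is a minimal sketch (not from the paper; the graph, rewards, edge lengths, and helper functions are illustrative assumptions) that evaluates a given path under the orienteering objective with a length budget and under the discounted-reward objective with a factor γ.

```python
# Illustrative sketch of the two objectives; the instance below is assumed,
# not taken from the paper.

def orienteering_reward(path, reward, length, budget):
    """Total reward of distinct nodes reached before the length budget runs out."""
    collected, elapsed, seen = reward[path[0]], 0.0, {path[0]}
    for u, v in zip(path, path[1:]):
        elapsed += length[(u, v)]
        if elapsed > budget:
            break
        if v not in seen:
            seen.add(v)
            collected += reward[v]
    return collected

def discounted_reward(path, reward, length, gamma):
    """Each node's reward, counted only on its first visit at time t, is scaled by gamma**t."""
    collected, t, seen = reward[path[0]], 0.0, {path[0]}  # start node reached at t = 0
    for u, v in zip(path, path[1:]):
        t += length[(u, v)]
        if v not in seen:
            seen.add(v)
            collected += reward[v] * (gamma ** t)
    return collected

# Toy instance: s -> a -> b with unit rewards on a and b.
length = {("s", "a"): 1.0, ("a", "b"): 2.0}
reward = {"s": 0.0, "a": 1.0, "b": 1.0}
path = ["s", "a", "b"]
print(orienteering_reward(path, reward, length, budget=2.0))  # 1.0 (reaching b would exceed the budget)
print(discounted_reward(path, reward, length, gamma=0.5))     # 0.5^1 + 0.5^3 = 0.625
```

The sketch only evaluates a fixed path; the algorithmic contribution of the paper is finding a path that approximately maximizes these objectives from the fixed root s.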