An Approximate Dynamic Programming Approach to Dynamic Stochastic Matching

Dynamic stochastic matching problems arise in a variety of recent applications, ranging from ridesharing and online video games to kidney exchange. Such problems are naturally formulated as Markov decision processes (MDPs) that are, however, intractable in general. To improve tractability, we investigate the linear programming-based approach to approximate dynamic programming. This approach can provide both feasible control policies and bounds on the MDPs' optimal policy value, which can be used to establish optimality gaps. However, the approximate linear programs (ALPs) resulting from this approach can often be difficult to solve. To address this computational challenge, we derive novel ALP reformulations for a broad class of dynamic stochastic matching problems that incorporate, among other features, possible match failures and certain restrictions on feasible matchings. We show that these ALP reformulations can be solved efficiently across this class of dynamic matching problems. In addition, our numerical results indicate that our ALP reformulations can produce tight bounds that allow us to establish near-optimal policy performance for a broad set of problem instances. Thus, ALP reformulations can present an attractive alternative for applications that involve dynamic stochastic matching.
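For orientation, the generic approximate linear program underlying this approach can be sketched as follows; this is the textbook-style formulation with an assumed linear value-function approximation V(s) ≈ Φ(s)θ, state-relevance weights c, rewards r, transition kernel P, and discount factor γ, not the paper's specific reformulation:

\min_{\theta} \;\; \sum_{s} c(s)\, \Phi(s)\theta
\quad \text{s.t.} \quad
\Phi(s)\theta \;\ge\; r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, \Phi(s')\theta
\qquad \forall \, s, a.

The optimal objective of this program upper-bounds the optimal policy value, while a greedy policy with respect to the fitted approximation Φ(s)θ* yields a feasible control policy; comparing the two gives the optimality gap referred to in the abstract.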

You, Fan; Vossen, Thomas. An Approximate Dynamic Programming Approach to Dynamic Stochastic Matching. INFORMS Journal on Computing, Vol. 36, Issue 4 (Jul/Aug 2024), pp. 1006-1022.