Offline Constrained Reinforcement Learning under Partial Data Coverage

Seokmin Ko, Ambuj Tewari, Kihyuk Hong

arXiv:2505.17506v2 Announce Type: replace Abstract: We study offline constrained reinforcement learning with general function approximation in discounted constrained Markov decision processes. Prior methods either require full data coverage for evaluating intermediate policies, lack oracle efficiency, or require knowledge of the data-generating distribution for policy extraction. We propose PDOCRL, an oracle-efficient primal-dual algorithm based on a decomposed linear-programming formulation that makes the policy an explicit optimization variable. This avoids policy extraction that requires knowledge of the data-generating distribution, and uses only standard policy-optimization, online linear-optimization, and linear-minimization oracles. We show that saddle-point formulations using general function approximation can have spurious saddle points even when an optimal solution is realizable, and identify a stronger realizability condition under which every restricted saddle point is optimal. Under this condition and partial coverage of an optimal policy, PDOCRL returns a near-optimal, near-feasible policy with a \(\widetilde{\mathcal O}(\epsilon^{-2})\) sample guarantee, without access to the data-generating distribution. Empirically, PDOCRL is competitive with strong baselines on standard offline constrained RL benchmarks.
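For context, the textbook linear-programming view of a discounted CMDP optimizes over occupancy measures \(d(s,a)\). The sketch below uses assumed notation (reward \(r\), cost \(c\), cost budget \(\tau\), discount \(\gamma\), initial distribution \(\rho\), transition kernel \(P\)); it is not the paper's decomposed formulation, which additionally makes the policy an explicit optimization variable:

\[
\max_{d \ge 0} \; \sum_{s,a} d(s,a)\, r(s,a)
\quad \text{s.t.} \quad
\sum_{a} d(s,a) = (1-\gamma)\,\rho(s) + \gamma \sum_{s',a'} P(s \mid s',a')\, d(s',a') \;\; \forall s,
\qquad
\sum_{s,a} d(s,a)\, c(s,a) \le \tau.
\]

Dualizing the cost constraint with a multiplier \(\lambda \ge 0\) yields the saddle-point problem \(\min_{\lambda \ge 0} \max_{d} \sum_{s,a} d(s,a)\,\big(r(s,a) - \lambda\, c(s,a)\big) + \lambda \tau\), which primal-dual methods solve by alternating updates. The abstract's spurious-saddle-point caveat concerns precisely this kind of formulation once \(d\) is restricted to a general function class.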