A Model-Based Stochastic Augmented Lagrangian Method for Online Stochastic Optimization

In this paper, we focus on online stochastic optimization problems in which the random parameters follow time-varying distributions. At each round t, a decision is obtained by solving the current optimization problem. Samples are then drawn from the distributions, which are updated after the decision is obtained. The objective and constraint functions are updated in this process, and the updated problem is used to obtain the next decision. To solve the online stochastic optimization problem, we propose a model-based stochastic augmented Lagrangian method, referred to as MSALM. At each round, we construct model functions for the sampled objective and constraint functions based on their properties, which reduces the computational complexity. The step size is designed in a dynamic form and decreases as t increases to accelerate convergence. Owing to the setting of the online stochastic problem, we use the stochastic dynamic regret and constraint violation to measure the performance of our algorithm. Under suitable assumptions, we prove that our algorithm's stochastic dynamic regret and constraint violation are sublinear in the total number of rounds T. We design simulation experiments to verify the efficiency of our online algorithm. Its performance is evaluated on a range of information and system engineering problems, including adaptive filtering, online logistic regression, time-varying smart grid energy dispatch, online network resource allocation, and path planning. In addition, in the context of the path planning problem, we integrate our algorithm with supervised learning to demonstrate its enhanced capabilities. The experimental results validate the performance of our new algorithm in practical applications.
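To make the online loop described above concrete, the following is a minimal sketch (not the paper's MSALM itself) of a generic online stochastic augmented Lagrangian iteration: at each round a stochastic objective and constraint are sampled, the decision is updated by one augmented-Lagrangian gradient step with a diminishing step size, and the multiplier is then updated. The specific objective f_t(x) = ½‖x − a_t‖², the linear constraint ⟨c, x⟩ ≤ b, and all parameter values are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def online_aug_lagrangian_sketch(T=2000, d=5, rho=1.0, eta0=0.5, seed=0):
    """Illustrative online stochastic augmented-Lagrangian loop (assumed setup,
    not the paper's MSALM). Minimizes sampled f_t(x) = 0.5||x - a_t||^2
    subject to g(x) = <c, x> - b <= 0, with step size eta_t = eta0 / sqrt(t)."""
    rng = np.random.default_rng(seed)
    c = np.ones(d)   # fixed constraint direction (assumption)
    b = 1.0          # constraint level: sum(x) <= 1 (assumption)
    x = np.zeros(d)  # initial decision
    lam = 0.0        # Lagrange multiplier for g(x) <= 0
    for t in range(1, T + 1):
        a_t = 0.5 + 0.1 * rng.standard_normal(d)  # time-varying sample
        g = c @ x - b                             # current constraint value
        # Gradient of the augmented Lagrangian L(x, lam) at the current point
        grad = (x - a_t) + (lam + rho * max(g, 0.0)) * c
        eta = eta0 / np.sqrt(t)                   # diminishing step size
        x = x - eta * grad                        # primal descent step
        lam = max(0.0, lam + rho * (c @ x - b))   # dual ascent step
    return x, lam

x, lam = online_aug_lagrangian_sketch()
print("final constraint value:", float(np.ones(5) @ x - 1.0))
```

In this toy setting the constraint sum(x) ≤ 1 is active at the optimum, so the multiplier settles at a positive value and the iterates hover near the constraint boundary, illustrating how the dual update controls constraint violation while the diminishing step size damps the stochastic noise.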
