Research
Preprints
Q. Shi, J. Peng, K. Yuan, X. Wang and Q. Ling.
Optimal complexity in Byzantine-robust distributed stochastic optimization with data heterogeneity, 2025.
L. Jin, X. Wang and X. Chen.
Nonconvex nonsmooth multicomposite optimization and its applications to recurrent neural networks, 2025.
Y. Cui, S. Guo, X. Wang and X. Xiao.
A brief review of recent advances on chance constrained programs, 2024.
K. Li, L. Bai, X. Wang and H. Wang.
Anderson acceleration for nonsmooth optimization algorithms: local convergence via active manifold identification, 2024.
J. Ju, X. Wang and D. Xu.
Stochastic approximation algorithms for DR-submodular maximization with convex functional constraints, 2024.
D. He, G. Yuan, X. Wang and P. Xu.
Block coordinate descent methods for optimization under J-orthogonality constraints with applications, 2024.
Y. Cui, X. Wang and X. Xiao.
A two-phase stochastic momentum-based algorithm for nonconvex expectation-constrained optimization, 2024.
X. Yang, H. Wang, Y. Zhu and X. Wang.
Minimization over the nonconvex sparsity constraint using a hybrid first-order method, 2024.
J. Guo, X. Wang and X. Xiao.
Preconditioned primal-dual gradient methods for nonconvex composite and finite-sum optimization, 2023.
J. Guo, X. Wang and X. Xiao.
Dynamical convergence analysis of linearized proximal stochastic ADMM for nonconvex optimization, 2023.
Publications
H. Zheng, R. Wang, X. Wang and Q. Ling.
Can fairness and robustness be simultaneously achieved under Byzantine attacks?
ICASSP, 2025.
L. Jin and X. Wang.
Stochastic nested primal-dual method for nonconvex constrained composition optimization.
Mathematics of Computation, 94, 305-358, 2025.
Q. Shi, X. Wang and H. Wang.
A momentum-based linearized augmented Lagrangian method for nonconvex constrained stochastic optimization.
Mathematics of Operations Research, 2025, https://doi.org/10.1287/moor.2022.0193.
X. Wang.
Complexity analysis of inexact cubic-regularized primal-dual algorithms for finding second-order stationary points.
Mathematics of Computation, 2024, https://doi.org/10.1090/mcom/4029.
Y. Lian, X. Wang, D. Xu and Z. Zhao.
Zeroth-order stochastic approximation algorithms for DR-submodular optimization.
Journal of Machine Learning Research, 25(391), 1-55, 2024.
J. Ju, X. Wang and D. Xu.
Online nonmonotone DR-submodular maximization in the bandit setting.
Journal of Global Optimization, 90, 619-649, 2024.
X. Wang and X. Chen.
Complexity of finite-sum optimization with nonsmooth composite functions and non-Lipschitz regularization.
SIAM Journal on Optimization, 34(3), 2472-2502, 2024.
Y. Lian, D. Du, X. Wang, D. Xu and Y. Zhou.
Stochastic variance reduction for DR-submodular maximization.
Algorithmica, 86, 1335-1364, 2024.
J.N. Wang, X. Wang and L.W. Zhang.
A stochastic Newton method for nonlinear equations.
Journal of Computational Mathematics, 41, 1192-1221, 2023.
X. Wang.
Stochastic approximation methods for nonconvex constrained optimization (in Chinese).
Operations Research Transactions, 27(4), 2023.
J.N. Wang, X. Wang and L.W. Zhang.
Stochastic regularized Newton methods for nonlinear equations.
Journal of Scientific Computing, 94(51), 2023.
W.Y. Cheng, X. Wang and X. Chen.
An interior stochastic gradient method for a class of non-Lipschitz optimization problems.
Journal of Scientific Computing, 92(42), 2022.
L. Jin and X. Wang.
A stochastic primal-dual method for a class of nonconvex constrained optimization.
Computational Optimization and Applications, 83, 143-180, 2022.
F. He, X. Wang and X. Chen.
A penalty relaxation method for image processing using Euler's elastica model.
SIAM Journal on Imaging Sciences, 14(1), 389-417, 2021.
X. Wang and H. Zhang.
Inexact proximal stochastic second-order methods for nonconvex composite optimization.
Optimization Methods and Software, 35(4), 808-835, 2020.
Y. Liu, X. Wang and T.D. Guo.
A linearly convergent stochastic recursive gradient method for convex optimization.
Optimization Letters, 14, 2265-2283, 2020.
X.Y. Wang, X. Wang and Y. Yuan.
Stochastic proximal quasi-Newton methods for nonconvex composite optimization.
Optimization Methods and Software, 34, 922-948, 2019.
X. Wang, S. Ma, D. Goldfarb and W. Liu.
Stochastic quasi-Newton methods for nonconvex stochastic optimization.
SIAM Journal on Optimization, 27(2), 927-956, 2017.
X. Wang, S. Ma and Y. Yuan.
Penalty methods with stochastic approximation for stochastic nonlinear programming.
Mathematics of Computation, 86, 1793-1820, 2017.
X. Wang, S. Wang and H. Zhang.
Inexact proximal stochastic gradient method for convex composite optimization.
Computational Optimization and Applications, 68: 579-618, 2017.
X. Wang and H. Zhang.
An augmented Lagrangian affine scaling method for nonlinear programming.
Optimization Methods and Software, 30(5), 934-964, 2015.
X. Wang and Y. Yuan.
An augmented Lagrangian trust region method for equality constrained optimization.
Optimization Methods and Software, 30(3), 559-582, 2015.
X. Liu, Z. Wen, X. Wang, M. Ulbrich and Y. Yuan.
On the analysis of the discretized Kohn-Sham density functional theory.
SIAM Journal on Numerical Analysis, 53(4), 1758-1785, 2015.
X. Liu, X. Wang, Z. Wen and Y. Yuan.
On the convergence of the self-consistent field iteration in Kohn-Sham density functional theory.
SIAM Journal on Matrix Analysis and Applications, 35(2), 546-558, 2014.
X. Wang and Y. Yuan.
A trust region method based on a new affine scaling technique for simple bounded optimization.
Optimization Methods and Software, 28(4), 871-888, 2013.
X. Wang.
A trust region affine scaling method for bound constrained optimization.
Acta Mathematica Sinica, English Series, 29(1), 159-182, 2013.
X. Huang, Z. Lei, M. Fan, X. Wang and S.Z. Li.
A regularized discriminative spectral regression method for heterogeneous face matching.
IEEE Transactions on Image Processing, 22(1), 353-362, 2013.
X. Wang.
An active set trust region method for general bound constrained optimization (in Chinese).
Scientia Sinica Mathematica, 41(4), 377-391, 2011.