TY - ADVS
T1 - Z-ProxSG
T2 - Metatuning Software
A2 - Pougkakiotis, Spyridon
N1 - Copyright (c) 2022 spougkakiotis
PY - 2022
Y1 - 2022
AB - A zeroth-order proximal stochastic gradient method for non-smooth and non-convex optimization. This software is used to derive the numerical results in the following paper: "A Zeroth-order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization", Spyridon Pougkakiotis, Dionysios S. Kalogerias, https://arxiv.org/abs/2205.01633. It contains two zeroth-order proximal stochastic gradient methods, a proximal stochastic sub-gradient method, a stochastic proximal point method, as well as a proximal alternating direction method of multipliers (ADMM) suitable for the solution of ℓ1-regularized convex quadratic programming. Two experiments are set up: (1) the solution of randomly generated phase retrieval instances (weakly convex problems); (2) the hyper-parameter tuning of the proximal ADMM for the solution of ℓ1-regularized PDE-constrained optimization problems. Each file is heavily commented for the convenience of the user. Use help "function of interest" to learn more about each of the included files. The proximal ADMM and the PDE problems solved in the second experiment are taken from the following work: "A semismooth Newton-proximal method of multipliers for ℓ1-regularized convex quadratic programming", Spyridon Pougkakiotis, Jacek Gondzio, https://doi.org/10.48550/arXiv.2201.10211
UR - https://github.com/spougkakiotis/Z-ProxSG
M3 - Software
PB - GitHub
ER -