Schedule for: 17w5030 - Splitting Algorithms, Modern Operator Theory, and Applications
Beginning on Sunday, September 17 and ending Friday, September 22, 2017
All times in Oaxaca, Mexico time, CDT (UTC-5).
Sunday, September 17  

14:00 - 23:59  Check-in begins (Front desk at your assigned hotel)
19:30 - 22:00  Dinner (Restaurant Hotel Hacienda Los Laureles)
20:30 - 21:30  Informal gathering (Hotel Hacienda Los Laureles)
Monday, September 18  

07:30 - 08:45  Breakfast (Restaurant at your assigned hotel)
08:45 - 09:00  Introduction and Welcome (Conference Room San Felipe)
09:00 - 09:35
Hedy Attouch: Rate of convergence of the Nesterov accelerated gradient method in the subcritical case $\alpha \leq 3$ ↓ (Based on a joint work with Z. Chbani and H. Riahi) (Conference Room San Felipe) In a Hilbert space setting $\mathcal{H}$, given $\Phi: \mathcal H \to \mathbb R$ a convex continuously differentiable function, and $\alpha$ a positive parameter, we first study the asymptotic behavior of the inertial system with Asymptotic Vanishing Damping $$ \mbox{(AVD)}_{\alpha} \quad \quad \ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \nabla \Phi (x(t)) = 0, $$ and then the associated inertial algorithms. Depending on the value of $\alpha$ with respect to 3, and based on new Lyapunov functions, we give a complete picture of the convergence properties as $t \to + \infty$ of the trajectories generated by $\mbox{(AVD)}_{\alpha}$. As shown by Su-Boyd-Candès, the case $\alpha = 3$ corresponds to a continuous version of the accelerated gradient method of Nesterov, with the convergence rate $\Phi(x(t)) - \min \Phi = \mathcal O(t^{-2})$. Our main result concerns the subcritical case $\alpha \leq 3$, where we show that $\Phi(x(t)) - \min \Phi = \mathcal O(t^{-\frac{2}{3}\alpha})$. When $\alpha > 3$, we recover the recent results by May and Attouch-Chbani-Peypouquet-Redont concerning the weak convergence of the trajectories, and the convergence of the values with the order $o(t^{-2})$. This overall picture shows a continuous variation of the rate of convergence of the values $\Phi(x(t)) - \min_{\mathcal H} \Phi = \mathcal O(t^{-p(\alpha)})$ with respect to $\alpha > 0$: the coefficient $p(\alpha)$ increases linearly up to 2 when $\alpha$ goes from $0$ to $3$, then displays a plateau. We also consider the convergence of trajectories in the critical case $\alpha = 3$, with a positive response in some particular cases. Then we consider structured convex minimization problems of the form $\min \left\lbrace \Theta := \Phi + \Psi \right\rbrace$, with $\Phi$ smooth and $\Psi$ nonsmooth.
As a major property, the Lyapunov analysis of the continuous dynamics serves as a guideline for the study of the associated forward-backward inertial algorithms. We obtain a similar convergence rate for the sequence of iterates $(x_k)$: for $\alpha < 3$ we have $\Theta(x_k) - \min \Theta = \mathcal O(k^{-p})$ for all $p < \frac{2\alpha}{3}$; for $\alpha = 3$, $\Theta(x_k) - \min \Theta = \mathcal O(k^{-2})$ (FISTA, Beck-Teboulle, 2009); and for $\alpha > 3$, $\Theta(x_k) - \min \Theta = o(k^{-2})$ (Attouch-Peypouquet, 2016). We conclude this study by showing that the results are robust with respect to external perturbations.
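A minimal sketch of an inertial gradient iteration of this kind, with vanishing damping governed by a parameter alpha and specialized to $\Psi = 0$; the momentum coefficient $(k-1)/(k+\alpha-1)$, the stepsize, the quadratic test function and the iteration count below are illustrative assumptions, not details from the talk:

```python
import numpy as np

def inertial_gradient(grad, x0, alpha=3.0, step=0.1, iters=2000):
    """Inertial (accelerated) gradient iteration with damping alpha/k,
    one common discrete analogue of the (AVD)_alpha dynamics:
        y_k     = x_k + (k-1)/(k+alpha-1) * (x_k - x_{k-1})
        x_{k+1} = y_k - step * grad(y_k)
    For alpha = 3 this is the Nesterov/FISTA momentum schedule."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + alpha - 1) * (x - x_prev)
        x_prev, x = x, y - step * grad(y)
    return x

# illustrative problem: minimize Phi(x) = 0.5*||x||^2 (minimizer: the origin)
x_star = inertial_gradient(lambda x: x, np.array([5.0, -3.0]))
```

Varying `alpha` across 3 reproduces, in this toy setting, the regimes discussed in the abstract.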

09:35 - 10:10
James Burke: Iteratively reweighted least squares and ADMM methods for solving affine inclusions ↓ We describe two matrix-free methods for solving large-scale affine inclusion problems on the product (or intersection) of convex sets. The first approach is a novel iterative reweighting algorithm (IRWA) that iteratively minimizes quadratic models of relaxed subproblems while automatically updating a relaxation vector. The second approach is based on alternating direction augmented Lagrangian (ADAL) technology. The main computational costs of each algorithm are the repeated minimizations of convex quadratic functions, which can be performed matrix-free. Both algorithms are globally convergent under loose assumptions, and each requires at most $O(1/\varepsilon^2)$ iterations to reach $\varepsilon$-optimality of the objective function. Numerical experiments show that both algorithms efficiently find inexact solutions. However, in certain cases, these experiments indicate that IRWA can be significantly more efficient than ADAL. (Conference Room San Felipe)
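As a generic illustration of the alternating direction augmented Lagrangian template (not the authors' specific method for affine inclusions), a standard ADMM iteration for an $\ell_1$-regularized least-squares model, where each subproblem is a convex quadratic or a closed-form shrinkage, might look like:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for  min 0.5*||A x - b||^2 + lam*||x||_1,  via the split x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                                  # scaled dual variable
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))     # cached quadratic solve
    for _ in range(iters):
        x = Q @ (A.T @ b + rho * (z - u))            # quadratic subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # shrinkage
        u = u + x - z                                # multiplier update
    return z

# illustrative run: with A = I the solution is soft-thresholding of b
A = np.eye(3)
b = np.array([1.0, 0.05, -2.0])
z_hat = admm_lasso(A, b, lam=0.1)
```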
10:10 - 10:45
Aris Daniilidis: On the Glaeser-Whitney extension problem ↓ We give a simple proof and explicit formulae for the $C^{1,1}$ convex extension problem studied by D. Azagra and C. Mudarra. As an application, we obtain an easy constructive proof for the Glaeser-Whitney problem of $C^{1,1}$ extensions on a Hilbert space. This is a joint work with M. Haddou, O. Ley and E. Le Gruyer. (Conference Room San Felipe)
10:45 - 11:15  Coffee Break (Conference Room San Felipe)
11:15 - 11:50
Yura Malitsky: Golden Ratio Algorithms for Variational Inequalities ↓ We present several novel methods for solving general (pseudo-) monotone variational inequalities. The first method uses a fixed stepsize and is similar to the projected reflected gradient method: it also requires only one evaluation of the operator and one prox-operator per iteration. However, its extension, the dynamic version, has a notable distinction: in every iteration it defines a stepsize based on local information about the operator, without running any linesearch procedure. Thus, the iteration cost of this method is almost the same as that of the first one with a fixed stepsize, but it converges without the Lipschitz assumption on the operator. We further discuss possible generalizations of the methods, in particular for solving large-scale nonlinear saddle point problems. Some numerical experiments are reported. (Conference Room San Felipe)
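A minimal sketch of a fixed-stepsize golden ratio iteration, based on the commonly cited form $\bar x_k = ((\varphi - 1)x_k + \bar x_{k-1})/\varphi$, $x_{k+1} = P_C(\bar x_k - \lambda F(x_k))$ with $\varphi$ the golden ratio; the stepsize, operator and constraint set below are illustrative assumptions, not the talk's examples:

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2   # golden ratio

def graal(F, proj, x0, lam, iters=500):
    """Fixed-stepsize golden ratio iteration for the VI:
    find x* in C with <F(x*), y - x*> >= 0 for all y in C.
    `proj` plays the role of the prox-operator (here: projection onto C)."""
    x = x0.copy()
    xbar = x0.copy()
    for _ in range(iters):
        xbar = ((PHI - 1) * x + xbar) / PHI     # averaging step
        x = proj(xbar - lam * F(x))             # one F-evaluation, one prox
    return x

# illustrative problem: F(x) = x (monotone, L = 1), C = box [-1, 1]^2,
# stepsize lam = 0.5 <= PHI/(2L)
proj_box = lambda v: np.clip(v, -1.0, 1.0)
sol = graal(lambda v: v, proj_box, np.array([1.0, 1.0]), lam=0.5)
```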
11:50 - 12:25
Robert Csetnek: ADMM for monotone operators: convergence analysis and rates ↓ We propose a unifying scheme for several algorithms from the literature dedicated to solving monotone inclusion problems involving compositions with linear continuous operators in infinite-dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions, and also the classical ADMM numerical scheme for convex optimization problems along with some of its variants, can be embedded in this unifying scheme. While in the first part of the talk convergence results for the iterates are reported, the second part is devoted to the derivation of convergence rates obtained by combining variable metric techniques with strategies based on a suitable choice of dynamical step sizes. The talk is based on a joint work with Radu Bot. (Conference Room San Felipe)
12:25 - 13:00
Scott Lindstrom: Douglas-Rachford Method for Non-Convex Feasibility Problems ↓ The Douglas-Rachford method has been employed successfully to solve a variety of nonconvex feasibility problems. In particular, it shows surprising stability when applied to finding the intersections of hypersurfaces. We prove local convergence in a generalization of a case prototypical of the phase retrieval problem. In so doing, we also discover phenomena which may inhibit convergence. Finally, we illustrate an application to solving boundary value ordinary differential equations. This talk includes discoveries from three closely related works: 1. With Brailey Sims, Matthew Skerritt. "Computing Intersections of Implicitly Specified Plane Curves." To appear in Journal of Nonlinear and Convex Analysis. 2. With Jonathan M. Borwein, Brailey Sims, Anna Schneider, Matthew Skerritt. "Dynamics of the Douglas-Rachford Method for Ellipses and p-Spheres." Submitted to Set-Valued and Variational Analysis. 3. With Bishnu Lamichhane and Brailey Sims. "Application of Projection Algorithms to Differential Equations: Boundary Value Problems," in preparation with plans to submit to ANZIAM Journal. (Conference Room San Felipe)
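For intuition, the Douglas-Rachford iteration $x_{k+1} = x_k + P_B(2P_A(x_k) - x_k) - P_A(x_k)$ can be tried on the prototypical nonconvex pairing of a circle and a line; the particular sets and starting point below are illustrative choices, not examples from the talk:

```python
import numpy as np

def proj_circle(x):
    """Projection onto the unit circle (nonconvex set A)."""
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.array([1.0, 0.0])

def proj_line(x):
    """Projection onto the horizontal line B = {x : x_2 = 0.5}."""
    return np.array([x[0], 0.5])

def douglas_rachford(x, iters=1000):
    """DR iteration: x <- x + P_B(2 P_A(x) - x) - P_A(x)."""
    for _ in range(iters):
        pa = proj_circle(x)
        x = x + proj_line(2 * pa - x) - pa
    return proj_circle(x)   # the "shadow" point on A

# start near the intersection point (sqrt(3)/2, 0.5); local convergence
p = douglas_rachford(np.array([0.8, 0.6]))
```

Started far from the intersection, or with a line missing the circle, the same iteration exhibits the more exotic dynamics the talk refers to.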
13:20 - 13:30  Group Photo (Hotel Hacienda Los Laureles)
13:30 - 15:00  Lunch (Restaurant Hotel Hacienda Los Laureles)
15:00 - 15:35
Asen Dontchev: The Inverse Function Theorems of Lawrence Graves ↓ The classical inverse/implicit function theorems revolve around solving an equation in terms of a parameter and tell us when the solution mapping associated with this equation is a (differentiable) function. Already in 1927, Hildebrandt and Graves observed that one can put aside differentiability, obtaining that the solution mapping is just Lipschitz continuous. The idea has evolved in subsequent extensions, the most known of which are various reincarnations of the Lyusternik-Graves theorem. In the last several decades it has been widely accepted that, in order to derive estimates for the solution mapping and put them to use for proving convergence of algorithms, it is sufficient to differentiate what you can and leave the rest as is, hoping that the resulting problem is easier to handle. More sophisticated results have been obtained by employing various forms of metric regularity of mappings acting in metric spaces, aiming at applications to numerical analysis. (Conference Room San Felipe)
15:35 - 16:10
Elena Resmerita: Reconstruction of positive solutions for ill-posed problems ↓ This talk reviews several classes of methods for reconstructing positive solutions of inverse problems, mostly by means of the Shannon entropy. While the literature on the finite-dimensional setting is very rich, we focus on methods for problems in infinite-dimensional Banach spaces and point out challenges raised in this context. (Conference Room San Felipe)
16:10 - 16:30  Coffee Break (Conference Room San Felipe)
16:30 - 17:05
Anthony Man-Cho So: A Unified Approach to Error Bounds for Structured Convex Optimization Problems ↓ In this talk, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates fairly general constrained minimization problems, as well as various regularized loss minimization formulations in machine learning, signal processing, and statistics. Our framework not only allows us to recover a number of existing error bound results in a unified and transparent manner but also leads to new error bounds for problems whose objective functions do not necessarily have a polyhedral epigraph. As an illustration, we show that a class of nuclear-norm regularized loss minimization problems admits an error bound under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that the said error bound could fail to hold without the regularity condition. If time permits, we discuss the algorithmic implications of our results, particularly on the convergence rate analysis of a family of successive quadratic approximation methods for solving the aforementioned class of problems. (Conference Room San Felipe)
17:05 - 17:40
Panos Patrinos: Accelerated Douglas-Rachford splitting and ADMM for structured nonconvex optimization (joint work with Andreas Themelis, Lorenzo Stella) ↓ Although originally designed and analyzed for solving convex problems, the Alternating Direction Method of Multipliers (ADMM) and its close relatives, Douglas-Rachford Splitting (DRS) and Peaceman-Rachford Splitting (PRS), have been observed to perform remarkably well when applied to certain classes of structured nonconvex optimization problems. However, partial global convergence results in the nonconvex setting have only recently emerged. The goal of this work is twofold: first, we show how the Douglas-Rachford Envelope (DRE), introduced by the authors in 2014, can be employed to devise global convergence guarantees for ADMM, DRS and PRS applied to nonconvex problems under less restrictive conditions, larger prox stepsizes and over-relaxation parameters than what was previously known. Furthermore, exploiting properties of the DRE, we propose a new globally convergent algorithmic scheme that greatly accelerates convergence. For instance, the new scheme can accommodate quasi-Newton directions such as (L-)BFGS. In fact, under mild assumptions, the algorithm is shown to converge superlinearly while performing exactly the same kind of computations as in ADMM, DRS and PRS. (Conference Room San Felipe)
17:40 - 18:15
Genaro Lopez: Modulus of regularity and rate of convergence for a Fejér monotone sequence ↓ Many problems in applied mathematics can be brought into the following format: let $(X,d)$ be a metric space and $F:X\to \mathbb{R}$ be a function; find a zero $z\in X$ of $F.$ This statement covers many equilibrium, fixed point and minimization problems. Numerical methods, e.g. based on suitable iterative techniques, usually yield sequences $(x_n)$ in $X$ of approximate zeros, i.e. $F(x_n)< 1/n.$ Based on extra assumptions (e.g. the compactness of $X,$ the Fejér monotonicity of $(x_n)$ and the continuity of $F$), one then shows that $(x_n)$ converges to an actual zero $z$ of $F.$ An obvious question then concerns the speed of the convergence of $(x_n)$ towards $z$ and whether there is an effective rate of convergence. Even though sometimes left implicit, the effectivity of iterative procedures in the case of unique zeros rests on the existence of an effective so-called modulus of uniqueness, see [Kohlenbach]. In this talk, we are concerned with a generalization of the concept of "modulus of uniqueness", called "modulus of regularity", which is applicable also in the nonunique case. While the concept of a modulus of regularity has been used in various special situations before (see e.g. [Anderson] and the literature cited there), we develop it here as a general tool towards a unified treatment of a number of concepts studied in convex optimization, such as metric subregularity, Hölder regularity, weak sharp minima, etc., which can be seen as instances of the concept of regularity w.r.t. $\text{zer}\;F$ for suitable choices of $F.$ This talk is based on a still-in-progress joint work with A. Nicolae and U. Kohlenbach. [Anderson] R.M. Anderson, "Almost" implies "Near", Trans. Amer. Math. Soc. 296 (1986), 229-237. [Kohlenbach] U. Kohlenbach, Effective moduli from ineffective uniqueness proofs. An unwinding of de La Vallée Poussin's proof for Chebycheff approximation, Ann. Pure Appl. Logic 64 (1993), 27-94. (Conference Room San Felipe)
19:00 - 21:00  Dinner (Restaurant Hotel Hacienda Los Laureles)
Tuesday, September 19  

07:30 - 09:00  Breakfast (Restaurant at your assigned hotel)
09:00 - 09:35
Patrick Combettes: Parallel, Block-Iterative, Primal-Dual Monotone Operator Splitting ↓ We propose new primal-dual decomposition algorithms for solving systems of inclusions involving sums of linearly composed maximally monotone operators. At each iteration, only a subset of the monotone operators needs to be processed, as opposed to all operators as in established methods. Deterministic strategies are used to select the blocks of operators activated at each iteration. In addition, asynchronous implementation is allowed. The first method provides weakly convergent primal and dual sequences under general conditions, while the second is a variant in which strong convergence is guaranteed without additional assumptions. The novelty of this class of algorithms will be discussed and comparisons with the state of the art will be performed. Joint work with J. Eckstein. (Conference Room San Felipe)
09:35 - 10:10
Jonathan Eckstein: Asynchronous Parallel Applications of Block-Iterative Splitting ↓ This talk discusses how the block-iterative monotone operator splitting scheme the author developed with P. Combettes permits asynchronous parallel implementation. It also presents a simplified form of the block-iterative splitting scheme and discusses this simplification's advantages and drawbacks. One application of this simplification is a method that resembles the multi-block ADMM but is fully asynchronous and converges without restrictive assumptions on either the problem instance or stepsize parameters. The talk will next describe problem formulation and parallel implementation strategies for block-iterative splitting, as well as applications in stochastic programming, electricity grid unit commitment, and possibly "big data" analysis. (Conference Room San Felipe)
10:10 - 10:45
Juan Enrique Martinez Legaz: Minimization of quadratic functions on convex sets without asymptotes ↓ The classical Frank and Wolfe theorem states that a quadratic function which is bounded below on a convex polyhedron $P$ attains its infimum on $P$. In this joint work with D. Noll and W. Sosa, we investigate whether more general classes of convex sets can be identified which have this Frank-and-Wolfe property. We show that the intrinsic characterizations of Frank-and-Wolfe sets hinge on asymptotic properties of these sets. (Conference Room San Felipe)
10:45 - 11:15  Coffee Break (Conference Room San Felipe)
11:15 - 11:50
Walaa Moursi: The Douglas-Rachford algorithm in the possibly inconsistent case ↓ The Douglas-Rachford algorithm is a very popular splitting technique for finding a zero of the sum of two maximally monotone operators. The behaviour of the algorithm remains mysterious in the general inconsistent case, i.e., when the sum problem has no zeros. However, more than a decade ago, it was shown that in the (possibly inconsistent) convex feasibility setting, the shadow sequence remains bounded and its weak cluster points solve a best approximation problem. In this talk, we advance the understanding of the inconsistent case significantly by presenting a complete proof of the full weak convergence in the convex feasibility setting. We also provide a linear rate of convergence and strong convergence in special cases. (Conference Room San Felipe)
11:50 - 12:25
Minh Dao: On the behavior of the Douglas-Rachford algorithm in possibly nonconvex settings ↓ The Douglas-Rachford algorithm is a popular iterative method for solving convex feasibility and optimization problems. It has also been successfully applied in various nonconvex settings, although rigorous theoretical justification is still far from complete. In this talk, we report our recent progress on the behavior of this algorithm in the context of possibly nonconvex feasibility problems. New results on the rate of convergence together with global and periodic behaviors will be presented. (Conference Room San Felipe)
12:25 - 13:00
Sorin-Mihai Grad: A forward-backward method for solving vector optimization problems ↓ We present an iterative proximal inertial forward-backward method with memory effects, based on recent advances in solving scalar convex optimization problems and monotone inclusions, for determining weakly efficient solutions to convex vector optimization problems that consist in vector-minimizing the sum of a differentiable vector function and a nonsmooth one, by making use of some adaptive linear scalarization techniques. During the talk, the difficulties encountered while formulating the algorithm and proving its convergence will be stressed, and the related (still unsolved) challenge of extending the celebrated FISTA method from scalar to vector optimization problems will be mentioned, too. The talk is based on joint work with Radu Ioan Boț. (Conference Room San Felipe)
13:00  13:35 
Yaoliang Yu: On Decomposing the Proximal Map ↓ The proximal map has played a significant role in convex analysis and various splitting algorithms. For "simple" functions this proximal map is available in closedform while in general it needs to be computed by iterative numerical algorithms hence being inefficient. Motivated by applications in machine learning, we study when the proximal map of a sum of functions decomposes into the composition of the (simple) proximal maps of the individual summands. We present a simple sufficient condition and we demonstrate its surprising effectiveness by unifying and extending several results in seemingly unrelated fields. We end our discussion with a few open directions. (Conference Room San Felipe) 
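A small worked instance of such a decomposition (a standard textbook example, not one taken from the talk): the prox of $h(x) = \lambda\|x\|_1 + \iota_{x \ge 0}(x)$ coincides with the nonnegative projection composed with the $\ell_1$ prox:

```python
import numpy as np

def prox_l1(v, lam):
    """Prox of lam*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proj_nonneg(v):
    """Prox of the indicator of the nonnegative orthant: projection."""
    return np.maximum(v, 0.0)

def prox_sum(v, lam):
    """Closed-form prox of h(x) = lam*||x||_1 + indicator(x >= 0)."""
    return np.maximum(v - lam, 0.0)

v = np.array([1.0, 0.1, -0.5, 0.4])
lam = 0.3
composed = proj_nonneg(prox_l1(v, lam))   # composition of the two proxes
direct = prox_sum(v, lam)                 # prox of the sum, computed directly
```

Here the decomposition holds exactly; the talk's sufficient condition characterizes when such identities can be expected in general.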
13:35 - 15:00  Lunch (Restaurant Hotel Hacienda Los Laureles)
15:00 - 15:35
Isao Yamada: Hierarchical Convex Optimization with Proximal Splitting Operators ↓ Proximal splitting algorithms can iteratively approximate an unspecified vector among the possibly infinitely many minimizers of a superposition of multiple nonsmooth convex functions. With elegant translations of the solution set, i.e., the set of all minimizers, into the fixed point sets of nonexpansive mappings, the hybrid steepest descent method allows the further strategic selection of a most desirable vector from the solution set, by minimizing an additional convex function over the solution set. In this talk, we introduce fixed point theoretic interpretations of a variety of proximal splitting algorithms and their enhancements by the hybrid steepest descent method, with applications to recent advanced statistical estimation problems. (Conference Room San Felipe)
15:35 - 16:10
Evgeni Nurminski: Sharp Penalty Mapping Approach to Approximate Solution of Monotone Variational Inequalities ↓ We consider a generalization of the exact penalty function approach to the solution of variational inequalities. Instead of the more common potential mappings associated with the gradient field of a penalty function, the oriented field of a sharp penalty mapping is considered. Linearly combined with the variational inequality operator, it directs the iteration method toward approximate feasibility and optimality, which is unavoidably only approximate, as the exact balance between these two acting forces cannot be attained beforehand. Nevertheless, the accuracy of the limit points can be controlled by the parameters of the scheme and can be made arbitrarily high; dynamic variants of this idea can even guarantee the exact solution of a monotone variational inequality. If one is content with finite accuracy, the notion of sharp penalty mappings gives additional freedom in the construction of iteration schemes for the approximate solution of variational inequalities with controllable accuracy. Due to some advantages of iteration methods, such as low memory requirements, data splitting, parallel execution, etc., this may be of interest for the solution of variational inequalities of high dimension, in particular for transportation equilibrium problems. Part of the talk is devoted to a new approach for studying the convergence properties of the iteration processes arising from the above. Typically, convergence proofs are based on Lyapunov-like statements about relaxation properties of related sequences of values of convergence indicators. There are several general nontrivial methods for establishing such properties, but the analysis of algorithms for obtaining approximate solutions of optimization and equilibrium problems excludes the very idea of such convergence, so new techniques for the validation of iteration methods were developed and will be discussed in this report. (Conference Room San Felipe)
16:10 - 16:30  Coffee Break (Conference Room San Felipe)
16:30 - 17:05
Stephen Simons: Quasidense multifunctions ↓ Quasidensity is a concept that can be applied to subsets of $E \times E^*$, where $E$ is a nonzero real Banach space. Every closed quasidense monotone set is maximally monotone, but there exist maximally monotone sets that are not quasidense. The graph of the subdifferential of a proper, convex, lower semicontinuous function on $E$ is quasidense. The graphs of certain subdifferentials of certain nonconvex functions are also quasidense. (This follows from joint work with Xianfu Wang.) The closed monotone quasidense sets have a number of very desirable properties, including a sum theorem and a parallel sum theorem, and so quasidensity satisfies the ideal calculus rules. We give five conditions equivalent to the statement that a closed monotone set be quasidense, but quasidensity seems to be the only concept of the six that extends easily to nonmonotone sets. There are also generalizations to general Banach spaces of the Brezis-Browder theorem on linear relations, but we will not discuss these in this talk. (Conference Room San Felipe)
17:05 - 17:40
Shawn Wang: A principle of finding the least norm solution for a sum of two maximally monotone operators ↓ The Douglas-Rachford splitting method plays an important role in optimization. We modify the method so that it can be used to find the least norm solution and the least norm primal-dual solution for a sum of two maximally monotone operators. Consequences for minimizing a sum of two convex functions will be discussed. (Conference Room San Felipe)
17:40 - 18:15
Radu Ioan Bot: A general double-proximal gradient algorithm for d.c. programming ↓ The possibilities of exploiting the special structure of d.c. programs, which consist of optimizing the difference of convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that either the convex or the concave part, or both, are evaluated by one of their subgradients. In this talk we propose an algorithm which allows the evaluation of both the concave and the convex part by their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part might be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a solution of the optimization problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka-Łojasiewicz property. In the last part, we apply the algorithm to an image processing model. The talk relies on a joint work with Sebastian Banert. (Conference Room San Felipe)
19:00 - 21:00  Dinner (Restaurant Hotel Hacienda Los Laureles)
Wednesday, September 20  

07:30 - 08:30  Breakfast (Restaurant at your assigned hotel)
08:30 - 19:00  Excursion (Oaxaca)
19:00 - 21:00  Dinner (Restaurant Hotel Hacienda Los Laureles)
Thursday, September 21  

07:30 - 09:00  Breakfast (Restaurant at your assigned hotel)
09:00 - 09:35
Veit Elser: The flow limit of reflect-reflect-relax ↓ In the reflect-reflect-average (RRA) iteration scheme, a point is averaged with the product of its reflections in two constraint sets to produce the next point. It was noticed in a recent study of the bit retrieval problem that the relaxed variant of RRA, called RRR, has superior performance in the limit of zero relaxation parameter. In this limit RRR defines a continuous flow (a system of ODEs), where the flow field is formed by the difference of a pair of points on the constraint sets. The flow limit can be implemented very efficiently (event-chain dynamics) when the flow field is piecewise constant, as it is in some NP-hard feasibility problems. I hope to have results for the Latin square completion problem in time for the workshop. (Conference Room San Felipe)
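A minimal numerical sketch of this flow limit, using two illustrative lines in the plane as constraint sets (my choice, not the talk's): writing the RRR update in the commonly cited form $x \leftarrow x + \beta\,(P_B(2P_A(x) - x) - P_A(x))$, the $\beta \to 0$ limit is the ODE $\dot x = P_B(2P_A(x) - x) - P_A(x)$, which forward Euler integrates as:

```python
import numpy as np

def p_a(x):
    """Projection onto line A = {x : x_2 = 0} (illustrative set)."""
    return np.array([x[0], 0.0])

def p_b(x):
    """Projection onto line B = {x : x_1 = x_2} (illustrative set)."""
    m = (x[0] + x[1]) / 2
    return np.array([m, m])

def rrr_flow(x, h=0.01, steps=3000):
    """Forward Euler on the flow field F(x) = P_B(2 P_A(x) - x) - P_A(x),
    i.e. the beta -> 0 limit of the RRR update x <- x + beta * F(x)."""
    for _ in range(steps):
        x = x + h * (p_b(2 * p_a(x) - x) - p_a(x))
    return x

# the flow is stationary exactly when P_A(x) lies in both sets;
# here A and B intersect only at the origin
x_end = rrr_flow(np.array([3.0, -2.0]))
```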
09:35 - 10:10
Samir Adly: On the proximity operator of the sum of two closed convex functions. Application to the sensitivity analysis of variational inequalities ↓ In this talk we provide an explicit decomposition of the proximity operator of the sum of two proper, lower semicontinuous and convex functions. For this purpose, we introduce a new operator, called the f-proximity operator, generalizing the classical notion. After providing some properties and characterizations, we discuss the relations between the f-proximity operator and the classical Douglas-Rachford operator. In particular, we provide a one-loop algorithm that allows us to compute this new operator numerically, and thus the proximity operator of the sum of two proper, lower semicontinuous and convex functions. Finally, we illustrate the usefulness of our main result in the context of sensitivity analysis of variational inequalities of the second kind in a Hilbert space. (Conference Room San Felipe) The talk is based on the following papers:

10:10 - 10:45
Dominik Noll: Robust optimization to control uncertain systems ↓ A frequent quest in control engineering practice is to design feedback regulators for dynamical systems featuring unknown parameters, or affected by other forms of uncertainty. Under the worst-case paradigm this typically leads to robust optimization problems, which are nonconvex and nonsmooth. On top, objectives may even be inaccessible to efficient computation, so that relaxation has to be used. The challenge is then to relax without introducing conservatism. We show how variational analysis helps to deal with this difficult problem. We combine local optimization methods tailored to lower-C1 functions, and others tailored to upper-C1 functions, with global optimization to obtain robustness certificates. The resulting method avoids conservatism, gives robustness certificates, and is fast enough for control engineering practice. (Conference Room San Felipe)
10:45 - 11:15  Coffee Break (Conference Room San Felipe)
11:15 - 11:50
Pontus Giselsson: Sharp Contraction Factors for the Douglas-Rachford Operator ↓ Recently, several local and global linear convergence rate results for Douglas-Rachford splitting have appeared in the literature. Many of these are derived under strong monotonicity, Lipschitz continuity, and/or cocoercivity assumptions, and most focus on the convex optimization setting in the dual Douglas-Rachford algorithm, i.e., the alternating direction method of multipliers. In the monotone inclusion setting for Douglas-Rachford splitting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. In this talk, we show that this rate is not sharp, and present a sharp contraction factor for the Douglas-Rachford operator under the stated assumptions. We also present, in many cases sharp, contraction factors for the Douglas-Rachford operator under different combinations of strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. If these stronger properties are split between the operators, our results rely on the introduced notions of negatively averaged operators and negatively linearly regular operators, meaning that the negative of the operator is averaged and linearly regular, respectively. (Conference Room San Felipe)
11:50 - 12:25
Reinier Diaz Millan: A projection algorithm for nonmonotone variational inequalities ↓ We introduce and study the convergence properties of a projection-type algorithm for solving the variational inequality problem for point-to-set operators. No monotonicity assumption is used in our analysis. The operator defining the problem is only assumed to be continuous in the point-to-set sense, i.e., inner- and outer-semicontinuous. Additionally, we assume nonemptiness of the so-called dual solution set. We prove that the whole sequence of iterates converges to a solution of the variational inequality. Moreover, we provide numerical experiments illustrating the behavior of our iterates. Through several examples, we provide a comparison with a recent similar algorithm. This talk is based on a joint work with R. Burachik. (Conference Room San Felipe)
12:25 - 13:00
Xiaoming Yuan: Partial error bound conditions and the linear convergence rate of ADMM ↓ In the literature, error bound conditions have been widely used for studying the linear convergence rates of various first-order algorithms, and the majority of the literature focuses on how to sufficiently ensure these error bound conditions, usually posing more assumptions on the model under discussion. We focus on the alternating direction method of multipliers (ADMM) and show that the known error bound conditions for studying ADMM's linear convergence rate can indeed be further weakened if the error bound is studied over the specific iterative sequence generated by ADMM. A so-called partial error bound condition, which is tailored for the specific ADMM iterative scheme and weaker than known error bound conditions in the literature, is thus proposed to derive the linear convergence of ADMM. We further show that this partial error bound condition theoretically justifies the difference observed when the two primal variables are updated in different orders in implementing ADMM, which had been noted empirically in the literature but for which no theory was known so far. (Conference Room San Felipe)
13:00  13:35 
Yalcin Kaya: Optimal Control with Minimum Total Variation ↓ We consider optimal control problems where the aim is to minimize the total variation in the control variables in addition to a general objective functional, resulting in a multi-objective optimal control problem. We study an illustrative convex problem, which appears in a simple form, and obtain asymptotic results for the challenging case when only the total variation is to be minimized. We also discuss problems which are in a more general form. (Conference Room San Felipe) 
13:35  15:00  Lunch (Restaurant Hotel Hacienda Los Laureles) 
15:00  15:35 
Adrian Lewis: Error bounds and convergence of proximal methods for composite minimization ↓ Minimizing a simple nonsmooth outer function composed with a smooth inner map offers a versatile framework for structured optimization. A unifying algorithmic idea solves easy subproblems involving the linearized inner map and a proximal penalty on the step. I sketch a typical such algorithm, ProxDescent, illustrating computational results and representative basic convergence theory. Although such simple methods may be slow (without second-order acceleration), eventual linear convergence is common. An intuitive explanation is a generic quadratic growth property, a condition equivalent to an "error bound" involving the algorithm's stepsize. The stepsize is therefore a natural termination criterion, an idea that extends to more general Taylor-like optimization models. (Conference Room San Felipe) Joint work with D. Drusvyatskiy (Washington), A. Ioffe (Technion), S.J. Wright (Wisconsin) 
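The subproblem described above, linearizing the inner map and adding a proximal penalty on the step, can be written out in closed form for the scalar composite $|c(x)|$: minimizing $|c + g\,d| + d^2/(2t)$ over $d$ gives $d = (\operatorname{soft}(c, t g^2) - c)/g$. The following is an illustrative prox-linear sketch under those assumptions, not the speaker's ProxDescent code:

```python
def soft(v, t):
    """Scalar soft-thresholding: prox of t * |.|."""
    if abs(v) <= t:
        return 0.0
    return v - t if v > 0 else v + t

def prox_linear_scalar(c, dc, x0, t=0.5, iters=30):
    """Prox-linear sketch for min |c(x)| over scalar x.
    Each step solves min_d |c(x) + c'(x) d| + d**2 / (2 t) exactly."""
    x = x0
    for _ in range(iters):
        cx, g = c(x), dc(x)
        u = soft(cx, t * g * g)   # optimal linearized residual c + g d
        x += (u - cx) / g         # proximal step d (assumes g != 0 here)
    return x

# Toy composite: minimize |x^2 - 2|; the nearby minimizer is sqrt(2).
sol = prox_linear_scalar(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
```

Near the solution the threshold is inactive, the step reduces to a Newton step on $c$, and the iteration terminates quickly, consistent with the stepsize-based stopping rule mentioned in the abstract.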
15:35  16:10 
Francisco Javier Aragon Artacho: Modifying the Douglas-Rachford algorithm to solve best approximation problems ↓ In this talk we present a new iterative projection method for finding the closest point in the intersection of convex sets to an arbitrary point in a Hilbert space. This method, termed AAMR for averaged alternating modified reflections, can be viewed as an adequate modification of the Douglas-Rachford method that yields a solution to the best approximation problem. Under a constraint qualification at the point of interest, we show strong convergence of the method. In fact, the so-called strong CHIP fully characterizes the convergence of AAMR for every point in the space. We show some promising numerical experiments where we compare the performance of AAMR against other projection methods for finding the closest point in the intersection of pairs of finite-dimensional subspaces. (Conference Room San Felipe) This talk is based on a joint work with Rubén Campoy. 
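As we understand the AAMR construction, the Douglas-Rachford reflections $2P - \mathrm{Id}$ are replaced by modified reflections $2\beta P - \mathrm{Id}$ with $\beta \in (0,1)$, the result is $\alpha$-averaged, and the scheme is run on the shifted sets $A - q$ and $B - q$ to recover $P_{A \cap B}(q)$. A small numerical sketch for two planes in $\mathbb{R}^3$ (where the strong CHIP holds automatically); parameter values are illustrative:

```python
import numpy as np

def aamr_best_approx(PA, PB, q, alpha=0.9, beta=0.7, iters=100):
    """AAMR-style sketch: approximate the projection of q onto A ∩ B.
    PA, PB project onto the *shifted* sets A - q and B - q."""
    x = np.zeros_like(q)
    for _ in range(iters):
        y = 2.0 * beta * PA(x) - x           # modified reflection w.r.t. A - q
        z = 2.0 * beta * PB(y) - y           # modified reflection w.r.t. B - q
        x = (1.0 - alpha) * x + alpha * z    # averaging step
    return q + PA(x)                         # shift the shadow point back

q = np.array([1.0, 2.0, 3.0])
# A = {z3 = 0}, B = {z2 = 0}, so A ∩ B is the x-axis; shifted projections:
PA = lambda x: np.array([x[0], x[1], -q[2]])
PB = lambda x: np.array([x[0], -q[1], x[2]])
sol = aamr_best_approx(PA, PB, q)  # closest point to q on the x-axis
```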
16:10  16:30  Coffee Break (Conference Room San Felipe) 
16:30  17:05 
Renata Sotirov: On Solving the Quadratic Shortest Path Problem ↓ The quadratic shortest path problem (QSPP) is the problem of finding a path in a directed graph
such that the sum of the weights of the arcs plus the sum of the interaction costs over all pairs of arcs
on the path is minimized. This problem is known to be NP-hard. (Conference Room San Felipe) In this talk we consider two approaches for solving the problem. Our first approach is based on deriving a bound for the problem by splitting the quadratic objective of the QSPP into a linearizable and a nonlinearizable part. The linearizable part can be solved efficiently and provides a lower bound for the original problem. Our second approach considers semidefinite programming relaxations for the QSPP, which we solve using the alternating direction method of multipliers. We also present numerical results on solving the problem to optimality by using the mentioned bounds within a branch-and-bound framework. 
17:05  17:40 
D. Russell Luke: ProxToolbox ↓ Any casual observer of preprint servers like arXiv will have noticed the weekly and sometimes daily appearance of
new first-order methods for structured optimization and (convex) relaxations for popular problems. Absent from this surge in
new algorithmic strategies is any standard testbed of real-world problems for trustworthy comparisons. The phase retrieval problem
is one example of this sometimes absurd disconnect between proposed algorithms and the problems these routines are meant to solve. The ProxToolbox
started as an effort to establish reproducibility of my numerical experiments, not only for students and other researchers, but for myself twenty years hence. It has since developed into what I hope to convince you is a platform for testing and comparing prox-based methods on data sets, both simulated and experimental, from some of the more popular applications of first-order methods. (Conference Room San Felipe) 
19:00  21:00  Dinner (Restaurant Hotel Hacienda Los Laureles) 
Friday, September 22  

07:30  09:00  Breakfast (Restaurant at your assigned hotel) 
09:00  09:35 
Max L.N. Gonçalves: Pointwise and ergodic convergence rates of a variable metric proximal ADMM ↓ Authors: Max L.N. Gonçalves, M. Marques Alves and Jefferson G. Melo (Conference Room San Felipe) In this talk, we discuss pointwise and ergodic convergence rates for a variable metric proximal alternating direction method of multipliers (VM-PADMM) for solving linearly constrained convex optimization problems. The VM-PADMM can be seen as a class of ADMM variants allowing the use of degenerate metrics (defined by non-invertible linear operators). We first propose and study non-asymptotic convergence rates of a variable metric hybrid proximal extragradient (VM-HPE) framework for solving monotone inclusions. Then, the convergence rates for the VM-PADMM are obtained essentially by showing that it falls within the latter framework. 
09:35  10:10 
Rafal Goebel: Pointwise Asymptotic Stability ↓ The talk presents some concepts and results from systems and control theory, focusing on convergence to and stability of a continuum of equilibria in a dynamical system. (Conference Room San Felipe) The well-studied and well-understood asymptotic stability of a compact set requires Lyapunov stability of the set, i.e., that solutions starting close to the set remain close to it, and that every solution converge to the set, in terms of distance. Pointwise asymptotic stability of a set of equilibria requires Lyapunov stability of each equilibrium, and that every solution converge to one of the equilibria. This property is present, for example, in continuous-time steepest descent and convergent saddle-point dynamics, in optimization algorithms generating convergent Fejér monotone sequences, and in many consensus algorithms for multi-agent systems. The talk will present some background on asymptotic stability and then discuss necessary and sufficient conditions for pointwise asymptotic stability in terms of set-valued Lyapunov functions; robustness of this property to perturbations; and how the property can be achieved in a control system by optimal control. 
10:10  10:45 
Regina Burachik: An approach for the convex feasibility problem via Monotropic Programming ↓ We use recent zero-duality-gap results arising from the Monotropic Programming problem to analyze the consistency of the convex feasibility problem in Hilbert spaces. We characterize consistency in terms of the lower semicontinuity of the infimal convolution of the associated support functions. (Conference Room San Felipe) Based on joint work with Victoria Martín-Márquez. 
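For reference, these are the standard definitions behind the characterization above (not specific to the talk): the support function of a set $A \subseteq \mathcal{H}$ and the infimal convolution of two support functions are

```latex
\sigma_A(x^*) := \sup_{a \in A} \, \langle x^*, a \rangle, \qquad
(\sigma_A \,\Box\, \sigma_B)(x^*) := \inf_{u \in \mathcal{H}} \left\{ \sigma_A(u) + \sigma_B(x^* - u) \right\}.
```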
10:45  11:15  Coffee Break (Conference Room San Felipe) 
11:15  11:50 
Jefferson Melo: Improved pointwise iteration-complexity of a regularized ADMM ↓ Co-authored by Max L.N. Gonçalves and Renato D.C. Monteiro (Conference Room San Felipe) In this talk, we present a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. The pointwise iteration-complexity of the new variant is better than the corresponding one for the standard ADMM and, up to a logarithmic term, is identical to the ergodic iteration-complexity of the latter method. We discuss how this regularized ADMM can be seen as an instance of a regularized hybrid proximal extragradient framework whose error condition at each iteration includes both a relative error and a summable error. 
11:50  12:25 
Andrzej Cegielski: Regular sequences of quasi-nonexpansive operators and their applications ↓ We present a systematic study of regular sequences of quasi-nonexpansive operators in Hilbert space. We are interested, in particular, in weakly, boundedly and linearly regular sequences of operators. We show that these types of regularity are preserved under relaxations, convex combinations and products of operators. Moreover, in this connection, we show that weak, bounded and linear regularity lead to weak, strong and linear convergence, respectively, of various iterative methods. This applies, in particular, to block iterative and string averaging projection methods, which, in principle, are based on the above-mentioned algebraic operations applied to projections. Finally, we show an application of regular sequences of operators to variational inequality problems. (Conference Room San Felipe) 
12:25  13:00  Open Problems Session (tentatively) (Conference Room San Felipe) 
13:00  15:00  Lunch (Restaurant Hotel Hacienda Los Laureles) 