Schedule for: 21w5167 - Optimization under Uncertainty: Learning and Decision Making (Online)

Beginning on Monday, February 8 and ending on Friday, February 12, 2021

All times in Banff, Alberta time, MST (UTC-7).

Monday, February 8
08:45 - 09:00 Introduction and Welcome by BIRS Staff
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(Online)
09:00 - 09:45 Darinka Dentcheva: Subregular Recourse in Multistage Stochastic Optimization
We discuss nonlinear multistage stochastic optimization problems in spaces of integrable functions. The problems may include nonlinear dynamics and general objective functionals, with dynamic risk measures as a particular case. We present an analysis of the causal operators describing the dynamics of the system and of the Clarke subdifferential of a penalty function involving such operators. We introduce the concept of subregular recourse in nonlinear multistage stochastic optimization and establish subregularity of the resulting systems in two formulations: with built-in nonanticipativity and with explicit nonanticipativity constraints (a generic sketch of the two formulations follows this entry). Optimality conditions for both formulations and their relations will be presented. This is joint work with Andrzej Ruszczynski, Rutgers University.
(Online)
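For readers less familiar with the terminology in the abstract, the following generic sketch (standard notation, not taken from the talk) contrasts the two ways nonanticipativity can enter a T-stage problem with uncertain data \xi_1,\dots,\xi_T and filtration \mathcal{F}_t generated by the observations up to stage t.

```latex
% Built-in nonanticipativity: each decision is a measurable function of the
% observed history \xi_{[t]} = (\xi_1,\dots,\xi_t).
\min_{x_1(\cdot),\dots,x_T(\cdot)} \;
  \mathbb{E}\Bigl[\,\sum_{t=1}^{T} f_t\bigl(x_t(\xi_{[t]}),\,\xi_t\bigr)\Bigr]

% Explicit nonanticipativity: optimize over general (scenario-dependent)
% decisions and enforce measurability through an explicit constraint.
\min_{x_1,\dots,x_T} \;
  \mathbb{E}\Bigl[\,\sum_{t=1}^{T} f_t(x_t,\xi_t)\Bigr]
  \quad\text{s.t.}\quad
  x_t = \mathbb{E}\bigl[x_t \mid \mathcal{F}_t\bigr], \qquad t=1,\dots,T .
```

Roughly speaking, in the explicit form the measurability requirement becomes a constraint that can be dualized, which is why regularity properties such as subregularity of the constraint system matter when deriving optimality conditions.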
09:45 - 10:00 Discussion / Break (Online)
10:00 - 10:45 Güzin Bayraksan: Data-driven sample average approximation with covariate information (Online)
10:45 - 11:00 Discussion / Break (Online)
11:00 - 11:45 Drew Kouri: A primal-dual algorithm for large-scale risk minimization
Many science and engineering applications necessitate the optimization of systems described by partial differential equations (PDEs) with uncertain inputs, including noisy physical parameters, unknown boundary or initial conditions, and unverifiable modeling assumptions. One can formulate such problems as risk-averse optimization problems in Banach space, which, upon discretization, become enormous risk-averse stochastic programs. For many popular risk models, including the coherent risk measures, the resulting risk-averse objective function is not differentiable. This lack of differentiability complicates the numerical approximation of the objective function as well as the numerical solution of the optimization problem. To address these challenges, I present a general primal-dual algorithm for solving large-scale nonsmooth risk-averse optimization problems. The algorithm is motivated by epigraphical regularization of risk measures and is closely related to the classical method of multipliers. It solves a sequence of smooth optimization problems using derivative-based methods and is provably convergent even when the subproblems are solved inexactly (an illustrative sketch of the epigraphical-smoothing idea follows this entry). I conclude my presentation with multiple PDE-constrained examples that demonstrate the efficiency of this method.
(Online)
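As a rough illustration of the epigraphical-regularization idea mentioned in the abstract (and only of that idea; this is not the speaker's primal-dual algorithm, and the loss, samples, and smoothing schedule below are hypothetical), the Python sketch below minimizes a sample-average CVaR via the Rockafellar-Uryasev epigraphical formulation, smoothing the plus-function so that a derivative-based solver applies and tightening the smoothing over a sequence of subproblems.

```python
# Minimal sketch (assumed data: quadratic loss, Gaussian samples, CVaR risk
# measure); it illustrates epigraphical reformulation plus smoothing, not the
# primal-dual method of the talk.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.normal(size=(1000, 2))   # hypothetical samples of the uncertain input
alpha = 0.9                       # CVaR confidence level

def loss(x, samples):
    # hypothetical uncertain loss: quadratic tracking cost perturbed by xi
    return np.sum((x - samples) ** 2, axis=1)

def smoothed_cvar(z, eps):
    # z = [x, t]; Rockafellar-Uryasev epigraphical objective
    #   t + E[(loss - t)_+] / (1 - alpha),
    # with (s)_+ replaced by the softplus  eps * log(1 + exp(s / eps)).
    x, t = z[:-1], z[-1]
    excess = eps * np.logaddexp(0.0, (loss(x, xi) - t) / eps)
    return t + np.mean(excess) / (1.0 - alpha)

# Solve a sequence of smooth subproblems with a decreasing smoothing
# parameter, warm-starting each solve at the previous solution.
z = np.zeros(3)                   # two decision variables plus the VaR level t
for eps in (1.0, 0.1, 0.01):
    z = minimize(smoothed_cvar, z, args=(eps,), method="BFGS").x
print("decision:", z[:-1], "VaR level t:", z[-1])
```

In this toy setting, warm-starting each smooth subproblem at the previous solution keeps the homotopy over the smoothing parameter cheap while driving the smoothed objective toward the nonsmooth CVaR objective.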
11:45 - 12:00 Discussion / Break (Online)
12:00 - 12:45 Youssef Marzouk: Transport methods for likelihood-free inference and data assimilation (Online)
12:45 - 13:00 Discussion / Break (Online)
Tuesday, February 9
09:00 - 09:45 Georg Stadler: Optimal control of PDEs under uncertainty with joint chance state constraints (Online)
09:45 - 10:00 Discussion / Break (Online)
10:00 - 10:45 Peng Chen: Taylor approximation for PDE and chance constrained optimization under uncertainty (Online)
10:45 - 11:00 Discussion / Break (Online)
11:00 - 12:00 Discussion Computational Methods
The panel will discuss current developments and challenges in computational methods for optimization under uncertainty, including methods used in machine learning, stochastic programming, PDE-constrained optimization under uncertainty, and inverse problems in uncertainty quantification. The panel will address which methods are best suited to a given field and how to treat uncertainty in practice, and will offer insight into the interpretation of solutions. The audience is highly encouraged to pose additional questions and discuss directly with the panelists. Panelists: Darinka Dentcheva, Güzin Bayraksan, Drew P. Kouri.
(Online)
Wednesday, February 10
09:00 - 09:45 Bamdad Hosseini: Conditional Sampling with Monotone GANs: Modifying Generative Models to Solve Inverse Problems (Online)
09:45 - 10:00 Discussion / Break (Online)
10:00 - 10:45 Eldad Haber (Online)
10:45 - 10:55 Discussion / Break (Online)
10:55 - 11:00 Group Photo Session (Online)
11:00 - 12:00 Discussion PDEs and machine learning
The panel will discuss current developments at the interface of machine learning and partial differential equations (PDEs). Recent years have seen increased activity in this area, including new machine learning approaches that use PDE techniques and machine learning methods for solving PDE problems. The panel will discuss research potential, possible pitfalls, and the impact of these activities on education and training. The panel will also encourage questions from the audience.
(Online)
Thursday, February 11
09:00 - 09:45 Bodhisattva Sen: Nonparametric maximum likelihood estimation in heteroscedastic mixture models: density estimation, denoising and deconvolution (Online)
09:45 - 10:00 Discussion / Break (Online)
10:00 - 10:45 Masoumeh Dashti: Posterior consistency in Bayesian inference with exponential priors (Online)
10:45 - 11:00 Discussion / Break (Online)
11:00 - 12:00 Discussion Statistical Learning in the Context of Shape Constraints, Model Information and Rare Events
The panel will discuss the role of non-data information about a random phenomenon and how such information can improve predictions as well as enable extrapolation beyond the range of available data. Through concrete examples, the panel will identify situations in which non-data information can easily be combined with data in statistical (learning) models. The panel will also discuss emerging challenges and take questions from the audience. Panelists: Omar Ghattas, Youssef M. Marzouk, Bodhi Sen.
(Online)
Friday, February 12
09:00 - 09:45 Philip Thompson: Robust Regression (Online)
09:45 - 10:00 Discussion / Break (Online)
10:00 - 10:45 Bart van Bloemen Waanders: Hyper-differential sensitivity analysis for control under uncertainty of aerospace vehicles (Online)
10:45 - 11:00 Discussion / Break (Online)
11:00 - 11:45 Elizabeth Newman: Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection (Online)
11:45 - 12:00 Discussion / Break (Online)