Reduced Basis Methods: A Tutorial Introduction for Stationary Coercive Parametrized PDEs
RB Summerschool, TU München, 16.-19. Sept. 2013
Bernard Haasdonk, Universität Stuttgart, Institut für Angewandte Analysis und Numerische Simulation, [email protected]

Overview Part 1
- Introduction: motivation of model reduction; basic idea and notions in RB methods
- Model Problem: thermal block, solution structure
- Abstract Problem: uniform coercivity, continuity, parameter separability; full problem, solution manifold, examples, regularity
- RB Problem: "primal" formulation, error bounds, effectivities
- Experiments

Overview Part 2
- Offline/Online Decomposition; min-theta approach
- Basis Generation: Lagrangian basis, greedy, convergence rates, orthonormalization, adaptivity
- Primal-Dual RB Approach: RB problem, error estimators, output correction, improved error estimation
- Conclusion/Extensions

Introduction

Motivation of Model Reduction
Today: high-resolution simulation schemes
- multitude of applications
- high-dimensional models (PDEs, ODEs)
- development of accurate schemes: adaptive grids, higher-order schemes, parallelization and HPC
- high runtime and hardware requirements
Goal: reduced models
- smaller model dimension, reduced requirements
- similar precision, error control
- automatic reduction, not "manual"
- realization of complex simulation scenarios: multi-query, real-time, "cool" computing platforms
Motivation of Model Reduction: "Real-Time" Scenarios
- real-time control of processes
- graphical user interfaces, man-machine interaction
- interactive design, parameter exploration
"Cool" Computing Platforms
- simple industrial controllers
- web applications / applets
- ubiquitous computing: mobile phones, smart devices

Motivation of Model Reduction: "Multi-Query", High-Level Simulation Scenarios
- parameter studies, statistical investigations
- design, parameter optimization, inverse problems
- multiscale settings: reduced models as microsolvers; in a homogenized problem, the cell problems (CP) are replaced by reduced problems (RP)
- stochastic PDEs: Monte Carlo with reduced models,
  ū(x) := ∫_Ω u(x, ω) p(ω) dω, approximated by ū_n(x) = (1/n)(RP + RP + ... + RP)

Motivation of Model Reduction: Offline/Online Computational Procedure
- accept a computationally intensive "offline phase" (reduced model generation, etc.)
- amortization of the runtime cost in view of multiple online phases, i.e. simulations with the reduced model
[figure: timeline comparing multi-query with the high-dimensional model, u(µ1), u(µ2), u(µ3), against an offline phase followed by cheap online evaluations uN(µ1), ..., uN(µk)]

Motivation of RB-Methods
Parametric problems:
- parameter domain P ⊂ R^p, parameter vector µ ∈ P
- solution u(µ) ∈ X, a Hilbert space (HS)
- manifold of solutions M "parametrized" by µ ∈ P
- low-dimensional subspace XN ⊂ X (the "RB space")
- approximation uN(µ) ∈ XN and error bound ∆u(µ)
[figure: solution manifold M in X with snapshots u(µ1), u(µ2), the RB space XN, the approximation uN(µ) and the error bound ∆u(µ) around u(µ)]

Motivation of RB-Methods: Simple Example
µ ∈ P = [0, 1]. Find u(µ) ∈ C²([0, 1]) (not a HS) satisfying
(1 + µ) u″ = 1 in (0, 1), u(0) = u(1) = 1.
"Snapshots":
u0 := u(µ = 0) = ½x² − ½x + 1
u1 := u(µ = 1) = ¼x² − ¼x + 1
XN = span{u0, u1}
Reduced solution uN(µ) = α0(µ) u0 + α1(µ) u1 with
α0(µ) = 2/(µ+1) − 1, α1(µ) = 2 − 2/(µ+1).
Exact approximation: uN(µ) = u(µ) for all µ ∈ P, so M is contained in a 2-dimensional subspace (more precisely, M is the convex hull of u0, u1).

Motivation of RB-Methods: Questions that need to be addressed
- How to construct good spaces XN? Can such "procedures" be provably good?
- How to obtain the approximation uN(µ) ∈ XN? Can we do better than interpolation?
- Efficiency: how can uN(µ) be computed rapidly? Stability with growing N?
- Can we bound the error? Are the bounds "rigorous", i.e. provable upper bounds? Do the error bounds largely overestimate the error, or can the "effectivity" be bounded?
- For which problem classes can low-dimensional approximation be expected to be successful?

General References on the Topic
- Electronic book: [PR06] A.T. Patera and G. Rozza: "Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations", Version 1.0, Copyright MIT 2006, to appear in (tentative rubric) MIT Pappalardo Graduate Monographs in Mechanical Engineering.
- RB lecture notes: [Ha11] B. Haasdonk: Reduzierte-Basis-Methoden, Vorlesungsskript SS 2011, IANS-Report 4/11, University of Stuttgart, 2011.
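The simple boundary-value example above can be checked numerically: the exact solution is u(µ)(x) = (x² − x)/(2(1+µ)) + 1, and the stated coefficients α0(µ), α1(µ) reproduce it exactly from the two snapshots. A minimal sketch (the grid and tolerance are illustrative choices, not from the slides):

```python
import numpy as np

def u_exact(mu, x):
    # Exact solution of (1 + mu) u'' = 1, u(0) = u(1) = 1
    return (x**2 - x) / (2.0 * (1.0 + mu)) + 1.0

x = np.linspace(0.0, 1.0, 101)
u0 = u_exact(0.0, x)   # snapshot for mu = 0
u1 = u_exact(1.0, x)   # snapshot for mu = 1

for mu in [0.25, 0.5, 0.8]:
    a0 = 2.0 / (mu + 1.0) - 1.0
    a1 = 2.0 - 2.0 / (mu + 1.0)
    uN = a0 * u0 + a1 * u1                   # reduced solution in span{u0, u1}
    # exact reproduction: the manifold M lies in the 2-dim. snapshot space
    assert np.allclose(uN, u_exact(mu, x))
```

Note that α0(µ) + α1(µ) = 1 with both coefficients in [0, 1] for µ ∈ [0, 1], which is exactly the convex-hull statement from the slide.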
Websites:
- augustine.mit.edu: MIT website
- www.morepas.org: German RB activities
- www.modelreduction.org: German MOR Wiki
Software:
- rbMIT: http://augustine.mit.edu
- RBmatlab, Dune-rb: www.morepas.org
Course material: www.haasdonk.de/data/summerschool2013

Software
RBmatlab
- MATLAB discretization and RB library
- 2D grids, adaptive n-D grids
- linear and nonlinear evolution problems
- FV, FEM, LDG discretizations, RB algorithms
- download & documentation: www.morepas.org
DUNE-RB
- detailed parametrized models, C++ template library
- extension of Dune-FEM (www.dune-project.org)
- discrete function lists, parametrized operators
- interface to RBmatlab

Model Problem: Thermal Block

Model Problem (slight modification of [PR06])
- heat conduction in a solid block
- computational domain Ω = (0, 1)², partitioned into B1 horizontal and B2 vertical subblocks: Ω̄ = ∪_{i=1}^p Ω̄i, p := B1 · B2
[figure: example with B1 = B2 = 2, subdomains Ω1, ..., Ω4 and boundary parts ΓD, ΓN0, ΓN1]
- parameters: heat conductivity coefficients µ = (µi)_{i=1}^p ∈ [µmin, µmax]^p, with µmin = 1/µmax ∈ (0, 1)
- conductivity field k(x; µ) = Σ_i µi χ_Ωi(x)
- governing PDE:
  −∇ · (k(µ)∇u) = 0 in Ω
  u = 0 on ΓD
  k(µ)∇u · n = i on ΓNi, i = 0, 1

Model Problem: Weak Form
- solution space X = H¹_ΓD(Ω) := { v ∈ H¹(Ω) | v|_ΓD = 0 }
- weak form: find u(µ) ∈ X such that
  ∫_Ω k(µ)∇u(µ) · ∇v = ∫_{ΓN1} v, v ∈ X,
  with left-hand side a(u(µ), v; µ) and right-hand side f(v; µ)
- possible output of interest: s(µ) := ∫_{ΓN,1} u(x; µ) dx = l(u(µ); µ)
- compactly written by means of a bilinear form a(·,·;µ) and linear forms f(·;µ), l(·;µ) ∈ X′
Model Problem: Solution Variety
Simple solution structure: if B1 = 1 (or B1 ≥ 1 and all µi in each row identical) the solution exhibits horizontal symmetry, is piecewise linear, and can be exactly represented in a finite-dimensional space, although the full problem is infinite-dimensional.
[figure: solutions for µ1 = µ2 = µ3 = µ4 and for µ1 = µ2 ≠ µ3 = µ4]
Exercise 1: Find and prove an explicit solution representation in a B2-dimensional linear space.

Complex solution structure: if B1 > 1 the solution is in general nonsymmetric, with complexity increasing with B1, B2.
[figure: solution for µ1 ≠ µ2 ≠ µ3 ≠ µ4]
Parameter redundancy: the manifold is invariant with respect to scaling of the parameter vector: µ̄ := cµ ∈ P, c > 0 ⇒ u(µ̄) = (1/c) u(µ).
Important insight: more/many parameters do not necessarily imply a complex manifold structure.
Exercise 2: Provide a different parametrization of k(x; µ) in the thermal block such that the model has an arbitrarily large number p > B1 · B2 of parameters, but only a 1-dimensional solution manifold.

Abstract Problem

Notation
- X real, separable Hilbert space, scalar product ⟨·,·⟩, norm ‖v‖ := √⟨v, v⟩, v ∈ X
- dual space X′ with norm ‖g‖_{X′} := sup_{v∈X\{0}} g(v)/‖v‖, g ∈ X′
- for g ∈ X′ denote the Riesz representer by v_g ∈ X:
  g(v) = ⟨v_g, v⟩, v ∈ X (representer property), ‖g‖_{X′} = ‖v_g‖ (isometry of the Riesz map)
- parameter domain P ⊂ R^p
- bilinear form a(·,·;µ): X × X → R and linear forms f(·;µ), l(·;µ) ∈ X′, µ ∈ P

(A1): Uniform Boundedness and Coercivity of a(·,·;µ)
a(·,·;µ) is assumed to be coercive, i.e.
α(µ) := inf_{v∈X\{0}} a(v, v; µ)/‖v‖² > 0,
and the coercivity is uniform w.r.t. µ: there exists ᾱ with α(µ) ≥ ᾱ > 0, µ ∈ P.
a(·,·;µ) is assumed to be bounded (continuous), i.e.
γ(µ) := sup_{u,v∈X\{0}} a(u, v; µ)/(‖u‖ ‖v‖) < ∞,
and the boundedness is uniform w.r.t. µ: there exists γ̄ such that γ(µ) ≤ γ̄ < ∞, µ ∈ P.
Remark: a(·,·;µ) may possibly be nonsymmetric.

(A2): Uniform Boundedness of f(·;µ), l(·;µ)
f(·;µ), l(·;µ) are assumed to be uniformly bounded w.r.t. µ:
‖f(·;µ)‖_{X′} ≤ γ̄_f, ‖l(·;µ)‖_{X′} ≤ γ̄_l, µ ∈ P, for suitable constants γ̄_f, γ̄_l.
Remark: possible discontinuity w.r.t. µ. Example: X = R, P := [0, 2], l(x; µ) := x · χ_{[1,2]}(µ). Then l(·;µ) is linear and bounded, hence a continuous linear functional with respect to x, but it is discontinuous with respect to µ.

(A3): Parameter Separability
We assume the forms a, f, l to be parameter-separable:
a(u, v; µ) = Σ_{q=1}^{Qa} θ_q^a(µ) a_q(u, v), u, v ∈ X, µ ∈ P,
for suitable bilinear, continuous components a_q: X × X → R and coefficient functions θ_q^a: P → R, q = 1, ..., Qa, and similar expansions for f, l with linear functionals f_q, l_q, coefficient functions θ_q^f, θ_q^l and expansion sizes Qf, Ql.
Remark: Qa, Qf, Ql should preferably be small, as they will enter the online computational complexity. This property is also referred to as "affine" parameter dependence (which is slightly misleading).
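Parameter separability is what makes fast assembly possible: all µ-dependence sits in the scalar coefficients θ_q(µ), while the components are parameter-independent. A small illustrative sketch with random SPD stand-in components (in the thermal block the A_q would be stiffness matrices restricted to the subblocks Ωi, with θ_q^a(µ) = µ_q):

```python
import numpy as np

rng = np.random.default_rng(0)
Qa, n = 4, 6
# Illustrative SPD components A_q (stand-ins, not the actual thermal block matrices)
A_comp = []
for q in range(Qa):
    M = rng.standard_normal((n, n))
    A_comp.append(M @ M.T + n * np.eye(n))   # positive definite component

def theta(mu):
    # thermal-block-like coefficients: theta_q(mu) = mu_q
    return np.asarray(mu)

def assemble(mu):
    # parameter-separable assembly: A(mu) = sum_q theta_q(mu) * A_q
    return sum(t * Aq for t, Aq in zip(theta(mu), A_comp))

mu = [0.1, 1.0, 0.5, 2.0]
A = assemble(mu)
assert np.allclose(A, A.T)                  # symmetry is inherited
assert np.all(np.linalg.eigvalsh(A) > 0)    # positive coefficients keep coercivity
```

This mirrors the sufficient criterion below: strictly positive coefficients and positive semidefinite components preserve coercivity.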
Sufficient Criteria for (A1), (A2)
Assume parameter separability (A3). Then:
- If the coefficient functions θ_q^a, θ_q^f, θ_q^l are bounded, the forms a, f, l are uniformly bounded with respect to µ, e.g.
  |θ_q^f(µ)| ≤ C ⇒ ‖f(·;µ)‖_{X′} ≤ Σ_{q=1}^{Qf} C ‖f_q‖_{X′} =: γ̄_f.
- If the coefficient functions are strictly positive, θ_q^a(µ) ≥ θ̄ > 0 for all µ, q, the components a_q are positive semidefinite, a_q(v, v) ≥ 0 for all v, q, and a(·,·;µ̄) is coercive for at least one µ̄ ∈ P, then a is uniformly coercive w.r.t. µ.
Exercise 3: Prove the sufficient criteria for uniform coercivity.

Definition: Full Problem (P)
For µ ∈ P find a solution u(µ) ∈ X and output s(µ) ∈ R such that
a(u(µ), v; µ) = f(v; µ) ∀v ∈ X, s(µ) = l(u(µ); µ).
Well-posedness (existence, uniqueness & boundedness): assuming (A1), (A2), a unique solution of (P) exists and is uniformly bounded:
‖u(µ)‖ ≤ ‖f(·;µ)‖_{X′}/α(µ) ≤ γ̄_f/ᾱ, |s(µ)| ≤ ‖l(·;µ)‖_{X′} ‖u(µ)‖ ≤ γ̄_l γ̄_f/ᾱ.
Proof: Lax-Milgram & uniform boundedness/coercivity.

(P) can represent both
- the analytical problem, infinite-dimensional (interesting from the approximation-theoretic viewpoint, manifold properties),
- the discretized problem, high-dimensional (important for the practical application of RB methods), also denoted "detailed problem" and "detailed solution".
Examples of instantiations of (P): thermal block.
Exercise 4: Prove that the bilinear and linear forms of the thermal block model are separable parametric, uniformly bounded and uniformly coercive. In particular, provide the corresponding constants, coefficients, components.

Further examples of instantiations of (P):
Parametric matrix equation: for µ ∈ P find u(µ) ∈ R^H with
A(µ)u(µ) = b(µ), A(µ) ∈ R^{H×H}, b(µ) ∈ R^H.
Corresponds to (P) by choosing X := R^H, a(u, v; µ) := uᵀA(µ)v, f(v) := b(µ)ᵀv, u, v ∈ R^H.
Forms by given manifold: choose X and an arbitrarily complicated (discontinuous, nonsmooth) u: P → X. Then u(µ) is the solution of (P) with
a(v, v′; µ) := ⟨v, v′⟩, f(v) := ⟨u(µ), v⟩, v, v′ ∈ X.
Note: (A1)-(A3) are not addressed here and the output is ignored. (P) can be used for MOR of finite-dimensional matrix equations, and (P) may have arbitrarily complex solutions.

Solution Manifold
M := { u(µ) | u(µ) solves (P), µ ∈ P } ⊂ X.
Finite-dimensional manifold for Qa = 1. Exercise 5: if a consists of a single component, Qa = 1, show that M is contained in an (at most) Qf-dimensional linear space.
Boundedness of the manifold: M ⊆ B_{γ̄_f/ᾱ}(0), a consequence of the well-posedness result.

Lipschitz-Continuity (extension of [EPR10])
Assume that (A1), (A2), (A3) hold and additionally the coefficient functions are Lipschitz-continuous, |θ_q^a(µ) − θ_q^a(µ′)| ≤ L‖µ − µ′‖ etc. Then the forms a, f, l are Lipschitz-continuous w.r.t. µ,
|a(u, v; µ) − a(u, v; µ′)| ≤ L_a ‖u‖ ‖v‖ ‖µ − µ′‖, L_a = L Σ_q γ_{a_q},
and the solutions u and s are Lipschitz-continuous with respect to µ:
‖u(µ) − u(µ′)‖ ≤ L_u ‖µ − µ′‖, L_u = L_f/ᾱ + γ̄_f L_a/ᾱ²,
|s(µ) − s(µ′)| ≤ L_s ‖µ − µ′‖, L_s = L_l γ̄_f/ᾱ + γ̄_l L_u.
Exercise 6: Prove the Lipschitz constants for u and s.

Differentiability (cf. [PR06])
Assume that (A1), (A2), (A3) hold and additionally the coefficient functions are differentiable w.r.t. µ. Then the solution u: P → X is differentiable with respect to µ, and the partial derivatives ∂_{µi} u(µ) ∈ X are the solution of
(*) a(∂_{µi} u(µ), v; µ) = f̃_i(v; u(µ), µ), v ∈ X,
with u-dependent right-hand side
f̃_i(·; u(µ), µ) := Σ_{q=1}^{Qf} (∂_{µi} θ_q^f(µ)) f_q(·) − Σ_{q=1}^{Qa} (∂_{µi} θ_q^a(µ)) a_q(u(µ), ·) ∈ X′.
Proof (sketch): the solution of (*) uniquely exists by Lax-Milgram and satisfies the conditions for being the derivative of u.

Remarks
- The partial derivatives are also denoted "sensitivity derivatives" and the variational problem (*) the "sensitivity PDE".
- Similar statements are possible for higher-order derivatives: the right-hand side of the sensitivity PDE then depends on lower-order derivatives.
- Sensitivity derivatives are useful for parameter optimization: an RB model for the sensitivity PDEs yields gradient information [DH13, DH13b].
- The smoother the coefficient functions, the smoother the solution manifold. With increasing smoothness of the manifold, we may hope and expect better approximability by an RB approach.

RB Method

Reduced Basis / RB-Space
Let parameter samples SN = {µ⁽¹⁾, ..., µ⁽ᴺ⁾} ⊂ P be given. Define the "Lagrangian" RB space and basis
XN := span{u(µ⁽ⁱ⁾)}_{i=1}^N = span ΦN, ΦN := {φ1, ..., φN}.
Remarks:
- The RB may be identical to the snapshots, or orthogonalized.
- Other MOR techniques: an RB space may also be chosen completely differently/arbitrarily, as long as it is an N-dimensional subspace: Proper Orthogonal Decomposition (POD) [Vo13], Balanced Truncation, Krylov subspaces, etc. [An05].
- For now: simple choice of samples, random or equidistant, assuming linear independence of the snapshots. Later: more clever choice via a-priori analysis / greedy.
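Orthogonalization of the snapshots, as mentioned above, must be done with respect to the X-inner product, which in the discrete setting is represented by an SPD matrix K. A minimal Gram-Schmidt sketch on coefficient vectors (the matrix K and the snapshots are illustrative stand-ins):

```python
import numpy as np

def gram_schmidt(S, K):
    """Orthonormalize the columns of S with respect to <u, v> = u^T K v."""
    Phi = []
    for s in S.T:
        v = s.copy()
        for p in Phi:
            v -= (p @ K @ v) * p           # subtract K-orthogonal projections
        nrm = np.sqrt(v @ K @ v)
        if nrm > 1e-10:                     # skip (nearly) dependent snapshots
            Phi.append(v / nrm)
    return np.column_stack(Phi)

rng = np.random.default_rng(1)
K = np.eye(5) + 0.1 * np.ones((5, 5))       # illustrative SPD inner-product matrix
S = rng.standard_normal((5, 3))             # three snapshot coefficient vectors
Phi = gram_schmidt(S, K)
# K-orthonormality of the resulting basis
assert np.allclose(Phi.T @ K @ Phi, np.eye(3), atol=1e-8)
```

Note that span(Phi) = span(S), so the reduced solution is unchanged; only the algebraic conditioning improves (see the stability result below in this section).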
Definition: Reduced Problem (PN)
For µ ∈ P find a solution uN(µ) ∈ XN and output sN(µ) ∈ R such that
a(uN(µ), v; µ) = f(v; µ) ∀v ∈ XN, sN(µ) = l(uN(µ); µ).
Remarks:
- This is called "Galerkin" projection, as ansatz and test space are identical (in contrast to the "Petrov-Galerkin" projection required for non-coercive problems).
- Improved output estimation is possible by a primal-dual technique: see a later section.
- "Galerkin orthogonality": the error is a-orthogonal to the RB space:
  a(u − uN, v) = a(u, v) − a(uN, v) = f(v) − f(v) = 0, v ∈ XN.

Well-posedness: Existence, Uniqueness & Boundedness
Identical statement as for (P), even with the same constants: assuming (A1), (A2), a unique solution of (PN) exists and is uniformly bounded:
‖uN(µ)‖ ≤ ‖f(·;µ)‖_{X′}/α(µ) ≤ γ̄_f/ᾱ, |sN(µ)| ≤ ‖l(·;µ)‖_{X′} ‖uN(µ)‖ ≤ γ̄_l γ̄_f/ᾱ.
Proof: Lax-Milgram is applicable, as continuity and coercivity are inherited by subspaces:
inf_{u∈XN\{0}} a(u, u; µ)/‖u‖² ≥ inf_{u∈X\{0}} a(u, u; µ)/‖u‖² = α(µ),
sup_{u,v∈XN\{0}} a(u, v; µ)/(‖u‖ ‖v‖) ≤ sup_{u,v∈X\{0}} a(u, v; µ)/(‖u‖ ‖v‖) = γ(µ);
then the same argumentation as for (P) applies.

Discrete Form of the RB Problem
For given µ ∈ P and basis ΦN = {φi}_{i=1}^N define
A_N(µ) := (a(φj, φi; µ))_{i,j=1}^N ∈ R^{N×N},
f_N(µ) := (f(φi; µ))_{i=1}^N ∈ R^N, l_N(µ) := (l(φi; µ))_{i=1}^N ∈ R^N.
Solve the linear system A_N(µ) u_N(µ) = f_N(µ) for u_N(µ) = (u_{Nj})_{j=1}^N. Then the solution of (PN) is obtained by
uN(µ) = Σ_{j=1}^N u_{Nj} φj, sN(µ) = l_N(µ)ᵀ u_N(µ).
Proof: this representation of uN(µ) fulfills (PN) by linearity.
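The discrete form above is a plain N×N solve. A minimal sketch with a random coercive stand-in matrix (all sizes and data are illustrative), which also verifies Galerkin orthogonality, i.e. that the residual vanishes on the RB space:

```python
import numpy as np

rng = np.random.default_rng(2)
H, N = 40, 4
M = rng.standard_normal((H, H))
A = M @ M.T + H * np.eye(H)       # stand-in for a coercive full matrix A(mu)
f = rng.standard_normal(H)

# some RB basis: N orthonormal columns in R^H (illustrative)
Phi = np.linalg.qr(rng.standard_normal((H, N)))[0]

# reduced quantities and solve, exactly as in the discrete form of (P_N)
A_N = Phi.T @ A @ Phi
f_N = Phi.T @ f
uN_coeff = np.linalg.solve(A_N, f_N)
uN = Phi @ uN_coeff               # reduced solution lifted back to R^H

# Galerkin orthogonality: the residual vanishes on the RB space
r = f - A @ uN
assert np.allclose(Phi.T @ r, 0.0, atol=1e-8)
```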
Algebraic Stability by Using an Orthonormal Basis
If a(·,·;µ) is symmetric and ΦN is orthonormal, then the condition number of A_N(µ) is bounded independently of N:
cond₂(A_N(µ)) = ‖A_N(µ)‖ ‖A_N(µ)⁻¹‖ ≤ γ(µ)/α(µ).
Proof: symmetry ⇒ cond₂(A_N) = λ_max/λ_min. Let u = (u_i)_{i=1}^N be an eigenvector for λ_max and set u := Σ_{i=1}^N u_i φi ∈ X. Orthonormality yields
‖u‖² = ⟨Σ_i u_i φi, Σ_j u_j φj⟩ = Σ_{i,j} u_i u_j ⟨φi, φj⟩ = Σ_i u_i² = |u|².
The definition of A_N and continuity yield
λ_max |u|² = uᵀ A_N u = a(Σ_i u_i φi, Σ_j u_j φj; µ) = a(u, u; µ) ≤ γ(µ) ‖u‖² = γ(µ) |u|².
Hence λ_max ≤ γ(µ); similarly λ_min ≥ α(µ).

Remark: Difference FEM/RB
Let A(µ) denote the FEM (or finite volume, discontinuous Galerkin) matrix. The RB matrix A_N(µ) ∈ R^{N×N} is small but typically dense, in contrast to the typically sparse but large matrix A(µ) ∈ R^{H×H}. The condition of A_N(µ) does not deteriorate with N (if an orthonormal basis is used, e.g. by Gram-Schmidt), while the condition number of A(µ) typically grows polynomially in H.

Relation to Best-Approximation (Lemma of Céa)
For all µ ∈ P holds
‖u(µ) − uN(µ)‖ ≤ (γ(µ)/α(µ)) inf_{v∈XN} ‖u(µ) − v‖.
Proof: for all v ∈ XN, continuity and coercivity result in
α ‖u − uN‖² ≤ a(u − uN, u − uN) = a(u − uN, u − v) + a(u − uN, v − uN)
            = a(u − uN, u − v) ≤ γ ‖u − uN‖ ‖u − v‖,
where a(u − uN, v − uN) = 0 follows from Galerkin orthogonality, as v − uN ∈ XN.

Remarks:
- "Quasi-optimality": the RB scheme is as good as the best approximation, up to a constant. Implication: approximation scheme and space are decoupled. Find a good approximating space (without the RB scheme) and you can be sure that the RB scheme performs well.
- Similar best-approximation bounds are known for interpolation techniques (via the "Lebesgue constant"). But for interpolation techniques (e.g. polynomial) these constants diverge to infinity with growing dimension of the approximation space. In contrast, the bounding constant in the RB approximation does not grow with the dimension. This is a conceptual advantage of Galerkin projection over interpolation techniques.
Exercise 7: Assuming symmetric a, the Lemma of Céa can be sharpened by a square root in the constant. (Hint: energy norm, introduced soon.)

Error-Residual Relation
The error satisfies a variational problem with the residual as right-hand side. For µ ∈ P define the residual r(·;µ) ∈ X′ via
r(v; µ) := f(v; µ) − a(uN(µ), v; µ), v ∈ X.
Then the error e(µ) := u(µ) − uN(µ) satisfies
a(e(µ), v; µ) = r(v; µ), v ∈ X.
Proof: a(e, v) = a(u, v) − a(uN, v) = f(v) − a(uN, v) = r(v), v ∈ X.
Remark: the residual vanishes on the RB space:
v ∈ XN ⇒ r(v) = f(v) − a(uN, v) = a(uN, v) − a(uN, v) = 0.

Reproduction of Solutions
If u(µ) ∈ XN for some µ ∈ P, then uN(µ) = u(µ).
Proof: e(µ) = u(µ) − uN(µ) ∈ XN, hence α ‖e‖² ≤ a(e, e) = r(e) = 0.
Remark: reproduction of solutions is a basic consistency property. It holds trivially if error bounds are available, but for some more complex RB schemes this may be all you can get, and it is a good initial consistency check.
Validation of program code: choose the basis from snapshots φi := u(µ⁽ⁱ⁾), i = 1, ..., N. Then we expect u_N(µ⁽ⁱ⁾) = e_i ∈ R^N to be a unit vector.

Uniform Convergence of the RB Approximation
Assume Lipschitz-continuity of the coefficient functions; then u(µ) and uN(µ) are Lipschitz-continuous with L_u independent of N. Assume {SN}_{N∈N} to be sample sets getting dense in P, with "fill distance"
h_N := sup_{µ∈P} dist(µ, SN), lim_{N→∞} h_N = 0.
Then for all µ and the "closest" sample µ* := arg min_{µ′∈SN} ‖µ − µ′‖:
‖u(µ) − uN(µ)‖ ≤ ‖u(µ) − u(µ*)‖ + ‖u(µ*) − uN(µ*)‖ + ‖uN(µ*) − uN(µ)‖
              ≤ L_u ‖µ − µ*‖ + 0 + L_u ‖µ − µ*‖ ≤ 2 h_N L_u.
Therefore we obtain lim_{N→∞} sup_{µ∈P} ‖u(µ) − uN(µ)‖ = 0.
Note: a convergence rate linear in h_N is of no practical use.

Coercivity Constant Lower Bound
We assume to have available a rapidly computable lower bound for the coercivity constant,
0 < α_LB(µ) ≤ α(µ), µ ∈ P.
We assume this to be large, w.l.o.g. ᾱ ≤ α_LB(µ) (otherwise simply set α_LB(µ) := ᾱ).
Continuity Constant Upper Bound
We assume to have available a rapidly computable upper bound for the continuity constant,
γ_UB(µ) ≥ γ(µ), µ ∈ P.
We assume this to be small, w.l.o.g. γ̄ ≥ γ_UB(µ) (otherwise simply set γ_UB(µ) := γ̄).
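The reproduction property and the code-validation check described above can be exercised on a small stand-in parametric matrix problem: building a Lagrangian basis from snapshots and solving the reduced problem at a training parameter must return a unit coefficient vector. All matrices and parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
H = 30
M = rng.standard_normal((H, H))
A1 = (M @ M.T) / H + np.eye(H)        # illustrative SPD component
A2 = np.diag(rng.uniform(1.0, 2.0, H))
f = rng.standard_normal(H)

def solve_full(mu):
    # detailed solution u(mu) of A(mu) u = f with A(mu) = A1 + mu * A2
    return np.linalg.solve(A1 + mu * A2, f)

mus = [0.1, 1.0, 10.0]
# Lagrangian basis: raw snapshots, no orthonormalization
Phi = np.column_stack([solve_full(m) for m in mus])

# consistency check: at a training parameter the RB coefficient vector is e_i
for i, mu in enumerate(mus):
    A_N = Phi.T @ (A1 + mu * A2) @ Phi
    f_N = Phi.T @ f
    uN = np.linalg.solve(A_N, f_N)
    e_i = np.zeros(len(mus)); e_i[i] = 1.0
    assert np.allclose(uN, e_i, atol=1e-6)
```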
It is „a-posteriori“: reduced solution must be available. „Rigorosity“: As the bound is a provable upper bound on the error, the bound is denoted „rigorous“ in RB methods (corresponding to „reliable“ error estimators in FEM literature) RB method with a-posteriori error control is denoted a „certified“ RB method B. Haasdonk, RB-Tutorial 46 Institut für Angewandte Analysis und Numerische Simulation RB Method Vanishing Error Bound / Zero Error Prediction If u(µ) = uN (µ) then ∆u (µ) = ∆s (µ) = 0 Proof: e = 0 ⇒ 0 = a(e, v) = r(v) ⇒ rX = 0 ⇒ ∆u = 0 ⇒ ∆s = 0 Remark: 16.9.2013 Initial desired property of an error bound: Bound is zero if the error is zero. This may give hope, that the error bound is not too conservative, i.e. not too large overestimating the error. The statement is trivial in case of „effective“ error bounds as seen soon. But if no „effective“ error bounds are available for a more complex RB scheme, this may be as much as you can get, or a useful initial property of an error estimator. This property is again useful for validating program code B. Haasdonk, RB-Tutorial 47 Institut für Angewandte Analysis und Numerische Simulation RB Method (Uniform) Effectivity Bound The „effectivity“ ηu (µ) of ∆u (µ) is defined and bounded by ηu (µ) := ∆u (µ) u(µ)−uN (µ) ≤ γU B (µ) αLB (µ) ≤ γ̄ ᾱ , µ∈P Proof: Test error eqn. with Riesz-repr. vr ∈ X of residual: vr 2 = vr , vr = r(vr ) = a(e, vr ) ≤ γUB (µ) e vr Therefore vr e ηu (µ) = ≤ γUB (µ) and ∆u (µ) e(µ) = vr αLB (µ)e ≤ γUB (µ) αLB (µ) ≤ γ̄ ᾱ Remark 16.9.2013 Upper bound on the effectivity can be evaluated rapidly Related notion „efficiency“ in FEM literature. „Rigorosity“ of error bound implies ηu (µ) ≥ 1 B. Haasdonk, RB-Tutorial 48 Institut für Angewandte Analysis und Numerische Simulation RB Method Relative Error Bound and Effectivity (cf. [PR06]) For all µ ∈ P holds r(·; µ)X u(µ) − uN (µ) 1 ≤ ∆rel (µ) := 2 · · u u(µ) αLB (µ) uN (µ) rel ∆ γUB (µ) γ̄ u (µ) rel ≤3· ≤3· . 
ηu (µ) := e(µ) / u(µ) αLB (µ) ᾱ under the condition that ∆rel u (µ) ≤ 1 Exercise 8: Prove this relative error bound and effectivity bound Remark: 16.9.2013 Relative bounds are typically only valid if the bound is sufficiently small. If these are not small, the RB space should be improved. B. Haasdonk, RB-Tutorial 49 Institut für Angewandte Analysis und Numerische Simulation RB Method Remark: No Effectivity for Output Error Bound Without further assumptions, one cannot expect a bounded effectivity for the output error estimator ∆s (µ) / u(µ) Example: Choose XN and µ such that uN (µ) = Then also e(µ), r(µ), ∆u (µ), ∆s (µ) are nonzero. Now choose l such that l(u − uN ) = 0 ⇒ s(µ) − sN (µ) = l(e) = 0 Hence ∆s (µ) |s(µ)−sN (µ)| is not well defined. (A4) Symmetry: 16.9.2013 For the remainder of this section, we additionally assume, that a(·, ·; µ) is symmetric. B. Haasdonk, RB-Tutorial 50 Institut für Angewandte Analysis und Numerische Simulation RB Method Energy norm For symmetric, coercive, continuous a(·, ·; µ) we define the (µ -dependent) energy scalar product and norm vµ := v, vµ , u, vµ := a(u, v; µ) Norm Equivalence u, v ∈ X We have α(µ) u ≤ uµ ≤ γ(µ) u , u ∈ X, µ ∈ P Proof: Coercivity and Continuity imply α(µ) u2 ≤ a(u, u; µ) ≤ γ(µ) u2 =u2µ 16.9.2013 B. Haasdonk, RB-Tutorial 51 Institut für Angewandte Analysis und Numerische Simulation RB Method Energy Norm Error bound and Effectivity [PR06] For µ ∈ P holds r(·; µ)X u(µ) − uN (µ)µ ≤ ∆en (µ) := u αLB (µ) en ∆u (µ) γUB (µ) γ̄ ≤ ≤ , ηuen (µ) := u(µ) − uN (µ)µ αLB (µ) ᾱ As µ∈P γ(µ) ≥ 1 this is an improvement by a squareroot α(µ) Exercise 9: Prove this energy error bound and effectivity bound 16.9.2013 B. 
Haasdonk, RB-Tutorial 52 Institut für Angewandte Analysis und Numerische Simulation RB Method Remark: Possible Improvement by Changing Norm By choosing µ̄ ∈ P and setting u := uµ̄ as new norm on X, we get α(µ̄) = 1 = γ(µ̄) The RB-approximation is not affected But the error bound and effectivities are improved: They are optimal in µ̄ : ∆u (µ̄) = e(µ̄) , ηu (µ̄) = 1 and (assuming continuity) almost optimal in the vicinity of µ̄ In the following: return to arbitrarily chosen norm on X 16.9.2013 B. Haasdonk, RB-Tutorial 53 Institut für Angewandte Analysis und Numerische Simulation RB Method Improved Output Error Bound & Effectivity, Compliant Case Assume that a(·, ·; µ) is symmetric and f = l (the so called „compliant“ case), then we obtain the improved output error bound and effectivity Remark: 2 r(·; µ) X 0 ≤ s(µ) − sN (µ) ≤ ∆s (µ) := αLB (µ) ∆ γUB (µ) γ̄ s (µ) ≤ ≤ ηs (µ) := − s(µ) sN (µ) αLB (µ) ᾱ Proof: Follows later from more general statement The bound gives a definite sign on the error: sN (µ) ≤ s(µ) This output error bound ∆s (µ) is better as it is quadratic in rX while ∆s (µ) is only linear The thermal block is a „compliant“ problem. 16.9.2013 B. Haasdonk, RB-Tutorial 54 Experiments Institut für Angewandte Analysis und Numerische Simulation Experiments Thermal Block 16.9.2013 rb_tutorial(1): Full simulation, solution variety as seen earlier rb_tutorial(2): Demo gui for full simulation: rb_tutorial(3) All steps for generation of reduced model and timing B. Haasdonk, RB-Tutorial 56 Institut für Angewandte Analysis und Numerische Simulation Experiments Error Estimator and True Error rb_tutorial(4): Lagrangian basis for N=5 B1 = B2 = 2 SN = (0.1, 0.1, 0.1, 0.1) (0.5, 0.1, 0.1, 0.1) (0.9, 0.1, 0.1, 0.1) (1.3, 0.1, 0.1, 0.1) (1.7, 0.1, 0.1, 0.1) Parameter sweep for estimator is cheap Estimator and error are zero for samples Estimator is upper bound of true error For small parameters larger error, hence more samples would be required 16.9.2013 B. 
Haasdonk, RB-Tutorial 57 Institut für Angewandte Analysis und Numerische Simulation Experiments Effectivity and Bounds: rb_tutorial(5) α(µ) = min(µi ) = 0.1 γ(µ) = max(µi ) = µ1 Effectivities are good, only order of 10 Effectivity upper bound is verified Effectivity undefined for basis samples (division by zero) 16.9.2013 B. Haasdonk, RB-Tutorial 58 Institut für Angewandte Analysis und Numerische Simulation Experiments Error Convergence: rb_tutorial(6): B1 = B2 = 3, µ = (µ1 , 1, 1, 1 . . . , 1) N equidistant samples Gram-Schmidt orth. Test-error/estimator: maximum over random test set Stest ⊂ P µ1 ∈ [0.5, 2] |Stest | = 100 Exponential error/bound convergence observed Upper bound very tight Numerical accuracy limit for estimators 16.9.2013 B. Haasdonk, RB-Tutorial 59 Institut für Angewandte Analysis und Numerische Simulation Experiments Error Convergence: 16.9.2013 Gram-Schmidt orthonormalized basis: rb_tutorial(7) B. Haasdonk, RB-Tutorial 60 Offline/Online Decomposition Institut für Angewandte Analysis und Numerische Simulation Offline/Online Decomposition Offline/Online Decomposition offline phase online phase uN (µ1 ), . . . , uN (µk ) time Offline Phase: Possibly computationally intensive, depending on H := dim(X) Performed only once Computation of snapshots, reduced basis, Riesz-representers and auxiliary parameter-independent low-dim. quantities Online Phase: Rapid, i.e. complexity polynomial in N, Qa , Qf , Ql , independent of H Performed multiple times for different parameters Assembly and solution of RB-system, computation of error estimators and effectivity bounds. 16.9.2013 B. 
Haasdonk, RB-Tutorial 62 Institut für Angewandte Analysis und Numerische Simulation Offline/Online Decomposition Required: Discretization of (P) Space X = span{ψi }H i=1 , high dimension H := dim(X) H×H Inner Product Matrix K := (ψi , ψj )H ∈ R i,j=1 Assume component matrices and vectors H×H Aq := (aq (ψj , ψi ))H i,j=1 ∈ R H ∈ f q := (fq (ψi ))H R i=1 For any µ ∈ P evaluate coefficients & assemble full system A(µ) := H lq := (lq (ψi ))H i=1 ∈ R Qa a q=1 θq (µ)Aq , f (µ) := Qf f q=1 θq (µ)f q , l(µ) := Ql l q=1 θq (µ)lq H Solve linear system A(µ)u(µ) = f (µ) for u(µ) = (ui )H i=1 ∈ R H Obtain solution of (P): u(µ) = i=1 ui ψi , s(µ) := lT u Remark: 16.9.2013 Components may be nontrivial for third-party-software! B. Haasdonk, RB-Tutorial 63 Institut für Angewandte Analysis und Numerische Simulation Offline/Online Decomposition Offline/Online Decomposition of (PN) Offline: After the computation of a basis ΦN = {ϕi }N i=1 construct the parameter-independent component matrices and vectors N×N AN,q := (aq (ϕj , ϕi ))N i,j=1 ∈ R N f N,q := (fq (ϕi ))N i=1 ∈ R N lN,q := (lq (ϕi ))N i=1 ∈ R Online: For given µ ∈ P evaluate the coefficient functions and assemble the matrix and vectors AN (µ) := Qa q=1 θqa (µ)AN,q , f N (µ) := Qf θqf (µ)f N,q , q=1 lN (µ) := Ql θql (µ)lN,q q=1 This exactly gives the discrete RB system AN (µ)uN (µ) = f N (µ) stated earlier, that can then be solved and gives uN (µ), sN (µ) 16.9.2013 B. Haasdonk, RB-Tutorial 64 Institut für Angewandte Analysis und Numerische Simulation Offline/Online Decomposition Remark: Simple Computation of Reduced Components The reduced component matrices/vectors do not require any space-integration, if the high dimensional components are available: Assume expansion of reduced basis vectors ϕj = H i=1 ϕij ψi With coefficient matrix H×N ΦN := (ϕij )H,N i,j=1 ∈ R Reduced components are then simply obtained by matrix-matrix/matrix-vector multiplications AN,q = ΦTN Aq ΦN , 16.9.2013 f N,q = ΦTN f q , B. 
l_{N,q} = Φ_N^T l_q.

Complexities of (PN)
- Offline: O(N H² + N H (Q_f + Q_l) + N² H Q_a)
- Online: O(N³ + N (Q_f + Q_l) + N² Q_a), independent of H

Runtime Diagram
Runtime for k simulations:
- with (P): t = k · t_full
- with (PN): t = t_offline + k · t_online
Intersection at k* = t_offline / (t_full − t_online).
Note: the RB model only pays off for "multiple" requests, i.e. the offline time is only amortized if sufficiently many (k ≥ k*) reduced simulations are expected.

Remark: No Distinction between u and u_h
Recall that in (P) we did not discriminate between the true weak (Sobolev space) solution u and the fine FEM solution, say u_h (we do so only for this slide). This can be motivated by two arguments:
1. Since the online phase is independent of H, we can assume ||u − u_h|| arbitrarily small, hence H arbitrarily large (just let the offline phase be sufficiently accurate), without affecting the online runtime.
2. In practice the reduction error dominates the overall error and the FEM error is negligible, ε := ||u − u_h|| ≪ ||u_h − u_N||. Then it is sufficient to control ||u_h − u_N||, since
||u_h − u_N|| − ε ≤ ||u − u_N|| ≤ ||u_h − u_N|| + ε.
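The projection, the online assembly and the break-even count above can be sketched in a few lines of numpy. This is an illustrative sketch, not the RBmatlab API; all function names are hypothetical.

```python
import math
import numpy as np

def offline_project(Phi, A_comps, f_comps, l_comps):
    """Offline: project the H-dimensional components onto the RB basis
    (columns of Phi): A_Nq = Phi^T A_q Phi, f_Nq = Phi^T f_q, l_Nq = Phi^T l_q."""
    return ([Phi.T @ A @ Phi for A in A_comps],
            [Phi.T @ f for f in f_comps],
            [Phi.T @ l for l in l_comps])

def online_solve(theta_a, theta_f, theta_l, AN, fN, lN):
    """Online: assemble the small N x N system from the coefficient
    functions evaluated at mu, solve it, and evaluate the output."""
    A = sum(t * Aq for t, Aq in zip(theta_a, AN))
    f = sum(t * fq for t, fq in zip(theta_f, fN))
    l = sum(t * lq for t, lq in zip(theta_l, lN))
    uN = np.linalg.solve(A, f)
    return uN, float(l @ uN)            # RB coefficients and output s_N

def breakeven(t_offline, t_full, t_online):
    """Smallest k with t_offline + k*t_online < k*t_full."""
    return math.ceil(t_offline / (t_full - t_online))
```

For instance, with t_offline = 1 hour, t_full = 60 s and t_online = 0.1 s, the reduced model pays off from the 61st query on.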
Requirements for Error and Effectivity Bounds
To compute a-posteriori error and effectivity bounds rapidly, we require offline/online decompositions of the following quantities:
- the dual norm of the residual ||r(·; µ)||_{X'} for all error bounds
- the dual norm of the output functional ||l(·; µ)||_{X'} for the output error bound ∆_s(µ)
- the norm of the RB solution ||u_N(µ)|| for the relative error bound ∆_u^rel(µ)
- a lower coercivity constant bound α_LB(µ) for all error and effectivity bounds
- an upper continuity constant bound γ_UB(µ) for the effectivity upper bound

Parameter Separability of the Residual
Set Q_r := Q_f + N Q_a and define r_q ∈ X', q = 1, ..., Q_r, via
(r_1, ..., r_{Q_r}) := (f_1, ..., f_{Q_f}, a_1(ϕ_1, ·), ..., a_{Q_a}(ϕ_1, ·), ..., a_1(ϕ_N, ·), ..., a_{Q_a}(ϕ_N, ·)).
Let u_N(µ) = Σ_{i=1}^N u_{Ni} ϕ_i be the solution of (PN) and define θ_q^r(µ), q = 1, ..., Q_r, via
(θ_1^r, ..., θ_{Q_r}^r) := (θ_1^f, ..., θ_{Q_f}^f, −θ_1^a u_{N1}, ..., −θ_{Q_a}^a u_{N1}, ..., −θ_1^a u_{NN}, ..., −θ_{Q_a}^a u_{NN}).
Let v_r, v_{r,q} ∈ X denote the Riesz representers of r, r_q. Then r, v_r are parameter separable via
r(v; µ) = Σ_{q=1}^{Q_r} θ_q^r(µ) r_q(v),  v_r(µ) = Σ_{q=1}^{Q_r} θ_q^r(µ) v_{r,q},  µ ∈ P, v ∈ X.
Proof: by definition and linearity.

Computation of Riesz Representers
Recall X = span{ψ_i}_{i=1}^H and K := ((ψ_i, ψ_j))_{i,j=1}^H. For g ∈ X' the coefficient vector v = (v_i)_{i=1}^H ∈ R^H of its Riesz representer v_g = Σ_{i=1}^H v_i ψ_i ∈ X is obtained by solving the sparse linear system K v = g with right-hand-side vector g = (g(ψ_i))_{i=1}^H.
Proof: for any u = Σ_{i=1}^H u_i ψ_i with coefficient vector u = (u_i)_{i=1}^H we verify
g(u) = Σ_{i=1}^H u_i g(ψ_i) = u^T g = u^T K v = (Σ_{i=1}^H u_i ψ_i, Σ_{j=1}^H v_j ψ_j) = (v_g, u).
Offline/Online Decomposition of the Dual Norm of the Residual
Offline: after the offline phase of (PN), compute the Riesz representers v_{r,q} ∈ X of the residual components r_q ∈ X' and define the matrix
G^r := (r_q(v_{r,q'}))_{q,q'=1}^{Q_r} ∈ R^{Q_r × Q_r}.
Online: for given µ ∈ P and RB solution u_N(µ), compute the residual coefficient vector θ^r(µ) := (θ_1^r(µ), ..., θ_{Q_r}^r(µ))^T and
||r(·; µ)||_{X'} = (θ^r(µ)^T G^r θ^r(µ))^{1/2}.
Proof: G^r is symmetric since r_q(v_{r,q'}) = (v_{r,q}, v_{r,q'}), and
||r(·; µ)||²_{X'} = ||v_r||² = (Σ_{q=1}^{Q_r} θ_q^r(µ) v_{r,q}, Σ_{q'=1}^{Q_r} θ_{q'}^r(µ) v_{r,q'}) = θ^r(µ)^T G^r θ^r(µ).

Offline/Online Decomposition for ||l(·; µ)||_{X'}
Completely analogous to the dual norm of the residual.
Offline: compute the Riesz representers v_{l,q} ∈ X of the output functional components l_q ∈ X' and define
G^l := (l_q(v_{l,q'}))_{q,q'=1}^{Q_l} ∈ R^{Q_l × Q_l}.
Online: for given µ ∈ P compute the output coefficient vector θ^l(µ) := (θ_1^l(µ), ..., θ_{Q_l}^l(µ))^T and
||l(·; µ)||_{X'} = (θ^l(µ)^T G^l θ^l(µ))^{1/2}.

Offline/Online Decomposition for ||u_N(µ)||
Offline: after the basis generation, compute the reduced inner product matrix
K_N := ((ϕ_i, ϕ_j))_{i,j=1}^N ∈ R^{N×N}.
Online: for given µ ∈ P and RB solution u_N(µ) with coefficient vector u_N(µ) ∈ R^N we obtain
||u_N(µ)|| = (u_N(µ)^T K_N u_N(µ))^{1/2}.
Remark: simple computation via basis matrix multiplication: K_N = Φ_N^T K Φ_N.
"Min-Theta" Approach for a Coercivity Lower Bound
One approach that can be applied in certain cases. Assume that the components satisfy a_q(u, u) ≥ 0 for all u ∈ X, q = 1, ..., Q_a, and that the coefficients fulfill θ_q^a(µ) > 0, q = 1, ..., Q_a. Let µ̄ ∈ P be such that α(µ̄) is available. Then we have
0 < α_LB(µ) ≤ α(µ),  µ ∈ P,
with the lower bound
α_LB(µ) := α(µ̄) · min_{q=1,...,Qa} θ_q^a(µ) / θ_q^a(µ̄).
(No symmetry required.)

Computation of α(µ) for (P)
In the offline phase some evaluations of α(µ) may be required, e.g. for the min-theta or other procedures. Let A := (a(ψ_j, ψ_i; µ))_{i,j=1}^H and K := ((ψ_i, ψ_j))_{i,j=1}^H be given and define the symmetric part A_s := (1/2)(A + A^T). Then
α(µ) = λ_min(K^{−1} A_s).
Proof: assume K = L L^T and use the substitution v = L^T u:
α(µ) = inf_{u ∈ X} a(u, u)/||u||² = inf_{u ∈ R^H} (u^T A_s u)/(u^T K u) = inf_{v ∈ R^H} (v^T L^{−1} A_s L^{−T} v)/(v^T v).
Hence α(µ) minimizes a Rayleigh quotient, i.e. α(µ) = λ_min(L^{−1} A_s L^{−T}). The matrices K^{−1} A_s and L^{−1} A_s L^{−T} are similar and thus have identical λ_min:
L^T (K^{−1} A_s) L^{−T} = L^T L^{−T} L^{−1} A_s L^{−T} = L^{−1} A_s L^{−T}.

Remark: Prevent Inversion of K
Inversion of K is frequently badly conditioned, suffers from fill-in effects, etc., hence preventing the inversion is recommended. Reformulate as a generalized eigenvalue problem:
K^{−1} A_s u = λ u  ⇔  A_s u = λ K u,
and determine the smallest generalized eigenvalue.

Remark: Computation of the Continuity Constant and Bound
Similarly, the continuity constant can be computed via the largest singular value of a suitable matrix. One can then formulate a max-theta approach for a continuity constant upper bound.
Exercise 10: Formulate a max-theta approach for a continuity constant upper bound γ_UB(µ), under the assumptions that a(·, ·; µ) is symmetric, all a_q(·, ·) are positive semidefinite, θ_q^a(µ) > 0, and γ(µ̄) is available for one µ̄ ∈ P.
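The estimator ingredients above (residual dual norm via a Riesz Gramian, coercivity constant via the Cholesky-transformed eigenvalue problem from the proof, and the min-theta bound) reduce to small dense linear algebra. A minimal numpy sketch, with hypothetical function names:

```python
import numpy as np

def riesz_gramian(K, R):
    """Offline: columns of R hold the functional vectors (r_q(psi_i))_i.
    The Riesz representers solve K V = R, hence G_r = R^T K^{-1} R."""
    return R.T @ np.linalg.solve(K, R)

def residual_dual_norm(theta_r, G_r):
    """Online: ||r(.;mu)||_{X'} = sqrt(theta_r^T G_r theta_r)."""
    return float(np.sqrt(theta_r @ G_r @ theta_r))

def coercivity_constant(A_s, K):
    """alpha(mu) = lambda_min(K^{-1} A_s), computed via the similar
    symmetric matrix L^{-1} A_s L^{-T} with K = L L^T (as in the proof)."""
    L = np.linalg.cholesky(K)
    M = np.linalg.solve(L, A_s)        # L^{-1} A_s
    M = np.linalg.solve(L, M.T).T      # L^{-1} A_s L^{-T}
    return float(np.linalg.eigvalsh(M).min())

def min_theta_lower_bound(theta_a, theta_a_bar, alpha_bar):
    """alpha_LB(mu) = alpha(mu_bar) * min_q theta_q^a(mu)/theta_q^a(mu_bar)."""
    return alpha_bar * min(t / tb for t, tb in zip(theta_a, theta_a_bar))
```

The online cost of `residual_dual_norm` is O(Q_r²), matching the complexity statement below; no H-dimensional quantity is touched after the Gramian is built.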
Complexities of the Error Estimators ∆_u(µ), ∆_s(µ) (Including Min-Theta)
- Offline: O(H³ + H²(Q_f + Q_l + N Q_a) + H(Q_f + N Q_a)² + H Q_l²)
- Online: O((Q_f + N Q_a)² + Q_l² + Q_a), independent of H
Very clear: the online cost depends quadratically on Q_a, Q_f, Q_l; this can become prohibitive for too large expansions.

Remark: Successive Constraint Method [HRSP07]
An alternative to min-theta. Offline: computation of many α(µ^(i)), i = 1, ..., M. Online: solution of a small linear program for computing a coercivity lower bound (or, similarly, a continuity upper bound).

Basis Generation

Recall: "Lagrangian" Reduced Basis
Let parameter samples S_N = {µ^(1), ..., µ^(N)} ⊂ P be given and define the "Lagrangian" RB space and basis
X_N := span{u(µ^(i))}_{i=1}^N = span Φ_N,  Φ_N := {ϕ_1, ..., ϕ_N}.
Remarks: good approximation globally in P is possible, subject to suitably distributed points. This is in contrast to local approximation, e.g. the first-order Taylor basis used in early RB literature [FR83]:
Φ_N := {u(µ^(0)), ∂_{µ_1} u(µ^(0)), ..., ∂_{µ_p} u(µ^(0))}.
Central questions: How to select the sample points? How good will the basis be? For which problems will it work?
Optimal RB Space
X_N := arg min_{Y ⊂ X, dim(Y) = N} E(Y),  E(Y) := sup_{µ ∈ P} ||u(µ) − u_N(µ)||.
This is a highly nonlinear optimization problem for an N-dimensional space and practically infeasible. Modifications for a practical "greedy procedure":
- Iterative relaxation: instead of one optimization problem for the complete basis, incrementally search for the "next best vector" and extend the existing basis
- Instead of optimization over the parameter space, perform a maximum search over a training set of parameters
- Allow a general error indicator ∆(Y, µ) ∈ R⁺ as substitute for ||u(µ) − u_N(µ)|| (using X_N := Y)

Greedy Procedure [VPRP03]
Let S_train ⊆ P be a given training set of parameters and ε_tol > 0 a given error tolerance. Set Φ_0 := ∅, X_0 := {0}, S_0 := ∅ and define iteratively:
while ε_n := max_{µ ∈ S_train} ∆(X_n, µ) > ε_tol
    µ^(n+1) := arg max_{µ ∈ S_train} ∆(X_n, µ)
    S_{n+1} := S_n ∪ {µ^(n+1)}
    ϕ_{n+1} := u(µ^(n+1))
    Φ_{n+1} := Φ_n ∪ {ϕ_{n+1}}
    X_{n+1} := X_n + span{ϕ_{n+1}}
end while
Finally set N := n + 1.

Remarks:
- First use of the greedy in RB in [VPRP03]
- In the literature the first "search" is frequently skipped by choosing µ^(1) arbitrarily
- The training set is mostly chosen as a random or structured finite subset of P
- Orthonormalization by Gram-Schmidt can be added in the loop
- Termination, a simple criterion: if for all µ ∈ P and all subspaces Y ⊂ X it holds that u(µ) ∈ Y ⇒ ∆(Y, µ) = 0, then the greedy algorithm terminates in at most |S_train| steps, since no sample will be selected twice
- The basis is hierarchical: Φ_n ⊂ Φ_m for n < m
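The while-loop above translates almost verbatim into code. A minimal sketch (names hypothetical; `indicator` and `snapshot` stand for any of the choices discussed next):

```python
import numpy as np

def greedy(S_train, indicator, snapshot, eps_tol, n_max=100):
    """Greedy basis generation: while the worst indicator value over the
    training set exceeds eps_tol, add the snapshot of the worst parameter."""
    basis, chosen = [], []
    for _ in range(n_max):
        errs = [indicator(basis, mu) for mu in S_train]
        worst = int(np.argmax(errs))
        if errs[worst] <= eps_tol:
            break
        chosen.append(S_train[worst])
        basis.append(snapshot(S_train[worst]))
    return basis, chosen
```

With the orthogonal projection error as `indicator`, the loop never selects the same parameter twice, illustrating the termination remark above.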
Choice of Error Indicators
i) Orthogonal projection error as indicator:
∆(Y, µ) := inf_{v ∈ Y} ||u(µ) − v|| = ||u(µ) − P_Y u(µ)||
Motivation: if the projection error is small, then by Céa's lemma the RB error is also small.
- Expensive to evaluate: high-dimensional operations
- All snapshots for all training parameters must be computed and stored; |S_train| is thus limited
+ Termination criterion trivially satisfied
+ Approximation space decoupled from the RB scheme
+ Can be applied without an RB scheme and without a-posteriori error estimators

ii) True RB error as indicator:
∆(Y, µ) := ||u(µ) − u_N(µ)||
Motivation: this is directly the error measure used in E(X_N).
- Expensive to evaluate: high-dimensional operations
- All snapshots for all training parameters must be computed and stored; |S_train| is thus limited
+ Termination criterion satisfied in case of the "reproduction of solutions" property
+ Can be applied without a-posteriori error estimators

iii) A-posteriori error estimator as indicator:
∆(Y, µ) := ∆_u(µ) (or energy or relative error bounds)
Motivation: minimizing this ensures that the true RB error is also small, if the bounds are "rigorous".
+ Cheap to evaluate: only low-dimensional operations
+ Only N snapshots must be computed; |S_train| can be very large
+ Termination criterion satisfied in case of the "vanishing error bound" and "reproduction of solutions" properties
- If the a-posteriori error bound strongly overestimates the RB error, the resulting space may not be good
Goal-Oriented Indicators
When using the output error or an output error estimator ∆(Y, µ) := |s(µ) − s_N(µ)| in the greedy procedure, the procedure is called "goal-oriented". The basis will possibly be quite small and approximate the output very accurately, but not necessarily approximate the field variable well. When using field-oriented indicators ∆(Y, µ) := ∆_u(µ), ∆_u^rel(µ), ∆_u^en(µ) in the greedy procedure, the basis may be larger, approximating both the field variable and the output well.

Monotonicity
In general ∆(X_n, µ) ≤ ε does not imply ∆(X_{n+1}, µ) ≤ ε. This means the greedy error sequence (ε_n)_{n≥1} may be non-monotonic. If a relation to the best approximation holds,
∆(X_n, µ) ≤ C inf_{v ∈ X_n} ||u(µ) − v||,
at least boundedness or even asymptotic decay can be expected. Monotonicity, however, can be proven in special cases:
Exercise 11: Prove that the greedy algorithm produces monotonically decreasing error sequences (ε_n)_{n≥1} if i) ∆(Y, µ) := ||u(µ) − P_Y u(µ)||, i.e. the indicator is chosen as the orthogonal projection error, or ii) in the compliant case (a(·, ·; µ) symmetric and l = f) with ∆(Y, µ) := ∆_u^en(µ), i.e. the indicator is chosen as the energy error estimator.

Remark: Overfitting, Quality Measurement
In terms of statistical learning theory, S_train is a "training set" of parameters and ε_N is the "training error". S_train must represent P well and should be chosen as large as possible. If the training set is chosen too small or unrepresentative, "overfitting" will occur, i.e. max_{µ ∈ P} ∆(X_N, µ) ≫ ε_N.
⇒ A low training error is a necessary but not a sufficient criterion for a good model (example "notepad").
⇒ Never compare models only by the training error; use the error on an independent "test set" instead.
Practice/Theory Gap: rb_tutorial(8)
B1 = B2 = 2, µ ∈ P = [0.5, 2]⁴. Greedy with random S_train ⊂ P, |S_train| = 1000, estimator ∆(Y, µ) := ∆_u(µ), Gram-Schmidt orthonormalization. Test error/estimator: maximum over a random test set S_test ⊂ P, |S_test| = 100. Exponential error decay is observed. So the greedy is a well-performing heuristic procedure; are there formal convergence statements for an analytical foundation?

Kolmogorov n-Width d_n(M)
The maximum approximation error of the best linear subspace:
d_n(M) := inf_{Y ⊂ X, dim(Y) = n} sup_{u ∈ M} ||u − P_Y u||.
Its decay indicates the "approximability of M by linear subspaces"; (d_n(M))_{n ∈ N} is a monotonically decreasing sequence.
Examples:
- Unit ball, bad approximation, no decay: M = {u | ||u|| ≤ 1} ⊂ H¹([0, 1]) gives d_n(M) = 1, n ∈ N.
- "Cereal box", good approximation, exponential decay: M = Π_{i ∈ N} [−2^{−i}, 2^{−i}] ⊂ l²(R) gives d_n(M) ≤ C · 2^{−n}, n ∈ N.

Greedy Convergence Rates [BCDDPW10], [BMPPT09]
If M is well approximable by linear spaces, then the greedy procedure provides a quasi-optimal subspace: let S_train = P be compact and let the greedy selection criterion guarantee (for suitable γ ∈ (0, 1])
||u(µ^(n+1)) − P_{X_n} u(µ^(n+1))|| ≥ γ sup_{u ∈ M} ||u − P_{X_n} u||.
Then we can obtain algebraic convergence:
d_n(M) ≤ M n^{−α}, n > 0  ⇒  ε_n ≤ C M n^{−α}, n > 0,
or exponential convergence:
d_n(M) ≤ M e^{−a n^α}, n > 0  ⇒  ε_n ≤ C M e^{−c n^β}, n > 0
(for suitable constants).

Strong vs. Weak Greedy
If γ = 1 it is a "strong greedy"; if γ < 1 it is a "weak greedy". A strong greedy can be realized by ∆(Y, µ) := ||u(µ) − P_Y u(µ)||. The error estimator ∆(Y, µ) := ∆_u(µ) results in a weak greedy!
Thanks to Céa's lemma and the effectivity and error bound properties:
||u(µ^(n+1)) − P_{X_n} u(µ^(n+1))|| = inf_{v ∈ X_n} ||u(µ^(n+1)) − v||
  ≥ (α(µ)/γ(µ)) ||u(µ^(n+1)) − u_N(µ^(n+1))||
  ≥ (α(µ)/(γ(µ) η_u(µ))) ∆_u(µ^(n+1))
  = (α(µ)/(γ(µ) η_u(µ))) sup_{µ ∈ P} ∆_u(µ)
  ≥ (α(µ)/(γ(µ) η_u(µ))) sup_{µ ∈ P} ||u(µ) − u_N(µ)||
  ≥ (α(µ)/(γ(µ) η_u(µ))) sup_{µ ∈ P} ||u(µ) − P_{X_n} u(µ)||
  ≥ (ᾱ²/γ̄²) sup_{µ ∈ P} ||u(µ) − P_{X_n} u(µ)||.
Hence the weakness factor is γ = (ᾱ/γ̄)² ∈ (0, 1].

Problem Reformulation
For which instantiations of (P) do we get an exponentially decaying Kolmogorov n-width? Clearly not for all (P): imagine M a "sphere-filling curve". A positive example is given by [MPT02], [PR06], here specialized to the thermal block:

Global Exponential Convergence for p = 1
Consider (P) to be the thermal block with B1 = 2, B2 = 1, µ_1 = 1 and single parameter µ = µ_2 ∈ P. Let P := [µ_min, µ_max] and N_0 be sufficiently large. Choose µ_min = µ^(1) < ... < µ^(N) = µ_max logarithmically equidistant and X_N the corresponding RB space. Then
||u(µ) − u_N(µ)||_µ / ||u(µ)||_µ ≤ e^{−(N−1)/(N_0−1)},  µ ∈ P, N ≥ N_0.

Training Set Treatment
- Multistage greedy [Se08]: decompose into coarser sets S_train^(0) ⊂ ... ⊂ S_train^(m) := S_train; run the greedy on the coarsest set, then start the greedy on the next larger set with the first basis as starting basis, etc.
- Adaptive extension [HDO11]: start with a large coarse training set, stop the greedy when overfitting occurs, then locally extend (adaptively refine) the training set
- Full optimization [UVZ12]: optimization in the greedy loop
- Randomization [HSZ13]: in each greedy step a new random training set
Parameter Domain Partitioning
Complex problems may require an infeasibly large basis: N ≤ N_max and ε_N ≤ ε_tol cannot be satisfied simultaneously. Solution: partitioning of P, one basis per subdomain.
- hp-RB [EPR10]: adaptive bisection
- P-partitioning [HDO11]: adaptive hexahedral refinement

Gramian Matrices Revisited
For {u_i}_{i=1}^n ⊂ X we define the Gramian matrix G := ((u_i, u_j))_{i,j=1}^n ∈ R^{n×n}. We have seen that such matrices play an important role in the offline/online decomposition: they allow some further operations to be performed independently of H. They also have some nice properties:
Exercise 12: Show that the following holds for the Gramian matrix: i) G is symmetric and positive semidefinite; ii) rank(G) = dim(span({u_i}_{i=1}^n)); iii) {u_i}_{i=1}^n are linearly independent ⇔ G is positive definite.

Orthonormalization: Gram-Schmidt
Useful for improving the condition of the RB system matrix. Let the basis Φ_N = {ϕ_i}_{i=1}^N ⊂ X be given with Gramian matrix K_N. Set C := (L^T)^{−1} with L a Cholesky factor of K_N = L L^T. Define the transformed basis Φ̃_N := {ϕ̃_i}_{i=1}^N ⊂ X by
ϕ̃_j := Σ_{i=1}^N C_{ij} ϕ_i.
Then Φ̃_N is the Gram-Schmidt orthonormalization of Φ_N.
Exercise 13: Prove that the above indeed performs Gram-Schmidt orthonormalization, i.e. set for i = 1, ..., N
v_i := ϕ_i − Σ_{j=1}^{i−1} (ϕ̄_j, ϕ_i) ϕ̄_j,  ϕ̄_i := v_i / ||v_i||,
and show that ϕ̄_j = ϕ̃_j, j = 1, ..., N.
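The Cholesky-based transformation above is a one-liner in coefficient form. A minimal sketch, assuming the basis is stored as a coefficient matrix Phi (H x N) and K is the inner product matrix (names hypothetical):

```python
import numpy as np

def orthonormalize(Phi, K):
    """Transform the coefficient matrix Phi (H x N) so that the new basis
    is orthonormal in the K-inner product: with K_N = Phi^T K Phi = L L^T,
    the transformation is C = (L^T)^{-1}, i.e. Phi_new = Phi @ C."""
    KN = Phi.T @ K @ Phi               # Gramian of the current basis
    L = np.linalg.cholesky(KN)         # K_N = L L^T
    return Phi @ np.linalg.inv(L.T)    # new Gramian becomes the identity
```

Since C is upper triangular, the span of the first k transformed vectors equals the span of the first k original ones, which is exactly the Gram-Schmidt property and keeps the basis hierarchical.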
Primal-Dual RB Approach

Recall: for the nonsymmetric, noncompliant case we could only obtain an output error estimator ∆_s(µ) that scales linearly with ||r||_{X'}, and we showed the impossibility of obtaining effectivity bounds without further assumptions. In contrast, for the compliant case the output error estimator ∆_s(µ) scales quadratically in ||r||_{X'} and we obtained effectivity bounds.
Goal of this section: improved output estimation for the general nonsymmetric and/or noncompliant case by primal-dual techniques (but still no output effectivity bounds). (P) and (PN) are still required as "primal" problems.

Definition: Full "Dual" Problem (Pdu)
For µ ∈ P find a solution u^du(µ) ∈ X satisfying
a(v, u^du(µ); µ) = −l(v; µ),  ∀v ∈ X.
Remarks: obviously, the (negative) output functional is used as right-hand side and the "arguments" are exchanged on the left. Well-posedness (existence, uniqueness and stability) follows identically to the "primal" problem (P). The dual problem is only required formally as the reference to which the dual error will be measured; additionally, it can be used in practice to generate dual snapshots.

Dual RB Space
We assume to have a dual RB space X_N^du ⊂ X, dim X_N^du = N^du, that approximates the dual solutions u^du(µ) well; possibly N^du ≠ N. A possible choice (without guarantee of success!) is X_N^du = X_N. Alternatives: a greedy procedure for (Pdu) using snapshots of the full dual problem; a further alternative is a combined approach, with details explained at the end of this section.
Definition: Primal-Dual Reduced Problem
For µ ∈ P find the solution u_N(µ) ∈ X_N of (PN), a solution u_N^du(µ) ∈ X_N^du satisfying
a(v, u_N^du(µ); µ) = −l(v; µ),  ∀v ∈ X_N^du,
and the corrected output s'_N(µ) ∈ R,
s'_N(µ) := l(u_N(µ); µ) − r(u_N^du(µ); µ).
Remarks: well-posedness holds again via Lax-Milgram; this is a "dual-weighted residual" treatment as in the goal-oriented FEM literature.

Dual A-Posteriori Error and Effectivity Bound
We introduce the dual residual r^du(·; µ) ∈ X',
r^du(v; µ) := −l(v; µ) − a(v, u_N^du(µ); µ),  v ∈ X,
and obtain the a-posteriori error bound
||u^du(µ) − u_N^du(µ)|| ≤ ∆_u^du(µ) := ||r^du(·; µ)||_{X'} / α_LB(µ)
with effectivity bound
η_u^du(µ) := ∆_u^du(µ) / ||u^du(µ) − u_N^du(µ)|| ≤ γ_UB(µ)/α_LB(µ) ≤ γ̄/ᾱ.
Proof: completely analogous to the primal problem.

Improved Output A-Posteriori Error Bound
For µ ∈ P it holds that
|s(µ) − s'_N(µ)| ≤ ∆'_s(µ) := ||r(·; µ)||_{X'} ||r^du(·; µ)||_{X'} / α_LB(µ).
Proof: using the dual problem with test function u − u_N and r(u_N^du) = f(u_N^du) − a(u_N, u_N^du) = a(u − u_N, u_N^du),
s − s'_N = l(u) − l(u_N) + r(u_N^du) = −a(u − u_N, u^du) + a(u − u_N, u_N^du) = −a(u − u_N, u^du − u_N^du) =: −a(e, e^du).
Then
|s − s'_N| = |a(e, e^du)| = |r(e^du)| ≤ ||r||_{X'} ||e^du|| ≤ ||r||_{X'} ∆_u^du ≤ ||r||_{X'} ||r^du||_{X'} / α_LB.
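For a single fixed parameter, the primal solve, dual solve and residual correction above can be sketched in matrix form. This is an illustrative sketch (names hypothetical), with the convention A_ij = a(ψ_j, ψ_i), so the dual problem becomes a transposed system with right-hand side −l:

```python
import numpy as np

def corrected_output(A, f, l, Phi, Phi_du):
    """Primal-dual sketch for one fixed parameter:
    reduced primal solve, reduced dual solve, and residual-corrected output
    s'_N = l(u_N) - r(u_N^du), where r(v) = f(v) - a(u_N, v)."""
    uN = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ f)                # primal RB
    uduN = np.linalg.solve(Phi_du.T @ A.T @ Phi_du, -Phi_du.T @ l)  # dual RB
    u_h, udu_h = Phi @ uN, Phi_du @ uduN                            # lift to R^H
    r_at_dual = f @ udu_h - udu_h @ (A @ u_h)                       # r(u_N^du)
    return float(l @ u_h - r_at_dual)
```

In the compliant case (A symmetric, l = f, Phi_du = Phi) the correction term vanishes by Galerkin orthogonality and the corrected output coincides with the plain RB output, illustrating the redundancy remark below.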
Remark: Squared Effect
We see the desired "squared" effect in the product of the two residual norms.

Remark: No Effectivity for the Output Error Bound ∆'_s
Without further assumptions one cannot obtain output effectivity bounds for ∆'_s, as s − s'_N may be zero while ∆'_s ≠ 0, hence the quotient is not well defined.
Example: choose v_l ⊥ v_f ∈ X, X_N = X_N^du ⊥ {v_f, v_l}, and set a(u, v) := (u, v), f(v) := (v_f, v), l(v) := −(v_l, v). Then u = v_f, u^du = v_l, u_N = 0, u_N^du = 0, hence e = v_f, e^du = v_l ⇒ r ≠ 0, r^du ≠ 0, so ∆'_s ≠ 0, but s − s'_N = −a(e, e^du) = −(v_f, v_l) = 0.
Reminder: the "compliant" case gave output effectivity bounds.

Remark: Dual Problem is Redundant for the Compliant Case
For the compliant case we claimed
0 ≤ s(µ) − s_N(µ) ≤ ∆_s(µ) := ||r(·; µ)||²_{X'} / α_LB(µ).
The right inequality is exactly a consequence of the primal-dual error bound, as ||r||_{X'} = ||r^du||_{X'} and s'_N = s_N: with l = f and symmetry we obtain u = −u^du, u_N = −u_N^du and therefore r = −r^du ⇒ ||r||_{X'} = ||r^du||_{X'}. Further, r(u_N^du) = −r(u_N) = 0 ⇒ s'_N = s_N. The left inequality follows by coercivity:
s − s_N = s − s'_N = −a(e, e^du) = a(e, e) ≥ 0.
The primal-dual approach can therefore only lead to improvements in the noncompliant case; otherwise the simple primal approach is sufficient.

Remark: Output Effectivity Bound for the Compliant Case
For the compliant case we claimed
η_s(µ) := ∆_s(µ) / (s(µ) − s_N(µ)) ≤ γ_UB(µ)/α_LB(µ) ≤ γ̄/ᾱ.
Proof: Cauchy-Schwarz and norm equivalence give
||v_r||² = (v_r, v_r) = r(v_r) = a(e, v_r) = (e, v_r)_µ ≤ ||e||_µ ||v_r||_µ ≤ ||e||_µ √γ_UB ||v_r||
⇒ ||r||_{X'} = ||v_r|| ≤ √γ_UB ||e||_µ.
Then we conclude, using the definitions,
η_s = ∆_s / (s − s_N) = ||r||²_{X'} / (α_LB a(e, e)) = ||r||²_{X'} / (α_LB ||e||²_µ) ≤ γ_UB ||e||²_µ / (α_LB ||e||²_µ) = γ_UB/α_LB ≤ γ̄/ᾱ.

Remarks: Offline/Online, Basis Generation
The offline/online procedure is analogous to the primal problem. Use of error estimation for basis generation: run separate greedy procedures for X_N, X_N^du using ∆_u, ∆_u^du with the same tolerance.
Then the maximal primal and dual residuals will have similar order, indeed leading to a "squared" effect in the output error estimator ∆'_s. An alternative is a combined generation of primal and dual space: run one greedy with the error bound ∆'_s and enrich both spaces simultaneously with the corresponding snapshots of the currently worst parameter.

Conclusion/Extensions

Summary: RB for linear coercive problems: methodology, analysis, error control, basis generation, offline/online decomposition, software.
Extensions within this summer school:
- Noncoercive problems: lecture G. Rozza
- Systems, flow problems, geometry: lecture G. Rozza
- Nonlinear problems: lecture by M. Grepl
- Time-dependent problems: lectures M. Grepl, S. Volkwein
- Optimization, optimal control: lectures M. Grepl, S. Volkwein
Further extensions that could not be addressed here: domain decomposition, multiphysics, multiscale approaches, stochastic problems ⇒ augustine.mit.edu, www.morepas.org

References
[An05] A.C. Antoulas: Approximation of Large-Scale Dynamical Systems. SIAM, 2005.
[BCDDPW10] P. Binev, A. Cohen, W. Dahmen, R. DeVore, G. Petrova, P. Wojtaszczyk: Convergence rates for greedy algorithms in reduced basis methods. IGPM Report 310, RWTH Aachen, 2010.
[BMPPT09] A. Buffa, Y. Maday, A.T. Patera, C. Prud'homme, G. Turinici: A priori convergence of the greedy algorithm for the parametrized reduced basis. M2AN, submitted 2009.
[DH13] M. Dihlmann, B. Haasdonk: Certified PDE-constrained parameter optimization using reduced basis surrogate models for evolution problems. SimTech Preprint, University of Stuttgart, 2013.
[DH13b] M. Dihlmann, B. Haasdonk: Certified nonlinear parameter optimization with reduced basis surrogate models. SimTech Preprint, University of Stuttgart, 2013.
[EPR10] J.L. Eftang, A.T. Patera, and E.M.
Rønquist: An "hp" certified reduced basis method for parametrized elliptic partial differential equations. SIAM Journal on Scientific Computing, 32(6):3170-3200, 2010.
[FR83] J.P. Fink, W.C. Rheinboldt: On the error behavior of the reduced basis technique for nonlinear finite element approximations. ZAMM, 63:21-28, 1983.
[Ha11] B. Haasdonk: Reduzierte-Basis-Methoden. Lecture notes SS 2011, IANS-Report 4/11, University of Stuttgart, 2011.
[HDO11] B. Haasdonk, M. Dihlmann, M. Ohlberger: A training set and multiple basis generation approach for parametrized model reduction based on adaptive grids in parameter space. Mathematical and Computer Modelling of Dynamical Systems, 17:423-442, 2011.
[HSZ13] J.S. Hesthaven, B. Stamm, S. Zhang: Efficient greedy algorithms for high-dimensional parameter spaces with applications to empirical interpolation and reduced basis methods. M2AN, accepted, 2013.
[Ho33] H. Hotelling: Analysis of a complex of statistical variables with principal components. Journal of Educational Psychology, 1933.
[HRSP07] D.B.P. Huynh, G. Rozza, S. Sen, A.T. Patera: A successive constraint linear optimization method for lower bounds of parametric coercivity and inf-sup stability constants. C.R. Acad. Sci. Paris, Series I, 345:473-478, 2007.
[Jo02] I.T. Jolliffe: Principal Component Analysis. Springer Series in Statistics, 2002.
[MPT02] Y. Maday, A.T. Patera, G. Turinici: A priori convergence theory for reduced-basis approximations of single-parameter symmetric coercive elliptic partial differential equations. C.R. Acad. Sci. Paris, Series I, 335:289-294, 2002.
[PR06] A.T. Patera, G. Rozza: Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations. Version 1.0, Copyright MIT 2006, MIT Pappalardo Graduate Monographs in Mechanical Engineering.
[Se08] S.
Sen: Reduced-basis approximation and a posteriori error estimation for many-parameter heat conduction problems. Numerical Heat Transfer, Part B: Fundamentals, 54(5):369-389, 2008.
[VPRP03] K. Veroy, C. Prud'homme, D.V. Rovas, A.T. Patera: A posteriori error bounds for reduced-basis approximation of parametrized noncoercive and nonlinear elliptic partial differential equations. In Proc. 16th AIAA Computational Fluid Dynamics Conference, 2003, Paper 2003-3847.
[Vo13] S. Volkwein: Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling. University of Constance, 2013.
[UVZ12] K. Urban, S. Volkwein, O. Zeeb: Greedy sampling using nonlinear optimization. Preprint, University of Ulm, 2012.

Computer Exercises
1. Installation: install and start RBmatlab according to the instructions given in class.
2. Reproduction of scripts/rb_tutorial.m: reproduce the results of scripts/rb_tutorial.m, steps 1 to 8; some lines are missing in the file scripts/rb_tutorial_buggy.m, which must be filled in.
3. Own experiments, create new steps 9, 10: take the basis from step 4; run the full and reduced simulation for mu=(1,0.2,0.2,0.2) and mu=(1.0,0.1,1.0,0.1); determine the outputs s, s_N, the errors s-s_N, |u-u_N| and the estimators Delta_u, Delta_s, and explain the results with the theoretical findings. Run the greedy procedure (with the error estimator as criterion) for increasing block numbers, i.e. (B1,B2) = (2,2),(3,3),(4,4), and plot the resulting training error estimator decays; how do they compare? (Hint: detailed_data.RB_info.max_err_est_sequence contains the error indicator sequence after the greedy.)