# Hiptmair–Xu preconditioner

In mathematics, Hiptmair–Xu (HX) preconditioners[1] are preconditioners for solving $H(\operatorname{curl})$ and $H(\operatorname{div})$ problems, based on the auxiliary space preconditioning framework.[2] An important ingredient in the derivation of HX preconditioners in two and three dimensions is the so-called regular decomposition, which decomposes a Sobolev space function into a component of higher regularity and a scalar or vector potential. The key to the success of HX preconditioners is the discrete version of this decomposition, known as the HX decomposition, which decomposes a discrete Sobolev space function into a discrete component of higher regularity, a discrete scalar or vector potential, and a high-frequency component.

HX preconditioners have been used to accelerate a wide variety of solution techniques, thanks to their highly scalable parallel implementations, and are known as the AMS[3] and ADS[4] preconditioners. The HX preconditioner was identified by the U.S. Department of Energy as one of the top ten breakthroughs in computational science[5] in recent years. Researchers from Sandia, Los Alamos, and Lawrence Livermore National Laboratories use this algorithm for modeling fusion with magnetohydrodynamic equations.[6] Moreover, this approach is also expected to be instrumental in developing optimal iterative methods in structural mechanics, electrodynamics, and the modeling of complex flows.

## Hiptmair–Xu preconditioner

Consider the following $H(\operatorname{curl})$ problem: find $u \in H_h(\operatorname{curl})$ such that

$$(\operatorname{curl} u, \operatorname{curl} v) + \tau (u, v) = (f, v), \quad \forall\, v \in H_h(\operatorname{curl}),$$

with $\tau > 0$. The corresponding matrix form is

$$A_{\operatorname{curl}} u = f.$$

The HX preconditioner for the $H(\operatorname{curl})$ problem is defined as

$$B_{\operatorname{curl}} = S_{\operatorname{curl}} + \Pi_h^{\operatorname{curl}} \, A_{\mathrm{vgrad}}^{-1} \, (\Pi_h^{\operatorname{curl}})^T + \operatorname{grad} \, A_{\operatorname{grad}}^{-1} \, (\operatorname{grad})^T,$$

where $S_{\operatorname{curl}}$ is a smoother (e.g., a Jacobi or Gauss–Seidel smoother), $\Pi_h^{\operatorname{curl}}$ is the canonical interpolation operator for the $H_h(\operatorname{curl})$ space, $A_{\mathrm{vgrad}}$ is the matrix representation of the discrete vector Laplacian defined on $[H_h(\operatorname{grad})]^n$, $\operatorname{grad}$ is the discrete gradient operator, and $A_{\operatorname{grad}}$ is the matrix representation of the discrete scalar Laplacian defined on $H_h(\operatorname{grad})$. Based on the auxiliary space preconditioning framework, one can show that

$$\kappa(B_{\operatorname{curl}} A_{\operatorname{curl}}) \leq C,$$

where $\kappa(A)$ denotes the condition number of the matrix $A$.
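The additive structure of $B_{\operatorname{curl}}$ translates directly into code: each application of the preconditioner is one smoothing sweep plus two auxiliary-space corrections, one lifted through the interpolation and one through the discrete gradient. The sketch below is a minimal illustration with small stand-in matrices rather than an actual edge-element discretization; the names `Pi` and `G`, the Jacobi choice of smoother, and the toy operators in the demonstration are all assumptions made for the example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def hx_curl_apply(r, A_curl, Pi, A_vgrad, G, A_grad):
    """One application of B_curl = S + Pi A_vgrad^{-1} Pi^T + G A_grad^{-1} G^T.

    S is taken here as a Jacobi smoother D^{-1}; Pi plays the role of the
    canonical interpolation operator and G the discrete gradient."""
    x = r / A_curl.diagonal()                                      # smoother term
    x = x + Pi @ spla.spsolve(sp.csc_matrix(A_vgrad), Pi.T @ r)    # vector-potential correction
    x = x + G @ spla.spsolve(sp.csc_matrix(A_grad), G.T @ r)       # scalar-potential correction
    return x

# --- toy demonstration with stand-in matrices (not a real FEM assembly) ---
def laplacian_1d(n):
    # SPD 1-D Laplacian as a stand-in for the discrete (vector) Laplacian
    return sp.eye(n) * 2 - sp.eye(n, k=1) - sp.eye(n, k=-1)

n_e, n_a, n_v = 12, 10, 8                                # edge / auxiliary / vertex dof counts
Pi = sp.random(n_e, n_a, density=0.4, random_state=0)    # stand-in interpolation matrix
G = sp.random(n_e, n_v, density=0.4, random_state=1)     # stand-in discrete gradient
A_curl = sp.diags(np.arange(1.0, n_e + 1))               # stand-in SPD curl-curl matrix
x = hx_curl_apply(np.ones(n_e), A_curl, Pi, laplacian_1d(n_a), G, laplacian_1d(n_v))
```

Since each of the three terms is symmetric positive (semi)definite, $B_{\operatorname{curl}}$ is symmetric positive definite and can be used to precondition a conjugate-gradient iteration on $A_{\operatorname{curl}} u = f$.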

In practice, inverting $A_{\mathrm{vgrad}}$ and $A_{\operatorname{grad}}$ may be expensive, especially for large-scale problems. Therefore, their inverses can be replaced by spectrally equivalent approximations $B_{\mathrm{vgrad}}$ and $B_{\operatorname{grad}}$, respectively, and the HX preconditioner for $H(\operatorname{curl})$ becomes

$$B_{\operatorname{curl}} = S_{\operatorname{curl}} + \Pi_h^{\operatorname{curl}} \, B_{\mathrm{vgrad}} \, (\Pi_h^{\operatorname{curl}})^T + \operatorname{grad} \, B_{\operatorname{grad}} \, (\operatorname{grad})^T.$$