Tuomo Kuusi and I have just posted our paper “Renormalization group and elliptic homogenization in high contrast” to the arXiv.

The paper is about quantitative homogenization of the elliptic equation

(1) $\displaystyle -\nabla \cdot \big( \mathbf{a}(x) \nabla u \big) = 0,$

where $\mathbf{a} = \mathbf{a}(x)$ is a matrix-valued coefficient field. In our setup, we assume that $\mathbf{a}$ satisfies a suitable ellipticity condition (more on that below) and that it is a stationary random field with certain decorrelation properties which I will be vague about in this post (it is ok for the reader to imagine a finite range of dependence).

On large scales, solutions will be close (with high probability) to those of a limiting deterministic, constant-coefficient equation. This limiting procedure is known as *homogenization*. Those of us working in the subfield of quantitative homogenization are interested in obtaining explicit error estimates for this limit. Essentially all results in this direction so far have concerned the asymptotic regime in which the *scale separation* (the ratio of the large scale to the correlation length scale of the field) is very large compared to the *ellipticity contrast ratio* (defined below). In this regime, very precise estimates are available, but they are rather implicit in (or exhibit very bad dependence on) the ellipticity contrast ratio.

Essentially, we understand very precisely the convergence rate of the homogenization limit if the ellipticity ratio is fixed (say, less than 10) and the scale separation is sent to infinity.

Homogenization theory has a lot of potential applications to important problems in mathematical physics (see this for example). Most of these applications do not need super sharp asymptotic bounds on the homogenization error. But what they do exhibit—and this is what often thwarts the use of homogenization methods—is high contrast. Therefore, at least from this point of view, it is more important to quantify the scale at which the (expected) homogenization error is below 1% as a function of the ellipticity contrast ratio, than to understand the precise exponent in the scaling law for the homogenization error or the scaling limit of the first-order correctors.

We call this “the high contrast problem” in homogenization, and in our opinion it is certainly the most important open problem in quantitative homogenization. And it is essentially completely open.

The point of our paper is to prove the first useful/nontrivial estimate for the high contrast homogenization problem. The methods we introduce in the paper can be seen as a rigorous renormalization group argument, and indeed they are the first quantitative homogenization techniques which are *renormalizable* in the sense that they can be iterated across an infinite number of scales.

One application of this work has already appeared: in joint work with Ahmed Bou-Rabee and Tuomo, we proved a (quenched) superdiffusive CLT for a Brownian particle in the curl of a log-correlated Gaussian field. Several times in that work, we needed to use the high contrast results proved in the paper described in this blog post. (In fact, we needed to apply them infinitely many times inside of another renormalization group argument.)

Homogenization for high contrast fields and critical phenomena

Let’s first talk about ellipticity. While in the paper we allow $\mathbf{a}$ to be very degenerate and/or unbounded, let us assume in this blog post that $\mathbf{a}$ satisfies a uniform ellipticity condition, which we write in the form

(2) $\displaystyle \lambda |\xi|^2 \le \xi \cdot \mathbf{a}(x)\xi \quad \text{and} \quad \Lambda^{-1} |\xi|^2 \le \xi \cdot \mathbf{a}(x)^{-1}\xi \,, \qquad \forall \xi \in \mathbb{R}^d,$

where $\lambda$ and $\Lambda$ are positive constants satisfying $\lambda \le \Lambda$. If we split $\mathbf{a}$ into its symmetric part $\mathbf{s} := \frac12(\mathbf{a}+\mathbf{a}^t)$ and antisymmetric part $\mathbf{k} := \frac12(\mathbf{a}-\mathbf{a}^t)$, then the condition (2) is equivalent to the pair of matrix inequalities

(3) $\displaystyle \lambda I_d \le \mathbf{s} \quad \text{and} \quad \mathbf{s} + \mathbf{k}^t \mathbf{s}^{-1} \mathbf{k} \le \Lambda I_d \,.$
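Behind the equivalence of the quadratic-form condition and the matrix inequalities on $\mathbf{s}$ and $\mathbf{k}$ is the linear-algebra identity $\mathbf{s} + \mathbf{k}^t\mathbf{s}^{-1}\mathbf{k} = \big(\mathrm{sym}(\mathbf{a}^{-1})\big)^{-1}$. Here is a quick numerical sanity check of that identity (a sketch of mine; the dimension and sample matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Build a uniformly elliptic, non-symmetric matrix a = s + k,
# with s symmetric positive definite and k antisymmetric.
m = rng.standard_normal((d, d))
s = m @ m.T + d * np.eye(d)          # symmetric part (positive definite)
g = rng.standard_normal((d, d))
k = 0.5 * (g - g.T)                  # antisymmetric part
a = s + k

# Identity: s + k^T s^{-1} k equals the inverse of the symmetric part of a^{-1}.
lhs = s + k.T @ np.linalg.inv(s) @ k
a_inv_sym = 0.5 * (np.linalg.inv(a) + np.linalg.inv(a).T)
rhs = np.linalg.inv(a_inv_sym)

print(np.allclose(lhs, rhs))  # True
```

Since $\xi\cdot\mathbf{a}\xi = \xi\cdot\mathbf{s}\xi$ and $\mathrm{sym}(\mathbf{a}^{-1})$ governs the quadratic form of $\mathbf{a}^{-1}$, the two lower bounds in (2) translate exactly into the two matrix inequalities in (3).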

Whenever we apply an elliptic estimate, such as an energy estimate like Caccioppoli’s inequality, we will see factors of (powers of) the ratio $\Lambda/\lambda$ appearing on the right side of our inequality. This is the ellipticity contrast ratio.
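To make this concrete, here is the standard Caccioppoli computation (a sketch, assuming for simplicity the pointwise bounds $\xi\cdot\mathbf{a}\xi \ge \lambda|\xi|^2$ and $|\mathbf{a}\xi| \le \Lambda|\xi|$):

```latex
% Caccioppoli inequality: where the contrast ratio enters.
% Let u solve -\nabla\cdot(\mathbf{a}\nabla u) = 0 in B_r, and take a cutoff
% \eta \in C_c^\infty(B_r) with \eta \equiv 1 on B_{r/2}, |\nabla\eta| \le C/r.
% Testing the equation with \eta^2 u gives
\begin{align*}
\lambda \int \eta^2 |\nabla u|^2
  &\le \int \eta^2 \, \nabla u \cdot \mathbf{a} \nabla u
   = -2 \int \eta u \, \nabla \eta \cdot \mathbf{a} \nabla u \\
  &\le 2\Lambda \int \eta |u| |\nabla \eta| |\nabla u|
   \le \frac{\lambda}{2} \int \eta^2 |\nabla u|^2
     + \frac{2\Lambda^2}{\lambda} \int u^2 |\nabla \eta|^2 .
\end{align*}
% Absorbing the first term on the right into the left side yields
\[
\int_{B_{r/2}} |\nabla u|^2
  \;\le\; \frac{C}{r^2} \Big( \frac{\Lambda}{\lambda} \Big)^{2} \int_{B_r} u^2 ,
\]
% with the square of the ellipticity contrast ratio on the right side.
```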

Homogenization is a nonlinear averaging of the coefficient field on large scales, but this “averaging” is indeed very nonlinear, and it becomes more singular as the ellipticity ratio gets large. In the small contrast regime (for $\Lambda/\lambda$ very close to one), the homogenization procedure becomes linear. So the ratio $\Lambda/\lambda$ can be thought of as quantifying the degree of nonlinearity/singularity in the system.

To illustrate this point, let’s consider a basic example. Take a Poisson point process on $\mathbb{R}^d$ with intensity $t>0$ and let $\mathcal{C}$ be the random set obtained by taking the union of unit balls with centers at the Poisson points. Define the coefficient field $\mathbf{a} := \big( \lambda \mathbb{1}_{\mathbb{R}^d \setminus \mathcal{C}} + \Lambda \mathbb{1}_{\mathcal{C}} \big) I_d$, which has the physical interpretation of a conductivity of a random material. What should we expect the solutions of (1) to look like on large scales? If $\Lambda$ is very large and $\lambda$ is very small, then the connectedness of the set $\mathcal{C}$ should determine the effective behavior of the system. If $\mathcal{C}$ is connected on large scales (if it percolates), then we should expect the homogenized matrix $\bar{\mathbf{a}}$ (representing effective conductivity) to be large, whereas if $\mathcal{C}$ is not connected, then it should be small.
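A minimal numerical sketch of this example (my choices of dimension, box size, intensity, and contrast are arbitrary, just for illustration): sample a Poisson point process in a box and evaluate the two-valued conductivity on a grid.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 20.0               # side length of the box [0, L]^2 (take d = 2)
t = 0.3                # intensity of the Poisson point process
lam, Lam = 1.0, 100.0  # conductivity outside / inside the occupied set

# Poisson point process on the box: Poisson number of points, placed uniformly.
n = rng.poisson(t * L**2)
points = rng.uniform(0.0, L, size=(n, 2))

# Occupied set: union of unit balls centered at the Poisson points.
def in_occupied_set(x):
    return np.any(np.linalg.norm(points - x, axis=1) <= 1.0)

# Scalar conductivity a(x) (a multiple of the identity matrix).
def a(x):
    return Lam if in_occupied_set(x) else lam

# Evaluate on a grid and report the volume fraction of the occupied set.
grid = np.linspace(0.0, L, 100)
values = np.array([[a(np.array([x, y])) for x in grid] for y in grid])
print("occupied volume fraction:", np.mean(values == Lam))
```

Varying the intensity `t` moves the occupied set through its percolation transition, which is exactly the mechanism discussed next.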

Let us call $\mathcal{X}$ the *length scale of homogenization*, which we can define (very vaguely in this blog post) as the random scale at which the homogenization error is below some arbitrary threshold, perhaps one percent. How large should we expect $\mathcal{X}$ to be? If $\Lambda/\lambda$ is large, then we should expect $\mathcal{X}$ to be at least as large as the *correlation length* of the underlying percolation problem.

Now, tune the parameter $t$ so that it is equal to the percolation threshold $t_c$ for the set $\mathcal{C}$ (any larger and $\mathcal{C}$ would have an unbounded connected component with probability one, any smaller and it would have no unbounded components). In that case, the percolation problem is critical and the correlation length diverges. Therefore, we should also expect the homogenization length scale $\mathcal{X}$ to diverge for this model as $\Lambda/\lambda \to \infty$.

It is widely believed (but only proved in certain cases) that, for $t$ near $t_c$, the correlation length should be of the order $|t-t_c|^{-\nu}$ for a *critical exponent* $\nu > 0$. In analogy, we should expect, for the particular coefficient field above, the homogenization length scale $\mathcal{X}$ to be typically of order $(\Lambda/\lambda)^{\beta}$ for some exponent $\beta > 0$.

For a general coefficient field $\mathbf{a}$, the homogenization length scale will depend in a very complicated way on the particular geometry of the random field. But we may expect that embedding a percolation problem into our homogenization problem is probably the worst thing that can happen, so we should conjecture a power-like upper bound $\mathcal{X} \le C (\Lambda/\lambda)^{\beta}$ to be valid, in general.

We should expect that proving this will be very difficult. Indeed, power-like upper bounds on the correlation length are not known in percolation theory, except in very special cases. The best general upper bound for bond percolation on the lattice $\mathbb{Z}^d$ can be found in this paper, which says that

(4)

See also the very extensive answer of Tom Hutchcroft to my MathOverflow question about this. Note that for site percolation we know some things in certain cases, as he explains, but for bond percolation we know nothing beyond (4) except in large dimensions.

Our main result

When we set out to work on this problem with Tuomo, the bound we thought we would prove was something analogous to (4): exponential in a power of $\Lambda/\lambda$. We were hoping to make this power as small as possible. Very much to our surprise, we eventually succeeded in proving something much better.

Here is the rough, informal statement. (See Theorems A and B in the paper for precise versions.)

**Theorem (A. & Kuusi 2024).** *The homogenization length scale $\mathcal{X}$ satisfies the upper bound*

(5) $\displaystyle \mathcal{X} \le C \exp\Big( C \big( \log \tfrac{\Lambda}{\lambda} \big)^2 \Big),$

*where the constant $C$ depends only on the dimension $d$.*

While this bound is not as good as the conjectured power-like upper bound, it is “close” to power-like (or at least much closer than an exponential bound), since

(6) $\displaystyle \exp\Big( C \big( \log \tfrac{\Lambda}{\lambda} \big)^2 \Big) = \Big( \tfrac{\Lambda}{\lambda} \Big)^{C \log (\Lambda/\lambda)}.$

So it’s a power-like bound with logarithmically diverging exponent.
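A quick numerical check of this rewriting and of the growth comparison (setting the constant $C$ to $1$ and writing `Theta` for the contrast ratio $\Lambda/\lambda$, purely for illustration):

```python
import math

# Theta stands for the ellipticity contrast ratio Lambda/lambda; take C = 1.
for Theta in [10.0, 100.0, 1000.0]:
    lhs = math.exp(math.log(Theta) ** 2)   # exp(C (log Theta)^2), C = 1
    rhs = Theta ** math.log(Theta)         # Theta^(C log Theta)
    assert math.isclose(lhs, rhs, rel_tol=1e-9)

# Superpolynomial in Theta, yet far smaller than exponential in Theta:
Theta = 100.0
assert math.exp(math.log(Theta) ** 2) < math.exp(Theta)
print("ok")
```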

There are situations, like the one encountered in our “superdiffusive” work with Ahmed mentioned above, in which the superpolynomial bound (5) is enough for what is needed, whereas an exponential bound would not be helpful.

On a subjective level, the ideas introduced in the paper to achieve (5) also leave us feeling much more optimistic about the possibility of proving the conjectured power-like bound in the future. What once seemed hopelessly impossible now seems slightly less so.

Most of the non-rigorous heuristic derivations of critical exponents in the physics literature are based on renormalization group (RG) arguments, in which the scale of the system is slowly dilated (zooming out) and the “flow” of “effective” parameters is studied as a function of the (logarithm of the) scale. RG arguments are notoriously difficult to make rigorous.

Our proof of the above theorem is a rigorous RG argument. On a technical level, it has similarities to previous works of ours and our collaborators in quantitative homogenization, which we have also sometimes described as “renormalization” methods. The difference here is that our arguments are quite a bit more sophisticated, to the point that it is no longer just an analogy: there is a literal renormalization group. We consider the new coefficient field obtained by dilating the equation (zooming out), observe that the effective parameters governing this new equation change as a function of the scale, and quantify the rate at which they become better.

What are the effective parameters? The ellipticity contrast, of course. We define a *coarse-grained ellipticity condition*, analogous to (3) except that we substitute the *coarse-grained diffusion matrices* in place of the pointwise values of the field. The coarse-grained ellipticity contrast ratio is then defined in the obvious way.

These coarse-grained diffusion matrices are the key objects in our analysis, and these are the same objects which have appeared in previous works. The new realization is that they can be used to define ellipticity itself, and that a coarse-grained version of elliptic theory can be substituted in place of the usual one.

Homogenization is then nothing but the convergence, under the RG flow, of the coarse-grained ellipticity contrast to one. The key point in the paper is the derivation of a finite difference inequality which upper bounds the rate at which the effective ellipticity contrast converges to one, as a function of log(scale).

The reason why proving a theorem like this is hard is that the parameters in this finite difference inequality cannot depend on the ellipticity constants. Any dependence on ellipticity will be power-like, and any power-like dependence iterated across the scales results in exponential dependence. In this paper we manage to show that, if the “steps” in our finite difference inequality, representing the gaps between two successive scales, are logarithmically large in terms of the ellipticity contrast ratio (like $t$ to $t + C\log(\Lambda/\lambda)$ rather than $t$ to $t+1$, where $t$ represents the log of the scale), then we are able to obtain such an ellipticity-free estimate.
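Here is the step-counting heuristic that connects logarithmically large steps to a bound of the shape of (5) (a rough sketch of mine, writing $\Theta := \Lambda/\lambda$, not the precise argument of the paper):

```latex
% Heuristic bookkeeping, with \Theta := \Lambda/\lambda and t = log(scale).
% Suppose each RG step advances the log-scale by a gap of size C\log\Theta,
% and on the order of \log\Theta such steps suffice to bring the
% coarse-grained contrast down to a constant. The total log-scale is then
\[
  \underbrace{C\log\Theta}_{\text{number of steps}}
  \times
  \underbrace{C\log\Theta}_{\text{gap size}}
  \;=\; C(\log\Theta)^2 ,
\]
% which corresponds to a physical scale of
\[
  \exp\!\big( C(\log\Theta)^2 \big) \;=\; \Theta^{\,C\log\Theta},
\]
% i.e., a superpolynomial but subexponential dependence on the contrast.
```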

Since our assumptions (that the coarse-grained ellipticity contrast is bounded) are no stronger than our conclusions, we can iterate our analysis across an infinite number of scales, as is often required in RG arguments and as we have already demonstrated in the work with Ahmed.

The results in this paper were previously announced by Tuomo in November of last year at the workshop “New trends in homogenization” in Roscoff, France. It is quite a long paper which will take a while to digest, even for experts. Anyone attempting to do so should feel free to get in touch with us with any comments/questions.