2.4 Gradient Estimates
The basic idea in the treatment of gradient estimates, due to Bernstein, involves differentiation of the equation with respect to $x_k$, $k = 1, \dots, n$, followed by multiplication by $D_ku$ and summation over $k$. The maximum principle is then applied to the resulting equation in the function $v = |Du|^2$, possibly with some modification. There are two kinds of gradient estimates: global gradient estimates and interior gradient estimates. We will use semilinear equations to illustrate the idea.
Suppose $\Omega$ is a bounded and connected domain in $\mathbb{R}^n$. Consider the equation
\[
a_{ij}(x)D_{ij}u + b_i(x)D_iu = f(x, u) \quad \text{in } \Omega
\]
for $u \in C^2(\Omega)$ and $f \in C(\Omega \times \mathbb{R})$. We always assume that $a_{ij}$ and $b_i$ are continuous and hence bounded in $\bar\Omega$ and that the equation is uniformly elliptic in $\Omega$ in the following sense:
\[
a_{ij}(x)\xi_i\xi_j \ge \lambda|\xi|^2 \quad \text{for any } x \in \Omega \text{ and any } \xi \in \mathbb{R}^n
\]
for some positive constant $\lambda$.
Proposition 2.18. Suppose $u \in C^3(\Omega) \cap C^1(\bar\Omega)$ satisfies
\[
a_{ij}(x)D_{ij}u + b_i(x)D_iu = f(x, u) \quad \text{in } \Omega \tag{2.1}
\]
for $a_{ij}, b_i \in C^1(\Omega)$ and $f \in C^1(\Omega \times \mathbb{R})$. Then there holds
\[
\sup_\Omega |Du| \le \sup_{\partial\Omega} |Du| + C,
\]
where $C$ is a positive constant depending only on $\lambda$, $\operatorname{diam}(\Omega)$, $\|a_{ij}, b_i\|_{C^1(\Omega)}$, $M = \|u\|_{L^\infty(\Omega)}$, and $\|f\|_{C^1(\Omega \times [-M,M])}$.
Proof. Set $L \equiv a_{ij}D_{ij} + b_iD_i$. We calculate $L(|Du|^2)$ first. Note
\[
D_i(|Du|^2) = 2D_kuD_{ki}u, \qquad
D_{ij}(|Du|^2) = 2D_{ki}uD_{kj}u + 2D_kuD_{kij}u.
\]
Differentiating (2.1) with respect to $x_k$, multiplying by $D_ku$, and summing over $k$, we have by the identities above
\[
\begin{aligned}
a_{ij}D_{ij}(|Du|^2) + b_iD_i(|Du|^2)
&= 2a_{ij}D_{ki}uD_{kj}u - 2D_ka_{ij}\,D_ku\,D_{ij}u \\
&\quad - 2D_kb_i\,D_ku\,D_iu + 2D_uf\,|Du|^2 + 2D_{x_k}f\,D_ku.
\end{aligned}
\]
The ellipticity assumption implies
\[
\sum_{i,j,k} a_{ij}D_{ki}uD_{kj}u \ge \lambda|D^2u|^2.
\]
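This follows by applying the ellipticity condition, for each fixed $k$, to the vector $\xi = (D_{k1}u, \dots, D_{kn}u)$ and summing over $k$; a one-line check:
\[
\sum_k a_{ij}(D_{ki}u)(D_{kj}u) \ge \lambda \sum_k \sum_i |D_{ki}u|^2 = \lambda|D^2u|^2.
\]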
By the Cauchy inequality, we have
\[
L(|Du|^2) \ge \lambda|D^2u|^2 - C|Du|^2 - C,
\]
with $C$ a positive constant depending only on $\|f\|_{C^1(\Omega \times [-M,M])}$, $\|a_{ij}, b_i\|_{C^1(\Omega)}$, and $\lambda$.
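Here the Cauchy inequality is used in the form $2AB \le \varepsilon A^2 + \varepsilon^{-1}B^2$. To indicate where the constants come from, the only term on the right-hand side containing second derivatives can be absorbed as follows (a sketch, with the choice $\varepsilon = \lambda$):
\[
2\bigl|D_ka_{ij}\,D_ku\,D_{ij}u\bigr| \le \lambda|D^2u|^2 + \frac{C}{\lambda}\,|Du|^2,
\]
which eats half of the good term $2a_{ij}D_{ki}uD_{kj}u \ge 2\lambda|D^2u|^2$ and leaves $\lambda|D^2u|^2$; the remaining first-order terms are bounded directly by $C|Du|^2 + C$.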
We now add another term, $u^2$. We have by the ellipticity assumption
\[
L(u^2) = 2a_{ij}D_iuD_ju + 2u\bigl(a_{ij}D_{ij}u + b_iD_iu\bigr) \ge 2\lambda|Du|^2 + 2uf.
\]
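The identity here is just the product rule, $D_{ij}(u^2) = 2D_iuD_ju + 2uD_{ij}u$, combined with the equation. The term $2uf$ is then bounded below by a constant,
\[
2uf \ge -2M \sup_{\Omega \times [-M,M]} |f| = -C,
\]
which is why the constant below depends in addition on $M$.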
Therefore we obtain
\[
\begin{aligned}
L(|Du|^2 + \alpha u^2) &\ge \lambda|D^2u|^2 + (2\lambda\alpha - C)|Du|^2 - C \\
&\ge \lambda|D^2u|^2 + |Du|^2 - C
\end{aligned}
\]
if we choose $\alpha > 0$ large, say $\alpha \ge (C+1)/(2\lambda)$, with $C$ depending in addition on $M$. In order to control the constant term we consider another function, $e^{\beta x_1}$ for $\beta > 0$. Hence we get
\[
L(|Du|^2 + \alpha u^2 + e^{\beta x_1}) \ge \lambda|D^2u|^2 + |Du|^2 + \bigl\{\beta^2a_{11}e^{\beta x_1} + \beta b_1e^{\beta x_1} - C\bigr\}.
\]
We may assume $\Omega \subset \{x_1 > 0\}$, after a translation if necessary; then $e^{\beta x_1} \ge 1$ for any $x \in \Omega$. By choosing $\beta$ large, we may make the last term positive.
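To see that such a $\beta$ exists, note that ellipticity applied with $\xi = e_1$ gives $a_{11} \ge \lambda$, so (a short check)
\[
\beta^2a_{11}e^{\beta x_1} + \beta b_1e^{\beta x_1} \ge \bigl(\lambda\beta^2 - \beta\sup_\Omega|b_1|\bigr)e^{\beta x_1} \ge \lambda\beta^2 - \beta\sup_\Omega|b_1| \ge C
\]
once $\beta \ge \lambda^{-1}\sup_\Omega|b_1|$ (so that the prefactor is nonnegative and $e^{\beta x_1} \ge 1$ applies) and $\beta$ is large enough.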
Therefore, if we set $w = |Du|^2 + \alpha u^2 + e^{\beta x_1}$ for large $\alpha$ and $\beta$ depending only on $\lambda$, $\operatorname{diam}(\Omega)$, $\|a_{ij}, b_i\|_{C^1(\Omega)}$, $M = \|u\|_{L^\infty(\Omega)}$, and $\|f\|_{C^1(\Omega \times [-M,M])}$, then we obtain
\[
Lw \ge 0 \quad \text{in } \Omega.
\]
By the maximum principle we have
\[
\sup_\Omega w \le \sup_{\partial\Omega} w.
\]
Since $|Du|^2 \le w$, the desired estimate follows. This finishes the proof.
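Explicitly, unwinding the definition of $w$ (a short verification; after the translation above, $\Omega \subset \{0 < x_1 < D\}$ with $D = \operatorname{diam}(\Omega)$):
\[
\sup_\Omega |Du|^2 \le \sup_{\partial\Omega} w \le \sup_{\partial\Omega} |Du|^2 + \alpha M^2 + e^{\beta D},
\]
and taking square roots, using $\sqrt{s+t} \le \sqrt{s} + \sqrt{t}$, gives $\sup_\Omega |Du| \le \sup_{\partial\Omega} |Du| + C$.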
Similarly, we can discuss the interior gradient bound. In this case, we only require a bound on $\sup_\Omega |u|$.
Proposition 2.19. Suppose $u \in C^3(\Omega)$ satisfies
\[
a_{ij}(x)D_{ij}u + b_i(x)D_iu = f(x, u) \quad \text{in } \Omega
\]
for $a_{ij}, b_i \in C^1(\Omega)$ and $f \in C^1(\Omega \times \mathbb{R})$. Then for any compact subset $\Omega' \Subset \Omega$ there holds
\[
\sup_{\Omega'} |Du| \le C,
\]
where $C$ is a positive constant depending only on $\lambda$, $\operatorname{diam}(\Omega)$, $\operatorname{dist}(\Omega', \partial\Omega)$, $\|a_{ij}, b_i\|_{C^1(\Omega)}$, $M = \|u\|_{L^\infty(\Omega)}$, and $\|f\|_{C^1(\Omega \times [-M,M])}$.
Proof. We take a cutoff function $\gamma \in C_0^\infty(\Omega)$ with $\gamma \ge 0$ and consider an auxiliary function of the following form:
\[
w = \gamma|Du|^2 + \alpha|u|^2 + e^{\beta x_1}.
\]
Set $v = \gamma|Du|^2$. Then, for the operator $L$ defined as before, we have
\[
Lv = (L\gamma)|Du|^2 + \gamma L(|Du|^2) + 2a_{ij}D_i\gamma\,D_j(|Du|^2).
\]
Recall an inequality from the proof of Proposition 2.18:
\[
L(|Du|^2) \ge \lambda|D^2u|^2 - C|Du|^2 - C.
\]
Hence, using $D_j(|Du|^2) = 2D_kuD_{kj}u$, we have
\[
Lv \ge \lambda\gamma|D^2u|^2 + 4a_{ij}D_ku\,D_i\gamma\,D_{kj}u - C|Du|^2 + (L\gamma)|Du|^2 - C.
\]
The Cauchy inequality implies, for any $\varepsilon > 0$,
\[
\bigl|4a_{ij}D_ku\,D_i\gamma\,D_{kj}u\bigr| \le \varepsilon|D\gamma|^2|D^2u|^2 + C(\varepsilon)|Du|^2.
\]
For the cutoff function $\gamma$, we require that
\[
|D\gamma|^2 \le C\gamma \quad \text{in } \Omega.
\]
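Such a cutoff always exists; one standard construction (not spelled out in the text) is to square an ordinary cutoff: take $\eta \in C_0^\infty(\Omega)$ with $0 \le \eta \le 1$ and $\eta \equiv 1$ on $\Omega'$, and set $\gamma = \eta^2$. Then
\[
|D\gamma|^2 = 4\eta^2|D\eta|^2 \le 4\|D\eta\|_{L^\infty}^2\,\gamma,
\]
and $\|D\eta\|_{L^\infty}$, hence the constant $C$, is controlled by $\operatorname{dist}(\Omega', \partial\Omega)$; this is where that quantity enters the final estimate.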
Therefore, by taking $\varepsilon > 0$ small, we have
\[
\begin{aligned}
Lv &\ge \lambda\gamma|D^2u|^2\left(1 - \frac{\varepsilon}{\lambda}\cdot\frac{|D\gamma|^2}{\gamma}\right) - C|Du|^2 - C \\
&\ge \frac{1}{2}\,\lambda\gamma|D^2u|^2 - C|Du|^2 - C.
\end{aligned}
\]
Now we may proceed as before.
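For completeness, here is a sketch of the omitted conclusion, with the same choices of $\alpha$ and $\beta$ (and the same translation) as in the proof of Proposition 2.18: for $w = v + \alpha|u|^2 + e^{\beta x_1}$ one obtains $Lw \ge 0$ in $\Omega$, and hence, by the maximum principle and since $\gamma = 0$ on $\partial\Omega$ while $\gamma \equiv 1$ on $\Omega'$,
\[
\sup_{\Omega'} |Du|^2 \le \sup_\Omega w \le \sup_{\partial\Omega} w \le \alpha M^2 + e^{\beta\operatorname{diam}(\Omega)},
\]
which is the desired interior bound.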
In the rest of this section we use barrier functions to derive the boundary gradient estimates. We need to assume that the domain $\Omega$ satisfies the uniform exterior sphere property: there exists $R > 0$ such that for every $x_0 \in \partial\Omega$ there is a ball $B_R(y)$ with $\bar B_R(y) \cap \bar\Omega = \{x_0\}$.
Proposition 2.20. Suppose $u \in C^2(\Omega) \cap C(\bar\Omega)$ satisfies
\[
a_{ij}(x)D_{ij}u + b_i(x)D_iu = f(x, u) \quad \text{in } \Omega
\]
for $a_{ij}, b_i \in C(\Omega)$ and $f \in C(\Omega \times \mathbb{R})$. Then there holds
\[
|u(x) - u(x_0)| \le C|x - x_0| \quad \text{for any } x \in \Omega \text{ and } x_0 \in \partial\Omega,
\]
where $C$ is a positive constant depending only on $\lambda$, $\Omega$, $\|a_{ij}, b_i\|_{L^\infty(\Omega)}$, $M = \|u\|_{L^\infty(\Omega)}$, $\|f\|_{L^\infty(\Omega \times [-M,M])}$, and $\|\varphi\|_{C^2(\bar\Omega)}$ for some $\varphi \in C^2(\bar\Omega)$ with $\varphi = u$ on $\partial\Omega$.
Proof. For simplicity we assume $u = 0$ on $\partial\Omega$. (In the general case one applies the argument below to $u - \varphi$, which vanishes on $\partial\Omega$ and satisfies a similar equation with $f$ replaced by $f - L\varphi$; this is how $\|\varphi\|_{C^2(\bar\Omega)}$ enters the constant.) As before, set $L = a_{ij}D_{ij} + b_iD_i$. Then we have
\[
L(\pm u) = \pm f \ge -F \quad \text{in } \Omega,
\]
where we denote $F = \sup_\Omega |f(\cdot, u)|$. Now fix $x_0 \in \partial\Omega$. We will construct a function $w$ such that
\[
Lw \le -F \ \text{in } \Omega, \qquad w(x_0) = 0, \qquad w|_{\partial\Omega} \ge 0.
\]
Then by the maximum principle we have
\[
-w \le u \le w \quad \text{in } \Omega.
\]
Taking the normal derivative at $x_0$, we have
\[
\left|\frac{\partial u}{\partial n}(x_0)\right| \le \frac{\partial w}{\partial n}(x_0).
\]
So we need to bound $\dfrac{\partial w}{\partial n}(x_0)$ independently of $x_0$.
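In fact, since the barrier $w = \psi(d)$ constructed below is concave in $d$ and vanishes at $x_0$, the two-sided bound $-w \le u \le w$ gives directly (a short check, using $d(x) \le |x - x_0|$):
\[
|u(x) - u(x_0)| = |u(x)| \le \psi(d(x)) \le \psi'(0)\,d(x) \le \psi'(0)\,|x - x_0|,
\]
so it suffices to control $\psi'(0)$.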
Consider the exterior ball $B_R(y)$ with $\bar B_R(y) \cap \bar\Omega = \{x_0\}$. Define $d(x)$ as the distance from $x$ to $\partial B_R(y)$. Then we have
\[
0 < d(x) < D \equiv \operatorname{diam}(\Omega) \quad \text{for any } x \in \Omega.
\]
In fact, $d(x) = |x - y| - R$ for any $x \in \Omega$. Consider $w = \psi(d)$ for some function $\psi$ defined on $[0, \infty)$. Then we need:
\[
\psi(0) = 0 \quad (\Longrightarrow\ w(x_0) = 0),
\]
\[
\psi(d) > 0 \ \text{for } d > 0 \quad (\Longrightarrow\ w|_{\partial\Omega} \ge 0),
\]
and $\psi'(0)$ is controlled. From the first two conditions, it is natural to require that $\psi'(d) > 0$. Note that, since $D_iw = \psi'D_id$ and $D_{ij}w = \psi''D_idD_jd + \psi'D_{ij}d$,
\[
Lw = \psi''a_{ij}D_idD_jd + \psi'a_{ij}D_{ij}d + \psi'b_iD_id.
\]
Direct calculation yields
\[
D_id(x) = \frac{x_i - y_i}{|x - y|}, \qquad
D_{ij}d(x) = \frac{\delta_{ij}}{|x - y|} - \frac{(x_i - y_i)(x_j - y_j)}{|x - y|^3},
\]
which imply $|Dd| = 1$ and, with $\Lambda = \sup|a_{ij}|$,
\[
a_{ij}D_{ij}d = \frac{a_{ii}}{|x - y|} - \frac{a_{ij}D_idD_jd}{|x - y|}
\le \frac{n\Lambda}{|x - y|} - \frac{\lambda}{|x - y|}
= \frac{n\Lambda - \lambda}{|x - y|}
\le \frac{n\Lambda - \lambda}{R}.
\]
Therefore we obtain by ellipticity
\[
Lw \le \psi''a_{ij}D_idD_jd + \psi'\left(\frac{n\Lambda - \lambda}{R} + |b|\right)
\le \lambda\psi'' + \psi'\left(\frac{n\Lambda - \lambda}{R} + |b|\right)
\]
if we require $\psi'' < 0$. Hence, in order to have $Lw \le -F$, we need
\[
\lambda\psi'' + \psi'\left(\frac{n\Lambda - \lambda}{R} + |b|\right) + F \le 0.
\]
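Before solving this, note which constants suffice (a short check; here $a$ and $b$ denote the scalar constants of the next display, not the coefficients $a_{ij}$, $b_i$): dividing by $\lambda$, one may take
\[
a = \frac{1}{\lambda}\left(\frac{n\Lambda - \lambda}{R} + \sup_\Omega|b|\right), \qquad b = \frac{F}{\lambda},
\]
for then any $\psi$ with $\psi' > 0$ and $\psi'' + a\psi' + b = 0$ satisfies the required inequality.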
To this end, we study the equation, for some positive constants $a$ and $b$,
\[
\psi'' + a\psi' + b = 0,
\]
whose solution is given by
\[
\psi(d) = -\frac{b}{a}\,d + \frac{C_1}{a} - \frac{C_2}{a}\,e^{-ad}
\]
for some constants $C_1$ and $C_2$. For $\psi(0) = 0$, we need $C_1 = C_2$. Hence we have, for some constant $C$,
\[
\psi(d) = -\frac{b}{a}\,d + \frac{C}{a}\bigl(1 - e^{-ad}\bigr),
\]
which implies
\[
\psi'(d) = Ce^{-ad} - \frac{b}{a} = e^{-ad}\left(C - \frac{b}{a}\,e^{ad}\right), \qquad
\psi''(d) = -Ca\,e^{-ad}.
\]
In order to have $\psi'(d) > 0$ on $[0, D)$, we need $C \ge \frac{b}{a}\,e^{aD}$. Since then $\psi'(d) > 0$ for $0 \le d < D$, we get $\psi(d) > \psi(0) = 0$ for any $d \in (0, D)$. Therefore we take
\[
\psi(d) = -\frac{b}{a}\,d + \frac{b}{a^2}\,e^{aD}\bigl(1 - e^{-ad}\bigr)
= \frac{b}{a}\left(\frac{1}{a}\,e^{aD}\bigl(1 - e^{-ad}\bigr) - d\right).
\]
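With this choice, the controlled quantity from the list of requirements is explicit (a one-line check):
\[
\psi'(0) = C - \frac{b}{a} = \frac{b}{a}\bigl(e^{aD} - 1\bigr),
\]
which depends only on $a$, $b$, and $D$, that is, only on $\lambda$, $\Lambda$, $R$, $\sup_\Omega|b_i|$, $F$, and $\operatorname{diam}(\Omega)$; in particular, since $|Dd| = 1$, we get $\partial w/\partial n(x_0) \le \psi'(0)$, bounded independently of $x_0$.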
Such $\psi$ satisfies all the requirements we imposed. This finishes the proof.