Inequalities of the form

$$\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{a_m b_n}{m+n} \le \pi \left(\sum_{m=1}^{\infty}a_m^2\right)^{1/2}\left(\sum_{n=1}^{\infty}b_n^2\right)^{1/2},$$

and more generally, for $1 < p < \infty$ with $\frac{1}{p}+\frac{1}{q}=1$,

$$\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{a_m b_n}{m+n} \le \frac{\pi}{\sin(\pi/p)} \left(\sum_{m=1}^{\infty}a_m^p\right)^{1/p}\left(\sum_{n=1}^{\infty}b_n^q\right)^{1/q},$$

where the sequences $a_m, b_n$ are non-negative, are called Hilbert's inequalities (Hardy-Hilbert inequalities).
This can be viewed in terms of matrices/operators. Let $A : \ell^2 \to \ell^2$ be defined as

$$(Aa)_m = \sum_{n=1}^{\infty}\frac{a_n}{m+n}.$$

We are looking for bounds of the form $\|Aa\|_2 \le C\,\|a\|_2$. By Cauchy-Schwarz (taking $b$ proportional to $Aa$ for the converse direction), this is equivalent to having

$$\langle Aa, b\rangle = \sum_{m,n}\frac{a_n b_m}{m+n} \le C\,\|a\|_2\,\|b\|_2 \quad\text{for all } a, b.$$

Thus we are trying to show the operator $A$ is bounded, and the best constant $C$ is the operator norm $\|A\|$.
We start with the case where the kernel is

$$K(m,n) = \frac{1}{m+n}.$$

Theorem 1: The inequality

$$\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{a_m b_n}{m+n} \le C\left(\sum_{m=1}^{\infty}a_m^2\right)^{1/2}\left(\sum_{n=1}^{\infty}b_n^2\right)^{1/2}$$

holds with $C = \pi$ and no smaller constant. It was first proved by Hermann Weyl, but with $C = 2\pi$; the best constant $\pi$ was later obtained by Schur.
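As a quick numerical sanity check on Theorem 1 (our own sketch, not part of the original argument; the name `hilbert_norm` is made up), the snippet below estimates the operator norm of the $N \times N$ truncation of $(1/(m+n))$ by power iteration. The truncated norms increase with $N$ but stay below $\pi$.

```python
import math

def hilbert_norm(N, iters=200):
    """Estimate the operator norm of the N x N matrix K[m][n] = 1/(m+n),
    1 <= m, n <= N, by power iteration. The matrix is symmetric positive
    definite (1/(m+n) is the Gram matrix of the functions x^(m-1/2) on
    [0,1]), so its norm is the largest eigenvalue, and power iteration
    converges to it."""
    v = [1.0 / math.sqrt(N)] * N
    lam = 0.0
    for _ in range(iters):
        # w = K v, with zero-based indices shifted to 1-based m, n
        w = [sum(v[n] / (m + n + 2.0) for n in range(N)) for m in range(N)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

for N in (10, 50):
    print(N, hilbert_norm(N))  # increasing in N, always below pi
```

The convergence toward $\pi$ is very slow (of order $1/\log^2 N$, as discussed at the end of these notes), so even moderately large sections remain well below $\pi$.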
More generally we have the integral version:

$$\int_0^{\infty}\int_0^{\infty}\frac{f(x)\,g(y)}{x+y}\,dx\,dy \le \pi\left(\int_0^{\infty}f(x)^2\,dx\right)^{1/2}\left(\int_0^{\infty}g(y)^2\,dy\right)^{1/2}.$$

This result can be generalized to the $\ell^p$ inequality

$$\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{a_m b_n}{m+n} \le \frac{\pi}{\sin(\pi/p)}\,\|a\|_p\,\|b\|_q, \qquad \frac{1}{p}+\frac{1}{q}=1.$$
Proofs:
Toeplitz’s method:
Write $\frac{1}{m+n}$ in terms of exponentials (Fourier series) using

$$\frac{1}{k} = \frac{1}{2\pi i}\int_0^{2\pi}(\pi - t)\,e^{ikt}\,dt, \qquad k \ge 1.$$

Now, with $f(t) = \sum_{m=1}^{\infty} a_m e^{imt}$ and $g(t) = \sum_{n=1}^{\infty} b_n e^{int}$,

$$\sum_{m,n}\frac{a_m b_n}{m+n} = \frac{1}{2\pi i}\int_0^{2\pi}(\pi - t)\,f(t)\,g(t)\,dt.$$

Now, bounding $|\pi - t| \le \pi$ and applying Cauchy-Schwarz, we get

$$\left|\sum_{m,n}\frac{a_m b_n}{m+n}\right| \le \frac{\pi}{2\pi}\int_0^{2\pi}|f(t)|\,|g(t)|\,dt \le \pi\left(\frac{1}{2\pi}\int_0^{2\pi}|f(t)|^2\,dt\right)^{1/2}\left(\frac{1}{2\pi}\int_0^{2\pi}|g(t)|^2\,dt\right)^{1/2} = \pi\,\|a\|_2\,\|b\|_2,$$

where the last step is Parseval: just open the square and integrate; only diagonal terms remain, and the off-diagonal terms integrate to zero.
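The integral representation can be seen in action numerically; the sketch below (our own illustration, with made-up names `lhs`/`rhs` and a midpoint-rule quadrature) compares the double sum against the integral for short test vectors.

```python
import cmath
import math

def lhs(a, b):
    """Double sum sum_{m,n} a_m b_n / (m+n), indices starting at 1."""
    return sum(am * bn / (m + n)
               for m, am in enumerate(a, start=1)
               for n, bn in enumerate(b, start=1))

def rhs(a, b, M=20000):
    """(1/(2*pi*i)) * integral_0^{2pi} (pi - t) f(t) g(t) dt, computed by
    the midpoint rule, where f, g are the trigonometric polynomials with
    coefficients a, b on the frequencies 1..len(a), 1..len(b)."""
    h = 2 * math.pi / M
    total = 0j
    for j in range(M):
        t = (j + 0.5) * h
        f = sum(am * cmath.exp(1j * m * t) for m, am in enumerate(a, start=1))
        g = sum(bn * cmath.exp(1j * n * t) for n, bn in enumerate(b, start=1))
        total += (math.pi - t) * f * g * h
    return (total / (2j * math.pi)).real

a = [1.0, 0.5, -0.25]
b = [0.75, -0.5, 1.0]
print(lhs(a, b), rhs(a, b))  # the two values agree
```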
Tightness: How do we know this is optimal?

Consider $a_n = b_n = n^{-\frac{1}{2}-\epsilon}$ and see what happens as $\epsilon \to 0^+$.

We can see that the LHS of the inequality is

$$\sum_{m,n}\frac{(mn)^{-\frac12-\epsilon}}{m+n} \ge \int_1^{\infty}\int_1^{\infty}\frac{(xy)^{-\frac12-\epsilon}}{x+y}\,dx\,dy = \int_1^{\infty}x^{-1-2\epsilon}\left(\int_{1/x}^{\infty}\frac{u^{-\frac12-\epsilon}}{1+u}\,du\right)dx = \frac{\pi + o(1)}{2\epsilon}$$

(substituting $y = xu$; for large $x$ the inner integral tends to $\int_0^{\infty}\frac{u^{-1/2-\epsilon}}{1+u}\,du = \frac{\pi}{\cos(\pi\epsilon)} \to \pi$), and the RHS is

$$C\sum_n n^{-1-2\epsilon} = C\left(\frac{1}{2\epsilon} + O(1)\right);$$

therefore $C$ has to be at least $\pi$.

We crucially used the $\ell^2$ features when we executed Cauchy-Schwarz and Parseval. How do we prove the $\ell^p$ version?
Method of Compensating Difficulties:
To start off, we think of the LHS as a sum over $(m,n)$ of products of the quantities $\frac{a_m}{\sqrt{m+n}}$ and $\frac{b_n}{\sqrt{m+n}}$.

Applying Cauchy-Schwarz we get

$$\sum_{m,n}\frac{a_m b_n}{m+n} \le \left(\sum_{m,n}\frac{a_m^2}{m+n}\right)^{1/2}\left(\sum_{m,n}\frac{b_n^2}{m+n}\right)^{1/2}.$$

But these sums are not convergent: $\sum_n \frac{1}{m+n}$ diverges. So we multiply the factor $\left(\frac{m}{n}\right)^{\lambda}$ into the first sequence and compensate it by multiplying the second by the factor $\left(\frac{n}{m}\right)^{\lambda}$.

That is, we apply Cauchy-Schwarz to $\frac{a_m}{\sqrt{m+n}}\left(\frac{m}{n}\right)^{\lambda}$ and $\frac{b_n}{\sqrt{m+n}}\left(\frac{n}{m}\right)^{\lambda}$

to get

$$\sum_{m,n}\frac{a_m b_n}{m+n} \le \left(\sum_{m,n}\frac{a_m^2}{m+n}\left(\frac{m}{n}\right)^{2\lambda}\right)^{1/2}\left(\sum_{m,n}\frac{b_n^2}{m+n}\left(\frac{n}{m}\right)^{2\lambda}\right)^{1/2}.$$
Now, since $x \mapsto \frac{1}{m+x}\left(\frac{m}{x}\right)^{2\lambda}$ is decreasing,

$$\sum_{n=1}^{\infty}\frac{1}{m+n}\left(\frac{m}{n}\right)^{2\lambda} \le \int_0^{\infty}\frac{1}{m+x}\left(\frac{m}{x}\right)^{2\lambda}\,dx.$$

We have, substituting $x = mu$,

$$\int_0^{\infty}\frac{1}{m+x}\left(\frac{m}{x}\right)^{2\lambda}\,dx = \int_0^{\infty}\frac{u^{-2\lambda}}{1+u}\,du.$$

Therefore we get

$$\sum_{m,n}\frac{a_m b_n}{m+n} \le \left(\int_0^{\infty}\frac{u^{-2\lambda}}{1+u}\,du\right)\|a\|_2\,\|b\|_2.$$

The integral converges for $0 < 2\lambda < 1$ and evaluates to

$$\int_0^{\infty}\frac{u^{-2\lambda}}{1+u}\,du = \frac{\pi}{\sin(2\pi\lambda)}.$$

And at $\lambda = \frac14$, we have the best choice, which is $\pi$.
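The beta-integral evaluation can be checked numerically; the series used below comes from splitting the integral at $u = 1$ and expanding $\frac{1}{1+u}$ geometrically (our own verification sketch; `beta_integral` is a made-up name).

```python
import math

def beta_integral(s, K=200000):
    """Compute integral_0^inf u^(-s)/(1+u) du for 0 < s < 1 via the
    alternating series sum_k (-1)^k [1/(k+s) + 1/(k+1-s)], obtained by
    splitting the integral at u = 1 and substituting u -> 1/u on
    (1, inf). Averaging the last two partial sums accelerates the
    convergence of the alternating series."""
    total = 0.0
    prev = 0.0
    for k in range(K):
        prev = total
        total += (-1) ** k * (1.0 / (k + s) + 1.0 / (k + 1 - s))
    return 0.5 * (total + prev)

for lam in (0.15, 0.25, 0.35):
    s = 2 * lam
    print(lam, beta_integral(s), math.pi / math.sin(math.pi * s))
```

The printed pairs agree, and the value at $\lambda = 1/4$ (i.e., $s = 1/2$) is the minimum, $\pi$.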
This method allows us to do better, for instance when the sum is restricted to $m, n \le N$. Note that then

$$\sum_{n=1}^{N}\frac{1}{m+n}\left(\frac{m}{n}\right)^{2\lambda} \le \int_0^{N}\frac{1}{m+x}\left(\frac{m}{x}\right)^{2\lambda}\,dx.$$

Take $\lambda = \frac14$; this is where we got the best constant. Then

$$\int_0^{N}\frac{\sqrt{m/x}}{m+x}\,dx = 2\arctan\sqrt{N/m} = \pi - 2\arctan\sqrt{m/N}.$$

So we have the improved bound

$$\sum_{n=1}^{N}\frac{1}{m+n}\sqrt{\frac{m}{n}} \le \pi - 2\arctan\sqrt{m/N} \le \pi - \sqrt{m/N},$$

using $2\arctan t \ge t$ on $[0,1]$. So if the sequences are restricted to (supported on) $\{1, \dots, N\}$, we get the improved bound

$$\sum_{m=1}^{N}\sum_{n=1}^{N}\frac{a_m b_n}{m+n} \le \left(\sum_{m=1}^{N}\left(\pi - \sqrt{m/N}\right)a_m^2\right)^{1/2}\left(\sum_{n=1}^{N}\left(\pi - \sqrt{n/N}\right)b_n^2\right)^{1/2}.$$
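In particular, the weighted restricted-sum bound implies $\|H_N\| \le \pi - \sqrt{1/N}$ for the $N \times N$ matrix $H_N = (1/(m+n))$. The sketch below (our own check; `weighted_row_sum` is a name we made up) verifies the row-sum estimate numerically.

```python
import math

def weighted_row_sum(m, N):
    """sum_{n=1}^N (1/(m+n)) * sqrt(m/n), the row quantity bounded above
    by pi - 2*arctan(sqrt(m/N)) <= pi - sqrt(m/N)."""
    return sum(math.sqrt(m / n) / (m + n) for n in range(1, N + 1))

# The integral comparison gives the bound 2*arctan(sqrt(N/m)), which
# equals pi - 2*arctan(sqrt(m/N)); since 2*arctan(t) >= t on [0, 1],
# the row sum is at most pi - sqrt(m/N).
for N in (100, 1000):
    for m in (1, N // 2, N):
        print(m, N, weighted_row_sum(m, N), math.pi - math.sqrt(m / N))
```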
We can carry out all of the above with $p \ne 2$: instead of Cauchy-Schwarz we apply Hölder with exponents $p, q$ to

$$\frac{a_m}{(m+n)^{1/p}}\left(\frac{m}{n}\right)^{\lambda} \quad\text{and}\quad \frac{b_n}{(m+n)^{1/q}}\left(\frac{n}{m}\right)^{\lambda}.$$

We get

$$\sum_{m,n}\frac{a_m b_n}{m+n} \le \left(\sum_{m,n}\frac{a_m^p}{m+n}\left(\frac{m}{n}\right)^{p\lambda}\right)^{1/p}\left(\sum_{m,n}\frac{b_n^q}{m+n}\left(\frac{n}{m}\right)^{q\lambda}\right)^{1/q}.$$

We bound, as before,

$$\sum_{n=1}^{\infty}\frac{1}{m+n}\left(\frac{m}{n}\right)^{p\lambda} \le \int_0^{\infty}\frac{u^{-p\lambda}}{1+u}\,du = \frac{\pi}{\sin(\pi p\lambda)}, \qquad 0 < p\lambda < 1,$$

and similarly with $q$ in place of $p$. So we have the bound

$$\sum_{m,n}\frac{a_m b_n}{m+n} \le \left(\frac{\pi}{\sin(\pi p\lambda)}\right)^{1/p}\left(\frac{\pi}{\sin(\pi q\lambda)}\right)^{1/q}\|a\|_p\,\|b\|_q.$$

The best constant is obtained when $\lambda = \frac{1}{pq}$, so that $p\lambda = \frac1q$ and $q\lambda = \frac1p$. In this case we get (using $\sin(\pi/q) = \sin(\pi - \pi/p) = \sin(\pi/p)$) the constant $\frac{\pi}{\sin(\pi/p)}$.
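One can confirm numerically that $\lambda = \frac{1}{pq}$ minimizes the constant (our own sketch; `holder_constant` is a hypothetical name, and $p = 3$ is an arbitrary test value).

```python
import math

def holder_constant(p, lam):
    """(pi/sin(pi*p*lam))^(1/p) * (pi/sin(pi*q*lam))^(1/q) with
    1/p + 1/q = 1; valid when 0 < p*lam < 1 and 0 < q*lam < 1."""
    q = p / (p - 1)
    return (math.pi / math.sin(math.pi * p * lam)) ** (1 / p) * \
           (math.pi / math.sin(math.pi * q * lam)) ** (1 / q)

p = 3.0
q = p / (p - 1)
# Admissible range here: 0 < lam < min(1/p, 1/q) = 1/3.
grid = [k / 10000 for k in range(1, 3334)]
best = min(grid, key=lambda l: holder_constant(p, l))
print(best, holder_constant(p, best), 1 / (p * q), math.pi / math.sin(math.pi / p))
```

The minimizer on the grid lands at $\lambda \approx \frac{1}{pq}$ and the minimal value matches $\frac{\pi}{\sin(\pi/p)}$.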
Max version:

$$\sum_{m,n}\frac{a_m b_n}{\max(m,n)} \le 4\left(\sum_m a_m^2\right)^{1/2}\left(\sum_n b_n^2\right)^{1/2};$$

$4$ is the best constant here.
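A numerical sketch of the max version (our own; power iteration applies because $(1/\max(m,n)) = \min(1/m, 1/n)$ is a Gram matrix, hence positive semidefinite, and `max_kernel_norm` is a made-up name):

```python
import math

def max_kernel_norm(N, iters=150):
    """Power-iteration estimate of the operator norm of the N x N
    matrix (1/max(m,n)), 1 <= m, n <= N; the values stay below the
    sharp constant 4 and approach it slowly as N grows."""
    v = [1.0 / math.sqrt(N)] * N
    lam = 0.0
    for _ in range(iters):
        w = [sum(v[n - 1] / max(m, n) for n in range(1, N + 1))
             for m in range(1, N + 1)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

for N in (20, 150):
    print(N, max_kernel_norm(N))  # below 4
```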
Homogeneous Kernels:

A kernel $K$ is homogeneous of degree $-1$ if $K(\lambda x, \lambda y) = \lambda^{-1}K(x,y)$ for all $\lambda > 0$. The integral version is

$$\int_0^{\infty}\int_0^{\infty}K(x,y)\,f(x)\,g(y)\,dx\,dy \le C\,\|f\|_2\,\|g\|_2, \qquad C = \int_0^{\infty}K(1,u)\,u^{-1/2}\,du,$$

and $C$ is the best possible constant for a non-negative kernel homogeneous of degree $-1$.
For the discrete case, we cannot apply some of the rescaling steps in the above argument, but if $K$ is non-negative, homogeneous of degree $-1$, and $x \mapsto K(x,1)$ and $y \mapsto K(1,y)$ are both non-negative decreasing functions, we have

$$\sum_{m,n}K(m,n)\,a_m b_n \le C\,\|a\|_2\,\|b\|_2.$$

By Cauchy-Schwarz (with the compensating factors $(m/n)^{1/4}$ and $(n/m)^{1/4}$) we get

$$\sum_{m,n}K(m,n)\,a_m b_n \le \left(\sum_{m,n}K(m,n)\left(\frac{m}{n}\right)^{1/2}a_m^2\right)^{1/2}\left(\sum_{m,n}K(m,n)\left(\frac{n}{m}\right)^{1/2}b_n^2\right)^{1/2}.$$

Now the summand of

$$\sum_{n=1}^{\infty}K(m,n)\left(\frac{m}{n}\right)^{1/2}$$

is decreasing in $n$, and by the monotonicity we have

$$\sum_{n=1}^{\infty}K(m,n)\left(\frac{m}{n}\right)^{1/2} \le \int_0^{\infty}K(m,x)\left(\frac{m}{x}\right)^{1/2}dx = \int_0^{\infty}K(1,u)\,u^{-1/2}\,du,$$

substituting $x = mu$ and using the homogeneity. Therefore we get

$$\sum_{m,n}K(m,n)\,a_m b_n \le C\,\|a\|_2\,\|b\|_2$$

with the constant

$$C = \int_0^{\infty}K(1,u)\,u^{-1/2}\,du.$$
Note that the above proof contains/matches the proof by compensating difficulties in the case of $K(x,y) = \frac{1}{x+y}$, because this kernel is non-negative and homogeneous of degree $-1$.
The following can be obtained by the same ideas.

$$\sum_{m,n}\frac{a_m b_n}{m+n} \le \pi\,\|a\|_2\,\|b\|_2 \quad\text{because}\quad \int_0^{\infty}\frac{u^{-1/2}}{1+u}\,du = \pi.$$

$$\sum_{m,n}\frac{\log(m/n)}{m-n}\,a_m b_n \le \pi^2\,\|a\|_2\,\|b\|_2 \quad\text{because}\quad \int_0^{\infty}\frac{\log(1/u)}{1-u}\,u^{-1/2}\,du = \pi^2$$

(at $m = n$ the kernel is interpreted by its limit, $\frac1m$). The max version

$$\sum_{m,n}\frac{a_m b_n}{\max(m,n)} \le 4\,\|a\|_2\,\|b\|_2 \quad\text{because}\quad \int_0^{\infty}\frac{u^{-1/2}}{\max(1,u)}\,du = \int_0^{1}u^{-1/2}\,du + \int_1^{\infty}u^{-3/2}\,du = 2 + 2 = 4.$$
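The constant $\pi^2$ for the homogeneous kernel $\frac{\log(x/y)}{x-y}$ (one standard example of this type; our own verification sketch, not part of the notes) can be checked by a geometric-series expansion.

```python
import math

# integral_0^1 (-log u) * u^(k - 1/2) du = 1/(k + 1/2)^2, so expanding
# 1/(1 - u) as a geometric series gives
#   integral_0^1 (log(1/u)/(1-u)) * u^(-1/2) du = sum_k 1/(k+1/2)^2 = pi^2/2,
# and the substitution u -> 1/u shows the piece on (1, inf) has the
# same value, for a total of pi^2.
half = sum(1.0 / (k + 0.5) ** 2 for k in range(10 ** 6))
print(2 * half, math.pi ** 2)  # the truncated series matches pi^2
```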
Under the same assumptions, we can get $\ell^p$ versions:

$$\sum_{m,n}K(m,n)\,a_m b_n \le C_p\,\|a\|_p\,\|b\|_q, \qquad \frac1p + \frac1q = 1,$$

where the constant is

$$C_p = \int_0^{\infty}K(1,u)\,u^{-1/p}\,du.$$
Harder Hilbert Inequality:

For sequences supported on $\{1, \dots, N\}$, the constant $\pi$ can be improved. Using the Fourier formula for $\frac{1}{m+n}$ (Toeplitz method), we see that the LHS is bounded by

$$\frac{1}{2\pi}\int_0^{2\pi}|\pi - t|\,|f(t)|\,|g(t)|\,dt,$$

which by Cauchy-Schwarz is bounded by $\pi\,\|a\|_2\,\|b\|_2$. Note that we are bounding $|\pi - t|$ by the constant $\pi$ on the interval $[0, 2\pi]$. If we also include the variation of this function (and the fact that most of the mass would have to come from near $t = 0$ for the constant bound to be tight), you save a little from the variation of $\pi - t$ near $t = 0$ and get

$$\sum_{m,n=1}^{N}\frac{a_m b_n}{m+n} \le \left(\pi - \frac{c}{\log^2 N}\right)\|a\|_2\,\|b\|_2.$$
We derive the above estimate using some auxiliary functions/operators.

For the $K \times K$ circulant matrix $C_K$ with entries $(C_K)_{mn} = c_{m-n \bmod K}$, where

$$c_0 = \frac{\pi}{K}, \qquad c_r = \frac{\pi}{K}\left(1 + i\cot\frac{\pi r}{K}\right) \quad (1 \le r \le K-1),$$

we see that the operator norm is exactly $\pi$. In fact the eigenvalues are $\pi - \frac{2\pi k}{K}$, $k = 0, 1, \dots, K-1$ (the values of the sawtooth $\pi - t$ at the points $t = \frac{2\pi k}{K}$). Therefore the operator norm, the size of the largest eigenvalue, is $\pi$.

To verify that, just observe that $v_k = \left(e^{2\pi i km/K}\right)_{m=0}^{K-1}$ are the eigenvectors, with eigenvalues $\sum_r c_r e^{-2\pi i kr/K} = \pi - \frac{2\pi k}{K}$.

Thus we have

$$\left|\sum_{m,n} c_{m-n}\,x_n\,\overline{y_m}\right| \le \pi\,\|x\|_2\,\|y\|_2.$$

(In fact, given $a, b$ supported on $\{1, \dots, N\}$, applying this inequality for large $K$ with $x$ the reversal of $a$ (that is, $x_{K-n} = a_n$, which turns $c_{m-n}$ into $c_{m+n}$) and $\overline{y_m} = b_m$, and taking the limit, we recover

$$\sum_{m,n}\frac{a_m b_n}{m+n} \le \pi\,\|a\|_2\,\|b\|_2,$$

because

$$\operatorname{Im} c_r = \frac{\pi}{K}\cot\frac{\pi r}{K} = \frac{1}{r} + O\!\left(\frac{r}{K^2}\right) \quad\text{for } 1 \le r \ll K.)$$

But we want to get a better constant than $\pi$ for this finite version.

Now we write

$$\sum_{m,n=1}^{N}\frac{a_m b_n}{m+n} = \lim_{K \to \infty}\,\operatorname{Im}\left(\sum_{m,n=1}^{N} c_{m+n}\,a_m b_n\right),$$

where we used the Fourier representation above for $\frac{1}{m+n}$. Thus we need to bound

$$\left|\sum_{m,n=1}^{N} c_{m+n}\,a_m b_n\right|.$$

Applying the bound for the operator norm with the vectors $x$ (the reversal of $a$) and $\overline{y_m} = b_m$, we get

$$\left|\sum_{m,n=1}^{N} c_{m+n}\,a_m b_n\right| \le \pi\,\|a\|_2\,\|b\|_2.$$
So we see that the operator norm of the finite section $H_N = \left(\frac{1}{m+n}\right)_{m,n=1}^{N}$ is at most $\pi$. In fact, it can be shown that the norm is

$$\|H_N\| = \pi - \frac{\pi^5}{2\log^2 N}\left(1 + o(1)\right)$$

(look at Finite Sections of Some Classical Inequalities by Wilf). More generally the eigenvalues are given asymptotically by

$$\lambda_k \approx \pi\,\operatorname{sech}\left(\frac{(k+1)\pi^2}{\log N}\right).$$

When $k = 0$, we have $\lambda_0 \approx \pi\,\operatorname{sech}\left(\frac{\pi^2}{\log N}\right) = \pi - \frac{\pi^5}{2\log^2 N} + O\!\left(\frac{1}{\log^4 N}\right).$