This is a sequel about kernels of inner products.
Recall the original problem:
Q1. Consider two inner products $\langle \cdot , \cdot \rangle _1$ and $\langle \cdot , \cdot \rangle _2$. If $\langle x, y\rangle _1 = 0$ iff $\langle x, y\rangle _2 = 0$ for all $x, y$, show that $\langle \cdot , \cdot \rangle _1 = c \langle \cdot , \cdot \rangle _2$ for some $c>0$.
Two extended problems were posted but not clearly solved at the time, so I would like to address them further here.
I have to admit that I was a bit silly to look for help in the realm of algebraic geometry when there are big brothers around: matrix algebra and functional analysis. The tools involved are actually familiar ones, and it turns out the question is not completely new, but rather a generalization of an existing result.
Q2. Consider the matrix form of Q1: suppose $A, B$ are real, symmetric and positive definite matrices. Suppose $x^tAy = 0$ iff $x^tBy = 0$ for all real vectors $x, y$. Prove that $A = cB$ for some $c > 0$.
Last time we mentioned a contrapositive argument that tracks the matrix entries. This is painful but probably workable; there is a more elegant solution using matrix algebra, as follows.
First note that a symmetric positive-definite matrix is invertible and admits a symmetric positive-definite square root, which we denote $B^{1/2}$, together with its inverse (also symmetric) $B^{-1/2}$. We then define $C = B^{-1/2}AB^{-1/2}$. It looks very similar to $B^{-1}A$, and it behaves similarly in the sense that $A = cB$ iff $C = cI$. Splitting $B^{-1}$ this way is not such a rare trick and even made an appearance in my thesis (in the form of constructed operators). The idea is that it lets us swap factors across the inner product.
Given $x,y$, define $u=B^{1/2}x$ and $v=B^{1/2}y$. This is just an invertible change of variables, no big deal since $B^{1/2}$ is invertible. Now suppose $\langle u,Cv \rangle =0$. Expanding gives $\langle u,Cv \rangle =\langle B^{1/2}x, B^{-1/2}Ay \rangle = 0$.
Since the square root is real symmetric, we can move it to the other side and retrieve $\langle x, Ay \rangle =0$, which by hypothesis is equivalent to $\langle x,By \rangle = 0$, the original condition. Putting the square root back, we recover $0 = \langle x, By \rangle = \langle B^{1/2}x, B^{-1/2}By \rangle = \langle u,v \rangle$. That is, $\langle u,v \rangle =0$ iff $\langle u,Cv \rangle =0$. This is a very strong characterization!
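Here is a minimal numerical sketch of the construction (Python with NumPy; the matrices, the helper `sym_sqrt`, and the factor $c = 3$ are arbitrary choices of mine for illustration). It builds the symmetric square root of $B$ by eigendecomposition, forms $C = B^{-1/2}AB^{-1/2}$, and checks the two swap identities above. A numerical check is of course no proof; it just makes the swapping mechanics concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym_sqrt(B):
    """Symmetric positive-definite square root via eigendecomposition."""
    w, Q = np.linalg.eigh(B)            # B = Q diag(w) Q^T with w > 0
    return Q @ np.diag(np.sqrt(w)) @ Q.T

# A symmetric positive-definite pair; here A = 3B, purely for illustration.
M = rng.standard_normal((4, 4))
B = M @ M.T + 4 * np.eye(4)
A = 3.0 * B

B_half = sym_sqrt(B)
B_half_inv = np.linalg.inv(B_half)
C = B_half_inv @ A @ B_half_inv         # C = B^{-1/2} A B^{-1/2}

x, y = rng.standard_normal(4), rng.standard_normal(4)
u, v = B_half @ x, B_half @ y

# The swap identities: <u, Cv> = <x, Ay> and <u, v> = <x, By>.
assert np.isclose(u @ (C @ v), x @ (A @ y))
assert np.isclose(u @ v, x @ (B @ y))

# And A = cB indeed corresponds to C = cI.
assert np.allclose(C, 3.0 * np.eye(4))
```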
Since $C$ is symmetric positive-definite, it has real positive eigenvalues and an orthonormal basis of eigenvectors. It suffices to prove that it has a single eigenvalue of full multiplicity.
Suppose $C$ has eigenpairs $(\lambda_i, e_i)$ where the $(e_i)$ form an orthonormal basis. For $i\neq j$ set $u = e_i+e_j$ and $v = e_i - e_j$; then $\langle u,v \rangle =0$, and therefore $\langle u,Cv \rangle = \lambda_i - \lambda_j =0$. All eigenvalues coincide, so $C = cI$ and hence $A = cB$, which proves the claim.
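To see the eigenvalue argument in action, here is a quick sketch with a hand-picked $C$ that has distinct eigenvalues: the pair $u = e_i + e_j$, $v = e_i - e_j$ is orthogonal yet $\langle u, Cv\rangle \neq 0$, which is exactly the situation the characterization above forbids.

```python
import numpy as np

# A symmetric positive-definite C with distinct eigenvalues (for contrast).
C = np.diag([1.0, 2.0, 5.0])
w, E = np.linalg.eigh(C)          # columns of E are orthonormal eigenvectors

i, j = 0, 2
u, v = E[:, i] + E[:, j], E[:, i] - E[:, j]

print(u @ v)          # 0: u and v are orthogonal
print(u @ (C @ v))    # lambda_i - lambda_j = -4, nonzero, so C = cI fails here
```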
This is another wonderful example demonstrating the power of the matrix square root, and a nice introduction to square roots of operators. When we teach functions acting on matrices (via power series, or more generally the spectral decomposition), we tend to show off fancy functions like the exponential or trigonometric functions. Among these, the square root is a truly underrated tool that does the heavy lifting in problems like this, even before we go into functional analysis.
*
Speaking of functional analysis, do you still remember the next part of the question?
We know positive-definiteness is strong enough to allow such a property. On the other hand, definiteness does seem too strong after all. For example, the statement is certainly true for low-dimensional symmetric matrices, as we mentioned last time: let $A, B \in \mathbb{R}^{2\times 2}$ be symmetric; then $\langle x, Ay \rangle =0$ iff $\langle x, By \rangle =0$ for all $x,y$ implies $A = cB$ for some constant $c$.
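As a sanity check of the 2×2 claim (in its contrapositive form), here is a small sketch with arbitrarily chosen symmetric $A, B$ that are not proportional: since $A$ is symmetric, $\{y : x^TAy = 0\}$ is just the line perpendicular to $Ax$, so a witness pair with $x^TAy = 0$ but $x^TBy \neq 0$ can be written down directly.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])   # symmetric, not a multiple of B
B = np.eye(2)

x = np.array([1.0, 1.0])
p, q = A @ x                 # x^T A = (Ax)^T = (p, q) since A is symmetric
y = np.array([-q, p])        # perpendicular to (p, q), so x^T A y = 0

print(x @ A @ y)             # 0.0
print(x @ B @ y)             # -1.0: the zero sets differ, as the claim predicts
```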
But now what if I say this is also true for all symmetric bilinear forms? Let us recall a very similar result on linear forms:
Lemma 1. If $f,g \in V^*$ are two linear forms, then $\ker f = \ker g$ iff $f = cg$ for some non-zero constant $c$.
The proof is similar to what we did last time. One direction is obvious, so we assume that $\ker f = \ker g$. Consider two linearly independent elements $x,y \in V$ such that $f(x), f(y) \neq 0$; then $g(x), g(y) \neq 0$ as well. (If no such pair exists, i.e. $f = 0$ or $\dim V \leq 1$, the statement is trivial.) We can then write $f(x) = c_xg(x)$ and $f(y) = c_yg(y)$.
Just like in our original proof, we can find $\alpha$ such that $f(x-\alpha y) = 0 = g(x - \alpha y)$; in fact $\alpha = f(x)/f(y)$ upon substitution. From here we calculate
$0 = g(x-\alpha y) = g(x) - (f(x)/f(y))\,g(y)$
$= g(x) - \big(c_xg(x)/(c_yg(y))\big)\,g(y) = g(x) - (c_x/c_y)g(x)$.
Since $g(x)$ is non-zero, we conclude that $c_x = c_y$. The ratio is therefore a single constant $c$, and $f = cg$. $\square$
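A tiny numerical illustration of the lemma and its proof (the functionals below are arbitrary choices, with $f = 2.5\,g$ so that the kernels coincide by construction): the vector $x - \alpha y$ lands in both kernels, and the ratios $c_x$, $c_y$ agree.

```python
import numpy as np

rng = np.random.default_rng(1)

g_vec = rng.standard_normal(5)           # g(x) = <g_vec, x>
f_vec = 2.5 * g_vec                      # f = 2.5 g, so ker f = ker g

f = lambda x: f_vec @ x
g = lambda x: g_vec @ x

x, y = rng.standard_normal(5), rng.standard_normal(5)   # generically f(x), f(y) != 0

alpha = f(x) / f(y)
print(f(x - alpha * y))                  # ~0 by the choice of alpha
print(g(x - alpha * y))                  # ~0 as well, since the kernels coincide
print(f(x) / g(x), f(y) / g(y))          # both ratios equal 2.5
```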
This is just a simplified version of a tiny result in Rudin's book...I should have realized it earlier!
We then notice that our problem is just the bilinear version of the above:
Q3. Consider two symmetric bilinear forms $a, b: V\times V\to \mathbb{R}$. If the kernels of $a$ and $b$ are equal, then $a = cb$ for some non-zero constant $c$.
Traditionally the kernel is only defined for linear maps, but allow me to abuse the term for the zero set $\left\{ (x,y) : a(x,y) = 0 \right\}$ and proceed...
WLOG we assume that the bilinear forms are non-zero. With the linear version above in mind, all we need is to extend it to both slots. Fix $y \in V$ and denote $a(\cdot, y), b(\cdot, y)$ by $f_y, g_y$ respectively; then $f_y, g_y \in V^*$ are functionals with equal kernels. By Lemma 1, whenever $f_y$ (equivalently $g_y$) is non-trivial there exists a non-zero constant $c_y$ such that $f_y = c_yg_y$, and since the bilinear forms are non-zero such $y$ do exist. The aim is to prove that $c_y$ is consistent across $V$.
Case 1: consider the case where the dimension of $\operatorname{span}\left\{ g_y : y\in V \right\}$ is 1.
Since the dimension is 1, the space is spanned by a single functional, say $g$, so we can write $g_y = \beta (y) g$; and since $f_y = c_yg_y$ (with $f_y = 0$ precisely when $g_y = 0$), also $f_y = \alpha (y) g$, for some functions $\alpha, \beta: V\to \mathbb{R}$. It turns out that $\alpha, \beta$ are linear, by linearity of the bilinear forms in $y$.
Since $f_y$ is trivial iff $g_y$ is trivial (we have to be careful about which variable we are talking about: here $y$ varies), we know that $\alpha (y) = 0$ iff $\beta (y) = 0$; this uses $g \neq 0$, which holds or else the bilinear forms would be trivial. In other words $\ker \alpha = \ker \beta$, so Lemma 1 tells us that $\alpha = c\beta$ for some non-zero constant $c$. That is, $f_y = cg_y$ for every $y$, i.e. $a = cb$.
Case 2: consider the case where the dimension of $\operatorname{span}\left\{ g_y \right\}$ is more than 1. Then we consider a basis (hello, axiom of choice!) $\left\{ g_{y_i} \right\}$ of this span and show that $c_{y_i} = c_{y_j}$ for any $i\neq j$ by linearity:
$f_{y_i+y_j} = f_{y_i} + f_{y_j} = c_{y_i}g_{y_i} + c_{y_j}g_{y_j}$
$f_{y_i+y_j} = c_{y_i+y_j}g_{y_i+y_j} = c_{y_i+y_j}(g_{y_i}+g_{y_j})$
Taking the difference gives
$(c_{y_i} - c_{y_i+y_j})g_{y_i} + (c_{y_j}-c_{y_i+y_j})g_{y_j} = 0$,
which gives $c_{y_i} = c_{y_i+y_j} = c_{y_j}$ by linear independence of $g_{y_i}$ and $g_{y_j}$ (which also guarantees $g_{y_i+y_j} = g_{y_i}+g_{y_j} \neq 0$, so $c_{y_i+y_j}$ was well-defined in the first place).
That says $f_{y_i} = cg_{y_i}$ for every basis element, with one common constant $c$. For a general $y$, write $g_y = \sum_i t_ig_{y_i} = g_{\sum_i t_iy_i}$; then $g_{y - \sum_i t_iy_i} = 0$, so by kernel equality $f_{y - \sum_i t_iy_i} = 0$ as well, and $f_y = f_{\sum_i t_iy_i} = \sum_i t_if_{y_i} = c\sum_i t_ig_{y_i} = cg_y$. $\square$
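Here is a small numerical sketch of the mechanism (illustrative matrices only: I represent the forms by matrices, $a(x,y) = x^TAy$, and take $b = a/2$ so that the zero sets coincide by construction). For each $y$, the functionals $f_y$ and $g_y$ are the vectors $Ay$ and $By$, and the ratio $c_y$ comes out as the same constant for every $y$.

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((4, 4))
A = A + A.T                        # a symmetric bilinear form a(x, y) = x^T A y
B = 0.5 * A                        # b = a / 2, so the zero sets coincide

for _ in range(5):
    y = rng.standard_normal(4)
    f_y, g_y = A @ y, B @ y        # the functionals a(., y) and b(., y) as vectors
    # c_y = f_y / g_y entrywise (generically no entry of B @ y vanishes);
    # it should be the same constant 2.0 for every y.
    print(np.unique(np.round(f_y / g_y, 10)))
```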
*
Wait...did we even use symmetry?
Theorem 1. Two bilinear forms share the same kernel iff one is a non-zero constant multiple of the other.
That's it. Perhaps the most that we could possibly hope for. It is just like showing a function is $C^1$ on a domain with $C^1$ boundary...you can't ask for more.
Had we found out that this is not true, there would certainly be a lot more to ask. For example, we could discuss the maximal (why would it exist?) subspace of $Bil(V)$ on which the property holds.
Such a subspace would certainly contain more than just the symmetric bilinear forms; for example, one can show that alternating forms satisfy the property as well. Would the maximal subset be something non-trivial between the subspace of symmetric forms and the whole space of forms? Could we characterize such a space, and would it have any special properties?...
But no, we got the perfect answer.
On the other hand, it doesn't seem like too much of a surprise once we quote the lemma: this is just a natural extension of the linear case. There are also deeper reasons why it should be the case. For one, we can quote a fundamental result in algebraic geometry, the Nullstellensatz: in a simple application, polynomials with equal zero sets generate the same radical ideal. Linear forms give linear polynomials, and bilinear forms are simply quadratic ones. Of course we require the field to be algebraically closed in order to apply the Nullstellensatz (which $\mathbb{R}$ is not), but the spirit is the same.
There is still a lot we can look into from here, especially the geometry in it:
- Kernel equality is an equivalence relation that partitions the form space into subsets of forms with the same zero set. The result says that (apart from the zero form) these classes are exactly the punctured lines $\left\{ cb : c \neq 0 \right\}$, i.e. one-dimensional fibers.
- The classes are not subspaces (since zero is absent), but the quotient can be identified with a projective space. In fact, it is the projective space $\mathbb{P}^{n^2-1}$ when $\dim (V) = n$, since the space of bilinear forms has dimension $n^2$.
- We often define metrics via bilinear (or higher-order) forms. The result provides a rigidity statement about orthogonality: there can be only one metric (up to a constant multiple) realizing a given orthogonality structure. And that brings us to conformal...*cough* I had better not talk about that before exposing myself too much (^.^)
Oh well, there goes our fun little problem, and welcome to 2026. I can't promise anything, but I will try my best to bring up more interesting math discussions from here on!
PS: now with the full proof in mind, can you prove the same statement in matrix-algebra language? Is it possible to write that proof without simply translating the one above (i.e. the use of null spaces, etc.)? If that's too hard, can we do it for some special cases, like when $A$ is invertible or symmetric?
