A numerical method for finding the spectrum of QM systems based on the demand for unitarity
Introduction
The wave function \(\Psi\) in quantum mechanics (or, to be more specific, its norm squared) is interpreted as a probability density function. Therefore, the most basic requirement imposed on it is that the total probability it represents is equal to unity (hence the term unitarity). $$ \int\Psi^*\Psi \; dV = 1 \;\;\; (2.1) $$ where the integral is taken over all of space and the star denotes the complex conjugate. Another obvious requirement of any physical theory is that measurable quantities take real (and not complex) values.
In this article we will see how far we can go by using these simple (and dare I say self-evident) facts. Specifically, we are going to see a computational method of solving the Schrodinger equation, called bootstrapping, that is based on the requirement of unitarity. Then we will see how the demand that measured quantities be real leads us to deal with boundary conditions.
The Principles
We will begin by stating the most basic constraints that any solution to quantum mechanical problems must obey to be considered “physical”.
Unitarity and the Inner Product
The set of all possible wave functions lives in a space called Hilbert space. Specifically, since we want our solutions to satisfy (2.1), our Hilbert space is only populated by “square-integrable” functions, i.e.: $$ \int\phi^*\phi \; dV < \infty \;\;\;(2.2) $$ Following the formalism of the last article, we can define the inner product of Hilbert space as $$ <\phi,\psi> = \int\phi^*\psi \; dV \;\;\; (2.3) $$ and so a unitary (normalized) function is defined by $$ ||\phi||^2=<\phi,\phi> =1 \;\;\; (2.4) $$ By definition, for any non-zero function, $$ ||\phi||^2 > 0 \;\;\;(2.5)$$ If instead we had \(||\phi||^2 \leq 0\), we wouldn’t be able to rescale \(\phi\) by a constant (try it) to get (2.1) and our function would not be physical.
Operator norm
For any operator \(\hat{O}\) we define the Hermitian conjugate (adjoint) operator \(\hat{O}^\dagger \) from the relation $$ <\hat{O}\phi,\psi>=<\phi,\hat{O}^\dagger\psi>\;\;\; (2.6) $$ and so we expect $$ ||\hat{O}\phi||^2 = <\hat{O}\phi,\hat{O}\phi> = <\phi,\hat{O}^\dagger \hat{O}\phi>\;\; \geq 0 \;\;\; (2.7) $$ From now on we will denote the average value of an operator \(\hat{O}\) as $$ <\hat{O}> = <\phi,\hat{O}\phi> \;\;\; (2.8) $$ and therefore we expect $$ <\hat{O}^\dagger \hat{O}> \;\; \geq 0 \;\;\; (2.9) $$
Ehrenfest’s Theorem
The most general formulation of Ehrenfest’s theorem is that all of the equations that are true in classical mechanics also apply to the average values of quantum operators. The only difference being that Poisson’s brackets are replaced by the commutator (we assume that \(\hbar = 1\)) $$ \{A,B\} \longrightarrow i[\hat{A},\hat{B}] \;\;\; (2.10) $$ where \([\hat{A},\hat{B}] = \hat{A}\hat{B}-\hat{B}\hat{A}\) is the commutator between two operators. Therefore, the classical equation of time evolution is $$ \frac{dA}{dt} = \{H,A\} \longrightarrow \frac{d<\hat{O}>}{dt} = i<[\hat{H},\hat{O}]> \;\;\; (2.11) $$ where \(\hat{H}\) is the Hamiltonian operator. Here we have told a bit of a lie, but we will come back to correct it later.
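As a quick illustration (my own, using the conventions of this article, where a Hamiltonian of the form \(H=p^2+V(x)\) corresponds to a particle of mass \(1/2\)), the familiar classical equations come out of (2.11) as expected: $$ \frac{d<x>}{dt} = i<[H,x]> = i<[p^2,x]> = 2<p> \;\;,\;\; \frac{d<p>}{dt} = i<[H,p]> = i<[V,p]> = -<V'(x)> $$ which are just the averaged versions of \(\dot{x}=\partial H/\partial p\) and of Newton’s law.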
Bootstrapping
From now on all quantities are operators so we will drop the hats.
The recursive relation
Let’s take the Hamiltonian: $$ H= p^2 + x^2 + gx^4 \;\;\;(3.1) $$ When the system is in an eigenstate of the Hamiltonian with energy \(E_n\), all average values are time-independent and therefore, from (2.11): $$ <[O,H]> = 0 \;\;\;(3.2) $$ This is going to be our “generating” equation from which we will get all other results. For \(O=x^k\) and \(O=x^kp\) we get respectively $$ k(k-1)<x^{k-2}> = -2ik<x^{k-1}p> \;\;\; (3.3) $$ $$ 4k<x^{k-1}p^2>= 4<x^{k+1}> + 8g<x^{k+3}> - k(k-1)(k-2)<x^{k-3}> \;\;\; (3.4)$$ We notice that for \(k=1\) relation (3.4) becomes the Virial Theorem $$ 2<p^2> = 2<x^2> + 4g<x^4> \;\;\; (3.5) $$ and therefore the energy can be written as $$ E = <H> = 2<x^2> + 3g<x^4> \;\;\; (3.6) $$ The last fact we are going to need is that, in eigenstates of the Hamiltonian, \(<\mathcal{O}H> = E<\mathcal{O}> \), and so for \(\mathcal{O}=x^{k-1}\) we get $$ <x^{k-1}p^2>=E<x^{k-1}> - <x^{k+1}> -g<x^{k+3}> \;\;\; (3.7) $$ Combining all of the above and expressing everything in terms of the energy and powers of x, we finally get $$ 4kE<x^{k-1}> + k(k-1)(k-2)<x^{k-3}> -4(k+1)<x^{k+1}> -4g(k+2)<x^{k+3}> = 0 \;\;\; (3.8) $$ Since \( <x^0>=1\) and, from the symmetry of the potential, \(<x^k>=0\) for all odd values of k, the only inputs we need in order to generate every moment \(<x^k>\) are the energy and \(<x^2>\). Notice that \(<x^4>\) can then be evaluated from (3.6) (equivalently, the \(k=1\) case of (3.8)).
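To make the recursion concrete, here is a minimal Python sketch (my own; the function name and interface are not from the article or any library) that generates the even moments from a trial pair \((E, <x^2>)\) using (3.8):

```python
# A minimal sketch (not the article's original code) of recursion (3.8)
# for H = p^2 + x^2 + g x^4.  Assumes g > 0 and a parity-even potential,
# so all odd moments vanish.
def anharmonic_moments(E, x2, g, n_max):
    """Return [<x^0>, <x^1>, ..., <x^n_max>] generated from E and <x^2>."""
    m = [0.0] * (n_max + 1)
    m[0] = 1.0                      # normalization, <x^0> = 1
    if n_max >= 2:
        m[2] = x2
    # (3.8) solved for <x^{k+3}>, applied for odd k so that only even
    # moments are ever needed.
    for k in range(1, n_max - 2, 2):
        lower = m[k - 3] if k >= 3 else 0.0
        m[k + 3] = (4 * k * E * m[k - 1]
                    + k * (k - 1) * (k - 2) * lower
                    - 4 * (k + 1) * m[k + 1]) / (4 * g * (k + 2))
    return m

# Example: the first few moments for a trial pair (E, <x^2>) at g = 1.
print(anharmonic_moments(E=1.4, x2=0.31, g=1.0, n_max=8))
```

The odd moments stay zero by symmetry, and the \(k=1\) step of the loop reproduces (3.6).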
Demanding Positivity
It is a fact of Fourier analysis that we can reconstruct a probability distribution if we know all of its moments (they determine its Fourier transform, the characteristic function). Therefore the question is: is there any way to constrain the possible values of \(E\) and \(<x^2>\)? The answer comes in the form of (2.9). The most general form of the operator \(\mathcal{O}\), assuming it is a function of x, is $$ \mathcal{O}= \sum_{i=0}^{K} c_ix^i \;\;\; (3.9)$$ In reality \(K\rightarrow \infty\), but if we want to calculate anything we must stay at a finite order. The positivity condition (2.9) becomes $$ \sum_{i=0}^{K} \sum_{j=0}^{K} <c_i^* x^i c_j x^j > = \sum_{i=0}^{K} \sum_{j=0}^{K}c_i^*<x^{i+j}>c_j =\sum_{i=0}^{K} \sum_{j=0}^{K} c_i^*\mathcal{M}^{ij}c_j \geq 0 \;\;\; (3.10) $$
This is just matrix multiplication between the \((K+1)\times1\) vector \(c\) and the \((K+1)\times (K+1)\) matrix \(\mathcal{M}^{ij} = <x^{i+j}> \). In fact, since the constants \(c_i\) are arbitrary, (3.10) is the definition of a positive semidefinite matrix. If we want to see whether a certain pair of \(E\) and \(<x^2>\) corresponds to a physical state, we just calculate the first \(2K\) moments using the recursion relation (3.8) and then check whether \(\mathcal{M}\) is positive semidefinite. A simple positivity check (for a computer) is to make sure that all eigenvalues of the matrix are greater than or equal to zero.
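Here is a small sketch of that check (my own code, not from the article): build \(\mathcal{M}^{ij}=<x^{i+j}>\) from a list of moments and test whether its smallest eigenvalue is non-negative. As a sanity check it is fed the ground-state moments of the harmonic oscillator \(H=p^2+x^2\) (a Gaussian with \(<x^2>=1/2\), so \(<x^{2n}>=(2n-1)!!/2^n\)):

```python
import numpy as np

def is_physical(moments, K, tol=1e-10):
    """Positivity test (3.10): is M_ij = <x^{i+j}> positive semidefinite?"""
    M = np.array([[moments[i + j] for j in range(K + 1)]
                  for i in range(K + 1)])
    # M is real and symmetric, so eigvalsh applies; the small tolerance
    # absorbs floating-point round-off.
    return np.linalg.eigvalsh(M).min() >= -tol

# Harmonic-oscillator ground-state moments: <x^{2n}> = (2n-1)!!/2^n.
ho_moments = [1, 0, 1/2, 0, 3/4, 0, 15/8, 0, 105/16]
print(is_physical(ho_moments, K=4))   # expect True: a genuine physical state
```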
The algorithm is now clear: we scan through a range of possible values of our free variables and check the positivity of \(\mathcal{M}\) for each pair to find the physical ones. Since our K is finite, we get a small range of physical values instead of a single one. If we want more precision we increase K to include more moments. I won’t include numerical results here, but it takes about 10 seconds for a computer to compute the right energies with less than 1% error.
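For completeness, here is a rough, self-contained sketch of the whole scan (my own code; the coupling, the grid window, its resolution and the cutoff K are arbitrary choices, and the moment helper repeats the earlier sketch so that the block runs on its own):

```python
import numpy as np

def moments(E, x2, g, n_max):
    """Even moments of H = p^2 + x^2 + g x^4 from recursion (3.8)."""
    m = [0.0] * (n_max + 1)
    m[0], m[2] = 1.0, x2
    for k in range(1, n_max - 2, 2):
        low = m[k - 3] if k >= 3 else 0.0
        m[k + 3] = (4*k*E*m[k-1] + k*(k-1)*(k-2)*low
                    - 4*(k+1)*m[k+1]) / (4*g*(k+2))
    return m

def allowed(E, x2, g, K):
    """True if the moment matrix built from (E, <x^2>) is positive semidefinite."""
    m = moments(E, x2, g, 2 * K)
    M = np.array([[m[i + j] for j in range(K + 1)] for i in range(K + 1)])
    return np.linalg.eigvalsh(M).min() >= -1e-9

g, K = 1.0, 4
hits = [(E, x2)
        for E in np.linspace(1.0, 2.0, 200)
        for x2 in np.linspace(0.2, 0.5, 150)
        if allowed(E, x2, g, K)]
if hits:
    Es = [h[0] for h in hits]
    print(f"{len(hits)} allowed grid points, E in [{min(Es):.3f}, {max(Es):.3f}]")
else:
    print("no allowed points at this resolution; refine the grid or lower K")
```

Increasing K shrinks the allowed island around the true \((E, <x^2>)\) of each eigenstate, which is how the precision quoted above is reached.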
Generalisation and Comments
Following the same process for an arbitrary potential \(V(x)\) we get the equivalent recursion relation $$ k(k-1)(k-2)<x^{k-3}> + 4Ek<x^{k-1}> - 4k<x^{k-1}V> - 2<x^kV'> = 0 \;\;\; (3.11) $$ where \(V'= \frac{dV}{dx}\). What we have found is pretty impressive. Without ever even mentioning Schrodinger’s equation, we have a procedure for finding the energies and eigenstates of any potential. In fact, the only ingredient we used was the positivity of the Hilbert-space inner product, which as we mentioned is almost self-evident. This simple demand not only constrains the possible states of the system but, for high enough K, completely defines them. Nature really tied our hands.
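As a quick consistency check (my own, not in the original text), substituting \(V = x^2+gx^4\) and \(V' = 2x+4gx^3\) into (3.11) gives $$ k(k-1)(k-2)<x^{k-3}> + 4Ek<x^{k-1}> - 4k\left(<x^{k+1}>+g<x^{k+3}>\right) - 2\left(2<x^{k+1}>+4g<x^{k+3}>\right) = 0 $$ which rearranges to $$ 4kE<x^{k-1}> + k(k-1)(k-2)<x^{k-3}> -4(k+1)<x^{k+1}> -4g(k+2)<x^{k+3}> = 0 $$ i.e. exactly (3.8).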
Self-Adjointness and Boundary Conditions
In our previous analysis we didn’t mention anything about boundary conditions and in fact they appeared nowhere in our calculations. This is because, as we said then, we lied when we postulated Ehrenfest’s Theorem (2.11). The correction to this mistake is subtle and has to do with the definition of Hermitian operators and their relation to the boundary conditions. We will now remedy this.
Self-Adjointness and Operator Domains
Any operator \(\mathcal{O}\), if we let it act on every function \(\Psi\) of our Hilbert space, is very likely to map some of them into functions \(\Psi' \) that lie outside the acceptable solutions of our problem. For this reason, every operator is defined on an operator domain \(D(\mathcal{O})\) of Hilbert space. The domain is specified by the boundary conditions obeyed by the wavefunctions that occupy it (\(\psi \in D(\mathcal{O})\)). We can now say that an operator \(A\) is self-adjoint \((A=A^\dagger)\) if it meets the following two conditions:
a) the operator must be symmetric, so \(<\phi,A\psi>=<A\phi,\psi>\)
b) the operator domains must be the same \(D(A)=D(A^\dagger)\)
If an operator, including the Hamiltonian, does not meet the above two conditions, then \(A \neq A^\dagger\) and we need to rethink relations like (2.11). This problem did not appear in our example Hamiltonian (3.1) because the configuration space was the whole real line, and so all operators had the same domain \(D(\mathcal{O}) = \{\psi | \psi(x\rightarrow \pm \infty) = 0 \} \).
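A standard illustration (my own addition, a textbook example rather than part of the original argument) is the momentum operator \(p=-i\frac{d}{dx}\) on the half line with domain \(D(p)=\{\psi \,|\, \psi(0)=0\}\). Integrating by parts, $$ <\phi,p\psi> - <p\phi,\psi> = -i\left[\phi^*\psi\right]_0^{\infty} = i\phi^*(0)\psi(0) $$ which vanishes for every \(\psi\in D(p)\) no matter what value \(\phi(0)\) takes. So condition (a) holds, but \(D(p^\dagger)\) contains functions with arbitrary values at the origin, \(D(p^\dagger) \neq D(p)\), and \(p\) is symmetric without being self-adjoint.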
Modified Ehrenfest
By definition of the Hamiltonian operator $$ i\frac{\partial \psi}{\partial t} = H \psi \;\;\; (4.1) $$ and assuming that the operator \(A\) is time-independent, we write $$ \frac{d<A>}{dt} = <\frac{\partial \psi}{\partial t},A\psi> + <\psi,A\frac{\partial \psi}{\partial t}> = i(<H\psi,A\psi>-<\psi,AH\psi>) \;\;\; (4.2)$$
The first product is well defined, assuming that \(\psi\in D(H) \cap D(A)\), as expected. The second product however is problematic if \(H\psi\not \in D(A)\). If we can algebraically compute the commutator \([A,H]\) on the whole space of functions (that is, not only on the domain where the operator is self-adjoint), then \(AH=[A,H] + HA \) everywhere and the bracket in (4.2) can be written as: $$ <\psi,[H,A]\psi> + <H\psi,A\psi> - <\psi,HA\psi> = <\psi,[H,A]\psi> + <\psi,(H^\dagger -H)A\psi> \;\;\; (4.3) $$ In the second step we see that, since the Hamiltonian is not self-adjoint on this larger space, we have to replace it by \(H^\dagger\) when we move it from one side of the product to the other. The theorem of time evolution is now written as $$ \frac{d<A>}{dt} = i<[H,A]> + i <(H^\dagger -H)A> \;\;\; (4.4) $$
We notice of course that in the case where the Hamiltonian is self-adjoint, \(H = H^\dagger \), (4.4) reduces to (2.11). Our generating equation (3.2) is now $$ <[H,\mathcal{O}]> + <(H^\dagger -H)\mathcal{O}> = 0 \;\;\; (4.5) $$
The “anomalous” term \(\mathcal{A} = <(H^\dagger -H)\mathcal{O}>\) must now be computed separately for every operator \(\mathcal{O}\).
What follows is a worked-out example to illustrate how the commutator and its anomalous term are computed explicitly. If calculations bore you, you can go directly to the conclusion.
An example: the half line
We are now ready to deal with a Hamiltonian that has boundary conditions $$H=p^2 + V(x) \;\;,\;\; x \geq 0 \;\;\; (4.6) $$
For many physical reasons, most problems have boundary conditions on the domain \(D(H)\) of the Robin variety: \(\psi(0)+\alpha\psi’(0)=0\), where \(\alpha\) is a real constant. Explicitly, this means that \(D(H)=\{\psi(x)\,|\,\psi(0)+\alpha\psi’(0)=0,\;x\geq 0\}\).
Let’s begin by looking at the operator \(\mathcal{O}_1=x^k\). As in the bootstrapping section, the commutator with H is: $$ [H,x^k]=[p^2,x^k]=-2ikx^{k-1}p - k(k-1)x^{k-2} \;\;\; (4.7) $$
but now we have the anomalous term, which we will compute by integrating by parts: $$ \mathcal{A}_1 = <(H^\dagger -H)\mathcal{O}_1> = \int_0^{\infty}dx(-\psi''x^k\psi +\psi(x^k\psi)’’) = $$ $$ =-\int_0^{\infty}dx \,\psi''x^k\psi+[\psi(x^k\psi)’-\psi’x^k\psi]_0^{\infty} + \int_0^{\infty}dx\,\psi''x^k\psi\;\;\; $$
The first integral cancels against the last, and since \(\psi\) vanishes at infinity only the boundary term at \(x=0\) survives, so $$ \mathcal{A}_1 = -\lim_{x\rightarrow 0} kx^{k-1}\psi(x)^2 = -\delta_{1,k}\psi(0)^2 \;\;\; (4.8) $$ where \(\delta_{1,k}\) is Kronecker’s delta.
We can already see that the boundary conditions appeared in our construction of the recursive relations. Doing the same for \(\mathcal{O}_2=x^k p\) we find that $$ [H,x^k p]= -2ikx^{k-1}p^2 - k(k-1)x^{k-2}p + ix^k V’(x) \;\;\; (4.9a) $$ $$ \mathcal{A}_2 = i\delta_{1,k}\psi(0)\psi’(0) \;\;\; (4.9b) $$
In the special case \(k=0\) the anomalous term instead becomes $$ \mathcal{A}_{k=0} = -i(\psi’(0))^2 - i\psi(0)^2(E-V(0)) \;\;\; (4.10) $$ where we have used the eigenvalue equation at the origin, \(\psi''(0)=(V(0)-E)\psi(0)\).
Lastly, by using the energy as in (3.7) we get $$ <x^{k-1}p^2>=E<x^{k-1}> - <x^{k-1}V(x)> \;\;\; (4.11) $$
We can finally write down the completed recursion relation for the Robin condition \(\psi(0)+\alpha\psi’(0)=0\) (i.e. \(\psi’(0)=-\frac{\psi(0)}{\alpha}\)) and \(k>0: \) $$ 0 = 2k E<x^{k-1}> + \frac{1}{2}k(k-1)(k-2)<x^{k-3}>-2k<x^{k-1}V> - <x^k V'> $$ $$ + \delta_{k,2} \psi(0)^2+\delta_{k,1}\frac{\psi(0)^2}{\alpha} \;\;\; (4.12) $$
for \(k=0\) $$ 0 = -<V’>+ (\psi'(0))^2 + \psi(0)^2(E-V(0)) \;\;\; (4.13)$$
Combining this relation and the boundary condition we get $$ \psi(0)^2 = \frac{\alpha^2 <V’>}{1+\alpha^2(E-V(0))} \;\;,\;\; \psi’(0)^2 = \frac{<V’>}{1+\alpha^2(E-V(0))} \;\;\; (4.14) $$
And so our final recursive relation is $$0 = 2k E<x^{k-1}> + \frac{1}{2}k(k-1)(k-2)<x^{k-3}>-2k<x^{k-1}V> - <x^k V’>+ $$ $$ \delta_{k,2}\frac{\alpha^2 <V’>}{1+\alpha^2(E-V(0))} +\delta_{k,1}\frac{\alpha<V’>}{1+\alpha^2(E-V(0))} \;\;\; (4.15) $$
for \(k=1,2,…\) and not for \(k=0\) (for k=0 we have (4.13)). From here our algorithm for finding the physical states is identical to that on the full real line or for that matter any other space.
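As a quick numerical sanity check (my own, using SciPy; none of this code is from the article), take \(V(x)=x\) on the half line with the Dirichlet condition \(\psi(0)=0\), i.e. the \(\alpha=0\) limit of the Robin condition. The eigenfunctions are Airy functions, \(\psi_n(x)\propto Ai(x-E_n)\) with \(E_n\) equal to minus the n-th zero of Ai, so the predictions of the recursion — \(<x>=2E/3\) and \(<x^2>=8E^2/15\) from the \(k=1,2\) relations, and \(\psi'(0)^2=<V'>=1\) from (4.13) — can be checked directly:

```python
import numpy as np
from scipy.special import airy, ai_zeros
from scipy.integrate import quad

# Ground state of H = p^2 + x on x >= 0 with psi(0) = 0:
# psi(x) ~ Ai(x - E), where E is minus the first zero of Ai,
# and the normalization integral equals Ai'(-E)^2.
a, ap, ai_vals, aip = ai_zeros(1)
E = -a[0]
norm = aip[0] ** 2

def moment(n):
    """Numerically integrate <x^n> for the normalized Airy ground state."""
    integrand = lambda x: x ** n * airy(x - E)[0] ** 2 / norm
    return quad(integrand, 0, np.inf)[0]

print("E         =", E)
print("<x>       =", moment(1), "  vs 2E/3    =", 2 * E / 3)
print("<x^2>     =", moment(2), "  vs 8E^2/15 =", 8 * E ** 2 / 15)
print("psi'(0)^2 =", airy(-E)[1] ** 2 / norm, "  vs <V'> = 1")
```

The same check works for excited states by taking further zeros of Ai from ai_zeros.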
Conclusion and Sources
Conclusion
The fact that the wave function is unitary and that the operators we use in Quantum Mechanics are self-adjoint is usually only mentioned at the beginning of a course and is then left to the side when one starts dealing with Schrodinger’s equation. I hope you now see that these two pillars of QM hold enough strength to almost uniquely define the solution to any problem. However, this “bootstrapping” method is not limited to solving simple QM problems. There are many theories out there for which we do not have an equivalent master equation such as Schrodinger’s, and so demands of unitarity and other “self-evident” properties are the only tools we have. As we’ve seen, this is more than enough to get our foot in the door.
Sources
-
A good introductory paper on the subject, which also deals with more issues in bootstrapping, is: X. Han, S. A. Hartnoll and J. Kruthoff, Bootstrapping Matrix Quantum Mechanics, Phys. Rev. Lett. 125 (2020) 041601.
-
Boundary conditions and other subtleties of bootstrapping are studied in: D. Berenstein and G. Hulsey, Anomalous Bootstrap on the Half Line, Phys. Rev. D 106 (2022) 045029.
-
The very interesting problem of self-adjoint operators in QM is dealt with at length in: T. Juric, Observables in Quantum Mechanics and the Importance of Self-adjointness, Universe 8 (2022) 129.