The hot spots conjecture

The hot spots conjecture can be expressed in simple English as:

Suppose a flat piece of metal, represented by a two-dimensional bounded connected domain, is given an initial heat distribution which then flows throughout the metal. Assuming the metal is insulated (i.e. no heat escapes from the piece of metal), then given enough time, the hottest point on the metal will lie on its boundary.

In mathematical terms, we consider a two-dimensional bounded connected domain D and let u(x,t) (the heat at point x at time t) satisfy the heat equation with Neumann boundary conditions. We then conjecture that

For sufficiently large t > 0, u(x,t) achieves its maximum on the boundary of D

This conjecture has been proven for some domains and proven to be false for others. In particular it has been proven to be true for obtuse and right triangles, but the case of an acute triangle remains open. The proposal is that we prove the Hot Spots conjecture for acute triangles!

Note: strictly speaking, the conjecture is only believed to hold for generic solutions to the heat equation. As such, the conjecture is then equivalent to the assertion that the generic eigenfunctions of the second eigenvalue of the Neumann Laplacian attain their maximum on the boundary.

A stronger version of the conjecture asserts that

  1. For all non-equilateral acute triangles, the second Neumann eigenvalue is simple; and
  2. The second Neumann eigenfunction attains its extrema only at the boundary of the triangle.

(In fact, it appears numerically that for acute triangles, the second eigenfunction only attains its maximum on the vertices of the longest side.)

Threads

Basic concepts

Possible approaches

Combinatorial approach

Sturm comparison approach

How about approximating the eigenfunction by polynomials? The Neumann boundary conditions already imply that the corners are critical points. Perhaps starting with that observation, for sufficiently low degree, the geometry of the triangle implies that one of them must be a max/min. In an ideal world, a 2d version of the Sturm comparison theorem (if one exists) could then show that this feature of the polynomial approximation remains true for the actual eigenfunction.

In my understanding, Sturm-type results allow you to establish some qualitative property for solutions of an ODE (a 1d elliptic problem) provided that the desired property holds for a nearby solution of a nearby ODE. The property usually considered is the presence of a zero (node) of an eigenfunction in some interval. I think a similar result holds for critical points of eigenfunctions.

Say you could establish that the second eigenfunction of the Laplacian on a right angled triangle satisfies the desired maximum principle by showing that its only critical points are at the vertices (so that whatever the maximum is, it would have to be at one of these points). Then appealing to a comparison theorem might be able to show that the same property holds for almost-right acute triangles. Mind you, this is all speculation at this point…

Ah, so maybe a Sturm-type result can make rigorous the statement: “the second eigenfunction depends continuously on the domain”? By physical considerations, this statement ought to be true.

But then it seems this would only work to prove the conjecture for triangles close to right triangles?

As a side note, if such a continuity statement were to hold, then the hot spots conjecture for acute triangles must be true by the following *non-math* proof: If it weren’t true, then by continuity there would be an open set of angles where it failed to hold. But after simulating many triplets of angles, the conjecture always holds. C’mon, what are the odds we missed that open set? :-)

It appears that Sturm’s classical work has far reaching generalizations, as described for instance in this monograph: Kurt Kreith, Oscillation Theory LNM-324, (Springer, 1973). In particular, Chapter 3 features some comparison theorems for solutions to elliptic equations.

Subdivision

Maybe ideas based on self-similarity of the whole triangle to its 4 pieces can help (i.e. modeling the whole triangle as its scaled copies + the heat contact). Then without going into the graph approximation (which looks fruitful anyway), one can see some properties.

One advantage in the graph case is that after dividing and dividing the triangles you get to the graph G_1 which is simply a tree of four nodes and there I think the theorem shouldn’t be too hard to prove. And from there there might be an argument by induction. In the continuous case, no matter how many times you subdivide the triangle, after zooming it it is still the same triangle.

On the other hand, in the continuous case, each of the sub-triangles is truly the same as the larger triangle. Does anyone know of good examples where self-similarity techniques are used to solve a problem?

In either case, there is the issue that while the biggest triangle has Neumann boundaries, the interior subtriangles have non-Neumann boundaries…

Something that is true and uses that the whole triangle is the union of the 4 congruent pieces is the following. From each eigenfunction of the triangle with eigenvalue lambda we can build another eigenfunction with eigenvalue 4 lambda by scaling the triangle to half length, and then using even reflection through the edges. Note that this eigenfunction will have an interior maximum since every point on the boundary has a corresponding point inside (on the boundary of the triangle in the middle). I’m skeptical this observation could be of any use.

Hmm… but, apart from the case of an equilateral triangle, when you divide a triangle into four sub-triangles, I don’t believe that the sub-triangles are reflections of one another.

Brownian motion

Suppose I start off a Brownian motion at time 0 at some point [math]\displaystyle{ x }[/math] in the triangle, and the Brownian motion reflects off the sides of the triangle. At time [math]\displaystyle{ t }[/math] the probability density for the Brownian motion to be at location [math]\displaystyle{ y }[/math] is given by the solution [math]\displaystyle{ v(t,y) }[/math] of the heat equation with [math]\displaystyle{ v(0,y)=\delta(x-y) }[/math] and Neumann boundary conditions. So if I can show that the Brownian motion is eventually more likely to be in a corner (per unit area) than anywhere else, that will give us the result.

Here's a way to approach it: Suppose I fix [math]\displaystyle{ x }[/math]. I ask how many ways there are for Brownian motion to get from [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math] in time [math]\displaystyle{ t }[/math]. For [math]\displaystyle{ t }[/math] large, if [math]\displaystyle{ y }[/math] is in a corner I might imagine that there are more ways for Brownian motion to get to [math]\displaystyle{ y }[/math] than otherwise, because the Brownian motion has all the ways it would in free space, but also a ton of new ways that involve bouncing off the walls. Perhaps one can use some argument to show that there are more paths into some corners than anywhere else in the triangle. (Perhaps using the strong Markov property and coupling?)

This approach is also discussed at http://www.math.missouri.edu/~evanslc/Polymath/Polymath.pdf, and a similar approach was pursued in [BB1999]. Important tools in this method are the synchronous coupling and mirror coupling of two Brownian motions.
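
As a toy numerical illustration of this heuristic (a sketch of my own, not the coupling argument of [BB1999]): for the right isosceles reference triangle the reflected Brownian motion can be obtained exactly by folding free planar Brownian motion, and counting endpoints near each vertex shows the largest deviations from uniform density at the two ends of the hypotenuse (an excess near the vertex on the same side of the nodal line as the starting point, a deficit near the opposite one), which is exactly where the second eigenfunction attains its extrema. The starting point, final time and sample size below are arbitrary choices.

  % Reflected Brownian motion in the right isosceles triangle with vertices
  % (0,0), (1,0), (0,1): for this special triangle it can be obtained exactly
  % by folding free planar Brownian motion, first into the unit square and
  % then across the hypotenuse x+y=1.
  rng(0);
  Npaths = 1e6; Tend = 0.2;                    % time chosen so that the second
                                               % mode still contributes visibly
  X = 0.3 + sqrt(Tend)*randn(Npaths,1);        % endpoints of free Brownian
  Y = 0.4 + sqrt(Tend)*randn(Npaths,1);        % motion started at (0.3,0.4)
  fold = @(t) 1 - abs(mod(t,2) - 1);           % Neumann folding into [0,1]
  x = fold(X); y = fold(Y);                    % reflected BM in the unit square
  over = x + y > 1;                            % fold across the hypotenuse
  tmp = x(over); x(over) = 1 - y(over); y(over) = 1 - tmp;
  % empirical density (per unit area) near each vertex and near the centroid,
  % to be compared with the uniform value 1/area = 2
  spots  = [0 0; 1 0; 0 1; 1/3 1/3];
  angles = [pi/2, pi/4, pi/4, 2*pi];           % opening angle of the radius-0.1
  for j = 1:4                                  % disk intersected with the triangle
      near = (x - spots(j,1)).^2 + (y - spots(j,2)).^2 < 0.1^2;
      dens = mean(near) / (0.5*angles(j)*0.1^2);
      fprintf('density near (%.2f,%.2f): %.2f\n', spots(j,1), spots(j,2), dens);
  end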

Computational strategies

Consider the region in the plane [math]\displaystyle{ \mathcal{R} :=\{ (\alpha, \beta) \vert \frac{\pi}{2}\leq \alpha+\beta \lt \pi, 0\lt \alpha, \beta \leq \frac{\pi}{2} \} }[/math]. Acute triangles will be characterized by angles [math]\displaystyle{ (\alpha, \beta, \pi-\alpha -\beta) }[/math], where [math]\displaystyle{ (\alpha, \beta) \in \mathcal{R}. }[/math]

So we can identify acute triangles with points in [math]\displaystyle{ \mathcal{R} }[/math] and ask: for each such triangle, where do the extrema of the 2nd Neumann eigenfunction of the Laplacian go?

Note that the line [math]\displaystyle{ \alpha = \beta }[/math] in [math]\displaystyle{ \mathcal{R} }[/math] gives us the isosceles triangles. Triangles corresponding to [math]\displaystyle{ (\alpha, \beta) }[/math] will have the same (up to similarity) eigenfunction as [math]\displaystyle{ (\beta, \alpha) }[/math], so we only need to examine triangles corresponding to the region

[math]\displaystyle{ \mathcal{R}_1 :=\{ (\alpha, \beta) \vert \frac{\pi}{2}\leq \alpha+\beta \lt \pi, 0\lt \alpha\leq \beta \leq \frac{\pi}{2} \} }[/math]

The points [math]\displaystyle{ (\frac{\pi-\epsilon}{2}, \frac{\pi-\epsilon}{2}) }[/math] for [math]\displaystyle{ \epsilon\gt 0 }[/math] small have been examined in the section on nearly degenerate triangles. The point [math]\displaystyle{ (\frac{\pi}{3},\frac{\pi}{3}) }[/math] has been examined in the section on equilateral triangles.

If we can numerically demonstrate for a dense subset [math]\displaystyle{ \mathcal{J} }[/math] of [math]\displaystyle{ \mathcal{R}_1 }[/math] that the conjecture holds, then the suggestion is to use continuity arguments to conclude the result for all triangles in the region. More precisely, if on [math]\displaystyle{ \mathcal{J} }[/math] the extrema of the eigenfunctions are well away from other stationary points, and if the second derivatives are bounded away from zero, then the location of the extrema cannot shift dramatically as the angles in the triangle shift.

For each point in [math]\displaystyle{ \mathcal{J}, }[/math] we would construct an approximation [math]\displaystyle{ u_J }[/math] to the desired eigenfunction to high accuracy, and locate the extrema.

If one is using a Ritz strategy, these calculations can be made more efficient. Note that any acute triangle can be mapped to a right-angled one. The integrals involved need only be computed on the right triangle, with the pullbacks carefully accounted for.

Reformulation on reference (right) triangle

http://www.math.sfu.ca/~nigam/polymath-figures/transform.jpg

Let [math]\displaystyle{ \Omega,\hat{\Omega} }[/math] be the acute triangle and the reference triangle, respectively. The acute triangle has corners [math]\displaystyle{ (0,0), (a_1,0), (a_2,b_2) }[/math], and the reference triangle has corners [math]\displaystyle{ (0,0), (1,0), (0,1) }[/math].

Let [math]\displaystyle{ F:\hat{\Omega} \rightarrow \Omega }[/math] be the affine mapping [math]\displaystyle{ F(\hat{\mathbf{x}}) = B \hat{\mathbf{x}} = \mathbf{x}. }[/math] The matrix [math]\displaystyle{ B }[/math] is given by [math]\displaystyle{ \left(\begin{array}{cc} a_1 & a_2\\ 0 & b_2\end{array} \right). }[/math] Then if [math]\displaystyle{ p }[/math] is a scalar function on [math]\displaystyle{ \Omega }[/math], the corresponding function [math]\displaystyle{ \hat{p} }[/math] on the reference triangle [math]\displaystyle{ \hat{\Omega} }[/math] is given by

[math]\displaystyle{ \hat{p} = p\circ F, \,\mbox{and} \, (B^{-1})^T \hat{\nabla}\hat{p} = (\nabla p) \circ F }[/math]

The problem of locating the second eigenpair of the Neumann Laplacian on [math]\displaystyle{ \Omega }[/math] has two formulations, and a corresponding reformulation on the reference triangle.

Formulation 1: Find [math]\displaystyle{ (u,\lambda) \in H^1(\Omega) \times \mathbb{R}^+ }[/math] so that

[math]\displaystyle{ -\Delta u = \lambda u\,\mbox{in}\,\Omega,\,\, \frac{\partial u}{\partial n} =0\,\,\mbox{on}\,\partial\Omega,\, \int_\Omega u dV =0. }[/math]

Formulation 1 on reference triangle: Find [math]\displaystyle{ (\hat{u}, \lambda) }[/math] so that [math]\displaystyle{ -\nabla \cdot (M \nabla \hat{u}) = \lambda \hat{u}\,\,\,\mbox{in}\,\hat{\Omega}, M\nabla \hat{u} \cdot \hat{n} =0\,\,\mbox{on}\,\partial\hat{\Omega},\, \int_{\hat{\Omega}} \hat{u} (det(B)) dV = a_1b_2\int_{\hat{\Omega}} \hat{u} dV= 0. }[/math] Here the matrix [math]\displaystyle{ M = B^{-1} (B^{-1})^T = \frac{1}{a_1^2b_2^2} \left(\begin{array}{cc} a_2^2+b_2^2 & -a_1a_2\\ -a_1a_2 & a_1^2\end{array} \right) }[/math]

Formulation 2: Find the minimizer of the functional [math]\displaystyle{ \mathcal{I}_\Omega := \frac{ \int_\Omega |\nabla u|^2 dV}{ \int_\Omega | u|^2 dV} }[/math] over the set of non-zero functions in [math]\displaystyle{ H^1(\Omega) }[/math] with zero mean [math]\displaystyle{ \int_\Omega u dV =0. }[/math]

Formulation 2 on reference triangle: Find the minimizer of the functional [math]\displaystyle{ \mathcal{I}_{\hat{\Omega}} := \frac{ \int_{\hat{\Omega}} \hat{\nabla} \hat{u}^T M \hat{\nabla} \hat{u} (det(B))dV}{ \int_{\hat{\Omega}} | \hat{u}|^2 (det B) dV} = \frac{ \int_{\hat{\Omega}} \hat{\nabla} \hat{u}^T M \hat{\nabla} \hat{u} dV}{ \int_{\hat{\Omega}} | \hat{u}|^2 dV} }[/math] over the set of non-zero functions with zero mean [math]\displaystyle{ \int_{\hat{\Omega}} \hat{u} (det B) dV =0 \Rightarrow \int_{\hat{\Omega}} \hat{u} dV=0 . }[/math]
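
As a throwaway sanity check on the algebra (my own sketch; the values of [math]\displaystyle{ a_1, a_2, b_2 }[/math] are an arbitrary acute example), one can compare [math]\displaystyle{ B^{-1}(B^{-1})^T }[/math] with the explicit formula for [math]\displaystyle{ M }[/math] above:

  a1 = 1; a2 = 0.35; b2 = 0.9;             % an arbitrary acute triangle
  B  = [a1 a2; 0 b2];
  M  = inv(B) * inv(B)';                   % B^{-1} (B^{-1})^T
  Mexplicit = [a2^2 + b2^2, -a1*a2; -a1*a2, a1^2] / (a1^2*b2^2);
  disp(norm(M - Mexplicit))                % ~ 1e-16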

Discrete approximation of 2nd eigenfunctions

We're interested in numerical approximation of the 2nd Neumann eigenfunctions for triangles characterized by angles in the domain BDH in http://polymathprojects.files.wordpress.com/2012/06/triangle21.png Each point in this domain will require one eigenvalue calculation as follows.

Physical domain specification

To fix notation, we're considering the domain given in http://www.math.sfu.ca/~nigam/polymath-figures/sine.jpg We've fixed one corner at the origin (Corner 1), another at (1,0) (Corner 2), and specified the interior angles [math]\displaystyle{ \alpha,\beta,\gamma }[/math]. We first need to calculate [math]\displaystyle{ (a_2,b_2), }[/math] the coordinates of the third corner. This is done using the law of sines (which gives the side lengths [math]\displaystyle{ A, B }[/math]) and the law of cosines: [math]\displaystyle{ A=\frac{\sin(\alpha)}{\sin(\gamma)}, B= \frac{\sin(\beta)}{\sin(\gamma)}, \Rightarrow a_2=\frac{1}{2}(1- (\frac{\sin(\alpha)}{\sin(\gamma)})^2 + (\frac{\sin(\beta)}{\sin(\gamma)})^2), b_2=a_2 \tan(\alpha) }[/math]

The area of this triangle is [math]\displaystyle{ b_2/2 }[/math]. Call this triangle [math]\displaystyle{ \Omega }[/math] as above.
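
In code form, the computation of the third corner might look as follows (a sketch of my own; the angles are an arbitrary acute example, in radians):

  alpha = 1.1; beta = 0.9; gamma = pi - alpha - beta;   % interior angles
  A  = sin(alpha)/sin(gamma);   % length of the side opposite alpha (law of sines)
  B  = sin(beta)/sin(gamma);    % length of the side opposite beta
  a2 = (1 - A^2 + B^2)/2;       % law of cosines at Corner 1
  b2 = a2*tan(alpha);           % third corner is (a2, b2)
  area = b2/2;                  % area of the triangle with unit base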

Approximation algorithm specification

Step 0. We pick a point in parameter space, [math]\displaystyle{ (\alpha, \beta, \gamma) }[/math] and locate the associated physical domain, [math]\displaystyle{ \Omega. }[/math] We want to solve the eigenvalue problem on [math]\displaystyle{ H^1(\Omega). }[/math]

Step 1. We first discretize the physical domain [math]\displaystyle{ \Omega }[/math] by a triangular mesh [math]\displaystyle{ \mathcal{T}_h = \cup_{i=1}^N \tau_i }[/math] where the individual triangles [math]\displaystyle{ \tau_i }[/math] have sizes parametrized by a parameter [math]\displaystyle{ h\gt 0 }[/math], where [math]\displaystyle{ h \rightarrow 0 }[/math] as [math]\displaystyle{ N }[/math] increases.

Step 2. We now introduce a finite dimensional subspace [math]\displaystyle{ V_h \subset H^1(\Omega), V_h=span\{\phi_i\}_{i=1}^M }[/math] consisting of finite element basis functions [math]\displaystyle{ \phi_i }[/math] on [math]\displaystyle{ \mathcal{T}_h. }[/math] We have used both deg = 1 and deg = 2 finite elements, where [math]\displaystyle{ V_h:=\{ w_h \in H^1(\Omega) \vert w_h \vert_{\tau_i}= \mbox{polynomial of degree}\, \leq deg ,\mbox{for each triangle} \,\, \tau_i \in \mathcal{T}_h\} }[/math]

Step 3. We want to solve the discrete analog of Formulation 1: find [math]\displaystyle{ v_h =\sum_{i=1}^M c_i \phi_i \in V_h, \lambda_h \in \mathbb{R}^+ }[/math] to solve the generalized eigenvalue problem [math]\displaystyle{ A_h C = \lambda_h B_h C, \qquad \int_\Omega v_h =0 }[/math] where [math]\displaystyle{ C=(c_1,c_2,\ldots,c_M)^T, (A_h)_{ij} = \int_\Omega\nabla \phi_i \cdot \nabla \phi_j, (B_h)_{ij} = \int_\Omega \phi_i \phi_j }[/math]

The triangulation of each domain is specified using Matlab. The quadratures are performed in Matlab, using over 200 points on each triangle [math]\displaystyle{ \tau_i. }[/math]

The generalized discrete eigenvalue problem is solved using Matlab's eigs command, which in turn calls the ARPACK Arnoldi iteration with shift [math]\displaystyle{ \sigma=11 }[/math]. The iteration runs until a prescribed tolerance of 1e-8 is reached.

Step 4. We repeat Steps 1-3 with smaller and smaller h, until we have achieved a tolerance for the desired eigenvalue calculation on this triangle. At the finest level, we use a mesh with [math]\displaystyle{ h\lt 1/4500. }[/math]

Step 5. We repeat Steps 1-4 for a variety of points in parameter space.
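
For concreteness, here is a minimal self-contained version of Steps 1-3 (a sketch of my own, using piecewise linear elements, a Delaunay mesh of a structured point set, exact element matrices, and a negative eigenvalue shift; it is not the code, quadrature or shift [math]\displaystyle{ \sigma=11 }[/math] used for the computations reported in this section):

  % Second Neumann eigenpair of the triangle with angles (alpha, beta, gamma),
  % one vertex at (0,0) and one at (1,0), using piecewise linear (P1) FEM.
  alpha = 1.1; beta = 0.9; gamma = pi - alpha - beta;      % example angles
  A  = sin(alpha)/sin(gamma);  B = sin(beta)/sin(gamma);
  a2 = (1 - A^2 + B^2)/2;      b2 = a2*tan(alpha);         % third corner

  n = 60;                                                  % mesh resolution
  [s,t] = meshgrid(linspace(0,1,n+1));                     % reference-triangle nodes
  keep  = s(:) + t(:) <= 1 + 1e-12;
  p   = [s(keep), t(keep)] * [1 0; a2 b2];                 % map nodes to Omega
  tri = delaunay(p(:,1), p(:,2));                          % Step 1: triangulation

  N = size(p,1); Ah = sparse(N,N); Bh = sparse(N,N);       % Steps 2-3: assembly
  for k = 1:size(tri,1)
      v  = tri(k,:);  P = p(v,:);
      Ar = abs(det([P(2,:)-P(1,:); P(3,:)-P(1,:)]))/2;     % element area
      G  = [P(2,2)-P(3,2), P(3,1)-P(2,1);                  % gradients of the three
            P(3,2)-P(1,2), P(1,1)-P(3,1);                  % hat functions
            P(1,2)-P(2,2), P(2,1)-P(1,1)] / (2*Ar);
      Ah(v,v) = Ah(v,v) + Ar*(G*G');                       % exact stiffness matrix
      Bh(v,v) = Bh(v,v) + (Ar/12)*(ones(3) + eye(3));      % exact mass matrix
  end

  [V,D]   = eigs(Ah, Bh, 3, -1);        % 3 eigenvalues nearest -1, i.e. the 3
  [lam,i] = sort(diag(D));              % smallest: ~0 (constants), lambda_2, lambda_3
  lambda2 = lam(2)
  u2 = V(:, i(2));
  [~, imax] = max(abs(u2));  p(imax,:)  % location of the extremum of |u_2|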

Results are reported in the 'Numerics' section below.

Special cases

Isosceles triangles

Proposition 1 Let ABC be an acute isosceles triangle with AB=AC, and let u be a second eigenfunction which is symmetric with respect to reflection around the axis AD of symmetry (D being the midpoint of BC). Then the extrema of u only occur at the vertices A,B,C.

Proof Being symmetric, u is also a Neumann eigenfunction of the right-angled triangle ADB of mean zero. It must be the second eigenfunction of that triangle, since any eigenfunction of lower energy can be reflected to give a lower energy eigenfunction of ABC. By Theorem 1 of [AB2004], the second eigenvalue of this triangle ADB (which is a lip domain) is simple. By the argument used to prove Theorem 3.3 of [BB2000], after a change of sign, the eigenfunction u on ADB is non-decreasing in the AD and DB directions. As such, it cannot attain an extremum anywhere on ADB except at A and B (and hence, by symmetry, u cannot attain an extremum on ABC except at the vertices) unless it is constant and extremal either on a segment of AD touching A, or a segment of BD touching B. But either case will ensure that u is zero (this can be seen for instance by Bessel function expansion at the vertex where this constancy occurs; one can presumably also use standard unique continuation results). This proves the claim. []

Note from Proposition 2.3 of [BB2000] that if we are in the thin case [math]\displaystyle{ AB/BC \gt 2j_0/\pi \approx 1.53096 }[/math], where j_0 is the first zero of the Bessel function J_0, then all second eigenfunctions are symmetric (because there are no anti-symmetric eigenfunctions, and one can decompose any eigenspace into symmetric and antisymmetric functions when there is an axis of symmetry such as AD). This gives the conjecture for isosceles triangles with apex angle less than about 38 degrees (more precisely, the apex angle is [math]\displaystyle{ 2\arcsin\frac{BC}{2AB} \lt 2\arcsin\frac{\pi}{4 j_0} \approx 38.1^\circ }[/math]). This was strengthened in [LS2009] to show that any subequilateral triangle (apex angle less than 60 degrees) has a symmetric second eigenfunction, which is simple by Theorem 1 of [AB2004] (or by observing that the nodal line, being symmetric, cannot pass through A). We thus have:

Corollary 2 The hot spots conjecture is valid for isosceles triangles ABC with AB=AC and [math]\displaystyle{ \angle BAC \leq \pi/3= 60^\circ }[/math].

For super-equilateral triangles, it is a result of [LS2009] that the second eigenfunctions are anti-symmetric, and can thus be viewed as the lowest-energy Dirichlet-Neumann eigenfunctions on ADB with Dirichlet boundary conditions on AD and Neumann conditions on AB, BD. Since lowest-energy Dirichlet-Neumann eigenfunctions must have only one sign (either from the Courant nodal theorem or by minimising the Rayleigh-Ritz quotient), this implies that they are simple (otherwise one could take a linear combination of two lowest-energy eigenfunctions to obtain a signed eigenfunction).

Now consider the following synchronously coupled Brownian motion on ADB. Start with two points x, y in the interior of ADB. We then define coupled Brownian motions [math]\displaystyle{ X_t, Y_t }[/math] initialised as [math]\displaystyle{ X_0=x, Y_0=y }[/math] and solving the Brownian motion equation

[math]\displaystyle{ dX_t = dB_t; dY_t = dB_t }[/math]

with the same Brownian forcing term [math]\displaystyle{ dB_t }[/math], so long as one is inside the triangle ADB, but with reflection whenever one hits the Neumann walls AB and BD and with absorption (i.e. the motion terminates) whenever one hits the Dirichlet wall AD. (The corners are almost surely avoided, so one does not need to define what happens there.) Formal details of this construction can be found for instance in [BB2000] or [AB2004]. (Alternatively, one can take parallel Brownian motions on the whole plane starting from x,y, and fold them over into the triangle ADB, cutting them off when they hit AD.)

Let n be the outward normal to AB. Let us say that y is to the right of x if either y=x, or the ray from x to y is oriented between the normal n and the ray from A to D.

Lemma 3 If y is to the right of x, then [math]\displaystyle{ Y_t }[/math] is to the right of [math]\displaystyle{ X_t }[/math] as long as both paths survive. (In particular, [math]\displaystyle{ X_t }[/math] will hit the absorbing boundary AD first.)

Proof (This is the argument from [BB2000], expressed somewhat informally; some more details are here.) As long as [math]\displaystyle{ X_t, Y_t }[/math] stay away from the boundary of the triangle, they move in parallel, and the claim is clearly preserved. If one of [math]\displaystyle{ X_t,Y_t }[/math] hits the absorbing boundary AD, there is nothing left to prove. If [math]\displaystyle{ X_t,Y_t }[/math] ever collide, then they will agree for all subsequent times and we are again done. As [math]\displaystyle{ Y_t }[/math] is to the right of [math]\displaystyle{ X_t }[/math], [math]\displaystyle{ X_t }[/math] cannot hit AB or DB until [math]\displaystyle{ Y_t }[/math] does - but when that happens, the ray joining [math]\displaystyle{ X_t }[/math] to [math]\displaystyle{ Y_t }[/math] moves either towards the ray DB or the ray AB, and either such operation preserves the property of [math]\displaystyle{ Y_t }[/math] being to the right of [math]\displaystyle{ X_t }[/math]. []

Corollary 4 If we consider any solution u to the heat equation on ADB with the Dirichlet-Neumann data which is non-decreasing in the directions n and AD, then it will remain non-decreasing in these directions for all time.

Proof This is immediate from Lemma 3 and the Brownian motion interpretation of the heat equation. But one can also establish it via a vector-valued maximum principle argument, as follows. At the initial time zero, we see from the hypothesis that u is non-negative, and thus non-negative for all future times by the scalar maximum principle. Also, at time zero, the gradient [math]\displaystyle{ \nabla u(0,x) }[/math] lies in the sector S through the origin with axes oriented in the directions DB, AB. The claim is then that the gradient stays in S for all time. For technical reasons (which are the same technical reasons that come up every time one tries to prove a maximum principle) it is convenient to first show that for any [math]\displaystyle{ \varepsilon\gt 0 }[/math], the gradient stays in the [math]\displaystyle{ \varepsilon(t+1) }[/math]-neighborhood [math]\displaystyle{ S_{\varepsilon(t+1)} }[/math] of S for all time [math]\displaystyle{ t \geq 0 }[/math], since the claim then follows by sending [math]\displaystyle{ \varepsilon }[/math] to zero. If the claim failed, then there must be a first time t and a point x in ADB where [math]\displaystyle{ \nabla u(t,x) }[/math] first touches the boundary of [math]\displaystyle{ S_{\varepsilon(t+1)} }[/math] at some value v. (It should be possible to show that u is C^1 in space continuously in time, which is enough regularity to justify the above claim.) This time cannot be 0, since the boundary stays away from S. The point x cannot be on the Dirichlet boundary AD, since at that point the gradient must lie on the ray from the origin oriented in the DB direction (because of the non-negativity of u), and thus lies in S and cannot equal v. If x is in the interior of ADB, then from construction we see that [math]\displaystyle{ \partial_t \nabla u(t,x) }[/math] points strictly outwards from v in [math]\displaystyle{ S_{\varepsilon(t+1)} }[/math], but that [math]\displaystyle{ \Delta \nabla u(t,x) }[/math] points inwards or tangentially, which is a contradiction since the heat equation implies that [math]\displaystyle{ \partial_t \nabla u= \Delta\nabla u }[/math]. If x is the corner D, then [math]\displaystyle{ \nabla u = 0 }[/math], a contradiction. If x lies on the interior of DB, then [math]\displaystyle{ v=\nabla u(t,x) }[/math] must be parallel to DB while remaining on the boundary [math]\displaystyle{ \partial S_{\varepsilon(t+1)} }[/math], and v is thus the point of [math]\displaystyle{ \partial S_{\varepsilon(t+1)} }[/math] that is distance [math]\displaystyle{ \varepsilon(t+1) }[/math] from the origin in the BD direction. One can then argue as before (after performing a reflection if necessary) to show that [math]\displaystyle{ \Delta \nabla u(t,x) }[/math] points inwards or tangentially while [math]\displaystyle{ \partial_t \nabla u(t,x) }[/math] points outwards, again a contradiction. Similarly if x is on the interior of AB. This covers all cases, and the claim follows. []


Picking generic data obeying the hypotheses of Corollary 4 to isolate the lowest energy eigenfunction, we conclude that (up to sign) the lowest energy eigenfunction is also non-decreasing in the directions n, AD, and thus obtains its non-zero extremum at the vertex B, and vanishes only at the Dirichlet boundary AD, giving the hot spots conjecture for super-equilateral triangles.

Equilateral triangles

It is convenient to work in the plane [math]\displaystyle{ \{ (x,y,z): x+y+z=0\} }[/math], and with the triangle with vertices (0,0,0), (1,0,-1), (1,-1,0). A function on this triangle with Neumann data can be extended symmetrically to the entire plane, in such a way that it becomes periodic with respect to translation by (2,-1,-1), (-1,2,-1), and (-1,-1,2). In particular, it has a Fourier series arising from functions of the form [math]\displaystyle{ \exp(2 \pi i (ax+by+cz)/3 ) }[/math] for integers a,b,c summing to 0; also, the Fourier coefficients need to be symmetric with respect to the reflections [math]\displaystyle{ (a,b,c) \mapsto (-a,-c,-b), (-c,-b,-a), (-b,-a,-c) }[/math]. To minimise the Rayleigh quotient, we see that (a,b,c) should only live among the six frequencies (1,0,-1), (1,-1,0), (0,1,-1), (-1,1,0), (0,-1,1), (-1,0,1), leading to the complex solution

[math]\displaystyle{ \exp( 2\pi i (y-z)/3 ) + \exp( 2\pi i (z-x)/3 ) + \exp( 2\pi i (x-y)/3 ) }[/math]

and its complex conjugate as basis vectors for the two-dimensional eigenspace. (See [McC2011], [McC2002] for more discussion.) All real second eigenfunctions are projections of this complex eigenfunction.

With this normalization, the second eigenvalue is [math]\displaystyle{ 8\pi^2/9 }[/math]. After this, the next eigenvalue comes from the complex solution

[math]\displaystyle{ \exp( 2\pi i (2x-y-z)/3 ) + \exp( 2\pi i (2y-z-x)/3 ) + \exp( 2\pi i (2z-x-y)/3 ) }[/math]

and its complex conjugate, which have eigenvalue [math]\displaystyle{ 8\pi^2/3 }[/math].
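(For the record, the eigenvalue attached to a frequency [math]\displaystyle{ (a,b,c) }[/math] with [math]\displaystyle{ a+b+c=0 }[/math] is [math]\displaystyle{ (2\pi/3)^2(a^2+b^2+c^2) }[/math]; the six frequencies of the form [math]\displaystyle{ (1,0,-1) }[/math] give [math]\displaystyle{ \frac{4\pi^2}{9}\cdot 2 = \frac{8\pi^2}{9} }[/math], and those of the form [math]\displaystyle{ (2,-1,-1) }[/math] give [math]\displaystyle{ \frac{4\pi^2}{9}\cdot 6 = \frac{8\pi^2}{3} }[/math], matching the values quoted above.)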

An alternate (but slightly less symmetric) description is given in [LS2009], where the triangle is now in the plane with vertices (0,0), (1,0), [math]\displaystyle{ (1/2,\sqrt{3}/2) }[/math]. Now the second eigenvalue is [math]\displaystyle{ 16 \pi^2/9 }[/math], with eigenfunctions

[math]\displaystyle{ 2 (\cos(\pi (2x-1)/3) + \cos(2\pi y/\sqrt{3})) \sin(\pi(2x-1)/3) }[/math]

(anti-symmetric around x=1/2) and

[math]\displaystyle{ \cos(2\pi(2x-1)/3) - 2 \cos(\pi(2x-1)/3) \cos(2\pi y/\sqrt{3}) }[/math]

(symmetric around x=1/2).
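
A quick symbolic check of these two formulas (my own sketch; it needs the Symbolic Math Toolbox) confirms that both functions satisfy [math]\displaystyle{ -\Delta u = \frac{16\pi^2}{9} u }[/math]:

  syms x y real
  P  = sym(pi);
  u1 = 2*(cos(P*(2*x-1)/3) + cos(2*P*y/sqrt(sym(3))))*sin(P*(2*x-1)/3);
  u2 = cos(2*P*(2*x-1)/3) - 2*cos(P*(2*x-1)/3)*cos(2*P*y/sqrt(sym(3)));
  simplify(-laplacian(u1, [x y]) - (16*P^2/9)*u1)     % returns 0
  simplify(-laplacian(u2, [x y]) - (16*P^2/9)*u2)     % returns 0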

The complex second eigenfunction appears to map the equilateral triangle to a slightly concave version of that triangle, so that the extreme points are always on the corners.

Has anyone done numerical work yet to estimate how sensitive the eigenvalue degeneracy lifting is to perturbations of a starting equilateral triangle? Any proof will have to address the issue that the third eigenvalue may be arbitrarily close to the second and has an eigenfunction with extremes at a different pair of corners.

Here is some preliminary data. As I am not sure how to simulate the true eigenfunctions of the triangle, the following is for the graph whose eigenvectors should roughly approximate the true eigenfunctions:

For the case that a=b=c=1 (equilateral triangle):

  • HotSpotsAny(64,1,1,1,2) yields the corresponding eigenvalue -0.001070825047269
  • HotSpotsAny(64,1,1,1,3) yields the corresponding eigenvalue -0.001070825047269
  • HotSpotsAny(64,1,1,1,4) yields the corresponding eigenvalue -0.003211901603854

We see that indeed the eigenvalue -0.001070825047269 has multiplicity two.

Perturbing a slightly, for a=1.1, b = 1, c = 1 (isosceles triangle where the odd angle is larger than the other two):

  • HotSpotsAny(64,1.1,1,1,2) yields the corresponding eigenvalue -0.001078552707489
  • HotSpotsAny(64,1.1,1,1,3) yields the corresponding eigenvalue -0.001131412869938

Whereas for a=.9, b = 1, c = 1 (isosceles triangle where the odd angle is smaller than the other two):

  • HotSpotsAny(64,.9,1,1,2) yields the corresponding eigenvalue -0.001004876221957
  • HotSpotsAny(64,.9,1,1,3) yields the corresponding eigenvalue -0.001062028119964

In either case we see that the third eigenvalue is perturbed away from the second.

What strikes me as interesting is how different the outcome is in increasing a by 0.1 versus decreasing a by 0.1. In the former case, the second eigenvalue barely changes whereas in the latter case the second eigenvalue changes quite a bit. I imagine this ties into the heuristic “sharp corners insulate heat” — reducing a to .9 produces a sharper corner leading to more heat insulation and a much smaller (in absolute value) second eigenvalue, whereas increasing a to 1.1 makes that corner less sharp but the other two corners are still relatively sharp so the heat insulation isn’t affected as much. Just a guess though…

It looks like the degeneracy lifting might be linear. It’s the ratio of the second and third eigenvalues that matters more, which is close to the same in either case, and approximately half as far from 1 as the perturbation. Using (9/10, 1, 1) should be the same as using (1, 10/9, 10/9) and then scaling all eigenvalues by 90%.

Obtuse triangles

The obtuse case was settled in [BB1999] using a reflected Brownian motion approach. The “hot” and “cold” spots are located at the most distant vertices.

Right-angled triangles

The right-angled case of the conjecture is known, for instance it follows from the more general results in [AB2002] on lip domains (domains between two Lipschitz graphs of constant at most 1). If ABC is a right angled triangle with right angle at B, then the second eigenfunction is simple, and (after changing sign if necessary) is monotone non-decreasing in the AB, BC directions.

Isosceles right-angled triangles

Consider the triangle with vertices (0,0), (1,0), (1,1). A function on this triangle with Neumann data can be reflected into a function on the entire plane which is periodic with periods (0,2), (2,0) and is symmetric with respect to reflection around the axes x=0, y=0, x=y. As such, the eigenfunctions can be easily computed by Fourier series; the second eigenvalue is simple, and its eigenfunction takes the form [math]\displaystyle{ \cos(\pi x) + \cos(\pi y) }[/math] (up to multiplication by constants).
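
One can check this directly: the normal derivative of [math]\displaystyle{ u = \cos(\pi x) + \cos(\pi y) }[/math] vanishes on the legs y=0 and x=1 (since [math]\displaystyle{ \sin(0)=\sin(\pi)=0 }[/math]) and is proportional to [math]\displaystyle{ \sin(\pi x) - \sin(\pi y) = 0 }[/math] on the hypotenuse y=x, while [math]\displaystyle{ -\Delta u = \pi^2 u }[/math], so the second eigenvalue is [math]\displaystyle{ \pi^2 }[/math]. The only critical points are the vertices, with the maximum 2 at (0,0) and the minimum -2 at (1,1), i.e. at the two endpoints of the longest side, consistent with the conjecture (and with the numerical observation above that the extrema sit at the vertices of the longest side).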

30-60-90 triangle

Another triangle which can be computed explicitly by reflection is the 30-60-90 triangle, as this can be reflected into the equilateral triangle (which in turn can be reflected into the entire plane). In particular, in the plane [math]\displaystyle{ \{ (x,y,z): x+y+z=0\} }[/math], for the 30-60-90 triangle with vertices (0,0,0), (1,-1,0), (1,-1/2,-1/2), the second eigenvalue is simple (and equal to the value [math]\displaystyle{ 8\pi^2/9 }[/math] of the equilateral triangle above), with eigenfunction [math]\displaystyle{ \cos(2\pi(x-y)/3) + \cos(2\pi(y-z)/3) + \cos(2\pi(z-x)/3) }[/math], the restriction of the symmetric real second eigenfunction of the equilateral triangle.

Thin sectors

Consider the sector [math]\displaystyle{ \Omega_0:=\{ (r \cos \theta, r \sin \theta): 0 \leq r \leq R; 0 \leq \theta \leq \alpha \} }[/math] with [math]\displaystyle{ \alpha }[/math] small. By separation of variables, one can use an eigenbasis of the form [math]\displaystyle{ u(r,\theta) = u(r) \cos \frac{\pi k \theta}{\alpha} }[/math] for [math]\displaystyle{ k=0,1,2,\ldots }[/math]. For any non-zero k, the smallest eigenvalue at that angular wave number is the minimum of the Rayleigh quotient [math]\displaystyle{ \int_0^R (u_r^2 + \frac{\pi^2 k^2}{\alpha^2 r^2} u^2)\ r dr / \int_0^R u^2\ r dr }[/math]; the [math]\displaystyle{ k=0 }[/math] case is similar but with the additional constraint [math]\displaystyle{ \int_0^R u\ r dr = 0 }[/math]. This already reveals that the second eigenvalue will occur only at either k=0 or k=1, and for [math]\displaystyle{ \alpha }[/math] small enough it can only occur at [math]\displaystyle{ k=0 }[/math] (because the [math]\displaystyle{ k=1 }[/math] least eigenvalue blows up like [math]\displaystyle{ 1/\alpha^2 }[/math] as [math]\displaystyle{ \alpha \to 0 }[/math], while the [math]\displaystyle{ k=0 }[/math] eigenvalue is constant).

Once [math]\displaystyle{ \alpha }[/math] is small enough that we are in the k=0 regime (i.e. the second eigenfunction is radial), the role of [math]\displaystyle{ \alpha }[/math] is no longer relevant, and the eigenfunction equation becomes the Bessel equation [math]\displaystyle{ u_{rr} + \frac{1}{r} u_r + \lambda u = 0 }[/math], which has solutions [math]\displaystyle{ J_0(\sqrt{\lambda} r) }[/math] and [math]\displaystyle{ Y_0(\sqrt{\lambda} r) }[/math]. The [math]\displaystyle{ Y_0 }[/math] solution is too singular at the origin to actually be in the domain of the Neumann Laplacian, so the eigenfunctions are just [math]\displaystyle{ J_0(\sqrt{\lambda} r) }[/math], with [math]\displaystyle{ \sqrt{\lambda} R }[/math] being a zero of [math]\displaystyle{ J'_0 }[/math]. The second eigenfunction then occurs when [math]\displaystyle{ \sqrt{\lambda} R }[/math] is the first non-trivial zero of [math]\displaystyle{ J'_0 }[/math] ([math]\displaystyle{ 3.8317\ldots }[/math], according to Wolfram alpha). This is a function with a maximum at the origin and a minimum at the circular arc, consistent with the hot spots conjecture.
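
For what it's worth, the numerical value is easy to reproduce (a one-line check of my own): since [math]\displaystyle{ J_0' = -J_1 }[/math], the first non-trivial zero of [math]\displaystyle{ J_0' }[/math] is the first positive zero of [math]\displaystyle{ J_1 }[/math].

  z = fzero(@(x) besselj(1,x), [3 5])   % z = 3.8317..., so lambda_2 = (z/R)^2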

In Lemma 5.1 of [LS2009], this calculation is established for sectors of angle [math]\displaystyle{ \alpha \leq \pi/2.68 \approx 67^\circ }[/math]. After this point, the k=1 mode dominates instead.

Thin not-quite-sectors

From here, I was thinking along the following lines. The wedge can be turned into a nearly degenerate triangle [math]\displaystyle{ \Omega_1 }[/math] by joining the two rays by the chord connecting the points [math]\displaystyle{ (R,0) }[/math] and [math]\displaystyle{ (R,\epsilon) }[/math].

Now suppose there is a map F such that the thin side of the triangle can be written parametrically as [math]\displaystyle{ (R+F(\theta),\theta) }[/math]. We know precisely the 2nd Neumann eigenfunction and eigenvalue on the thin wedge [math]\displaystyle{ \Omega_0 }[/math]. We hypothesize this eigenfunction on the thin triangle is maximized at the same corner. We consider the situation on intermediate domains [math]\displaystyle{ \Omega_\delta }[/math] which are described as [math]\displaystyle{ \{(r,\theta)\vert 0\lt r\lt R+\delta F(\theta),\ 0\lt \theta\lt \epsilon\} }[/math]. We can expand the Neumann eigenfunction in this intermediate region in terms of a series in powers of [math]\displaystyle{ \delta }[/math] as [math]\displaystyle{ u_\delta(r,\theta) = \sum_{n=0}^\infty \delta^n v_n(r,\theta) }[/math]. The first term in the series is just in terms of [math]\displaystyle{ J_0 }[/math]. I think one can compute the higher order terms easily, and that these will satisfy a related, but not identical, problem on the fixed wedge [math]\displaystyle{ \Omega_0 }[/math].

If the map [math]\displaystyle{ F }[/math] is smooth (analytic?) I think such a series must be in fact analytic; and that while [math]\displaystyle{ \delta=1 }[/math] may be outside the disk of analyticity, it is still in the domain of analyticity. This will enable us to get the desired lower bounds on the eigenvalue. In fact, I think the analytic continuation literature may have some pointers. I'll try to add references tomorrow.

However, if the mapping is not smooth this approach may fail. In this situation, one may yet be able to get a good lower bound on the true eigenvalue on the triangle. If [math]\displaystyle{ F }[/math] is not smooth, there will be a smooth map [math]\displaystyle{ G }[/math] which will not parametrize the chord, but some closeby curvilinear segment. The behaviour of the eigenfunction on the resulting domain can be obtained by the process described above.

I think it is important to get away from the wedge (where the tangents to the arc are perpendicular to the sides of the sector, rendering it 'morally right-angled') in some smooth fashion to a geometry where the tangent to the curve [math]\displaystyle{ (R+\delta F(\theta), \theta) }[/math] is not perpendicular to the sides.

Thin triangles

Main article: thin triangles

General domains

A counterexample to the hot spots conjecture for a domain with two holes was established in [BW1999]. A further counterexample in [BB2000] shows that both the maximum and minimum may be attained in the interior. On the other hand, the conjecture was established for convex domains with two axes of symmetry in [JN2000].

Simplicity of eigenvalue

It is possible to show simplicity of the eigenvalue for non-equilateral triangles by getting a sufficiently good upper bound on the second eigenvalue [math]\displaystyle{ \lambda_2 }[/math], and a sufficiently good lower bound on the sum [math]\displaystyle{ \lambda_2+\lambda_3 }[/math] of the second and third eigenvalues. The upper bound on [math]\displaystyle{ \lambda_2 }[/math] can be obtained by using suitable test functions for the Rayleigh quotient, for instance by taking second eigenfunctions of reference triangles (e.g. equilateral, isosceles right-angled, or 30-60-90) and affinely transforming them to the triangle of interest. Lower bounds on [math]\displaystyle{ \lambda_2+\lambda_3 }[/math] can be obtained by lower bounding quadratic forms such as [math]\displaystyle{ \int_{\Omega} |\nabla u_1|^2 + |\nabla u_2|^2 }[/math] for orthonormal mean zero [math]\displaystyle{ u_1,u_2 }[/math], again by comparing with reference triangles where explicit bounds are known. An example of this method appears at http://pages.uoregon.edu/siudeja/simple.pdf
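
In other words, if one can prove an upper bound [math]\displaystyle{ \lambda_2 \leq U }[/math] and a lower bound [math]\displaystyle{ \lambda_2 + \lambda_3 \geq L }[/math] with [math]\displaystyle{ L \gt 2U }[/math], then [math]\displaystyle{ \lambda_3 \geq L - \lambda_2 \geq L - U \gt U \geq \lambda_2 }[/math], so the second eigenvalue is simple.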

The nodal line

Courant's nodal line theorem tells us that the nodal set [math]\displaystyle{ \{u=0\} }[/math] of the second eigenfunction divides the domain T into at most two regions. Since u is mean zero, it must change sign, and so in fact it divides the domain into exactly two regions.

It is in fact known that the nodal line is a continuous curve (ref?). It is not a closed loop, because if it enclosed a region D, then the first Dirichlet eigenvalue of D would be equal to the second Neumann eigenvalue of T. But the first Dirichlet eigenvalue of D exceeds the first Dirichlet eigenvalue of T, and a classical inequality of Polya asserts that the second Neumann eigenvalue of T is less than or equal to the first Dirichlet eigenvalue of T, giving a contradiction. Thus the nodal line must instead hit the boundary of T at two points.

A variant of this argument shows that the two boundary points of the nodal line cannot lie on the same edge of the triangle T. For if this were the case, we could reflect around this edge and obtain a Neumann eigenfunction on a kite Q whose nodal line encloses a domain D. As before, the first Dirichlet eigenvalue of D equals the second Neumann eigenvalue of T, which is one of the Neumann eigenvalues of Q. On the other hand, as Q is convex, the results of [Fr1995] imply that the first Dirichlet eigenvalue of Q is greater than or equal to the third Neumann eigenvalue of Q. So we need to show that the unfolded version of the second Neumann eigenfunction of T is the second or third Neumann eigenfunction of Q. If this were not the case, then the second and third Neumann eigenfunctions of Q must be antisymmetric instead of symmetric. But then one of these eigenfunctions must change sign on T (as they will be orthogonal to each other), and thus have at least four nodal sign domains, contradicting the Courant nodal theorem.

Regularity and stability theory

In [CZ1994] some bounds for the Neumann heat kernels [math]\displaystyle{ P_t(x,y) }[/math] on a general domain are given; in particular, one can bound this kernel by a multiple of the Euclidean heat kernel for t small enough. Using this bound, [BP2008] showed uniform stability of the second eigenfunction (in the uniform topology) and eigenvalue with respect to uniform perturbations of the domain.

In a sector [math]\displaystyle{ \{ (r \cos \theta, r \sin \theta): r \gt 0; 0 \leq \theta \leq \alpha \} }[/math], solutions to the eigenfunction equation [math]\displaystyle{ \Delta u = -\lambda u }[/math] with Neumann data can be computed using separation of variables in polar coordinates as

[math]\displaystyle{ u(r,\theta) = \sum_{k=0}^\infty u^{(k)}(r,\theta) }[/math]

where

[math]\displaystyle{ u^{(k)}(r,\theta) = c_k J_{\pi k / \alpha}(\sqrt{\lambda} r) \cos( \frac{\pi k}{\alpha} \theta ), }[/math]

[math]\displaystyle{ c_k }[/math] are real coefficients, and [math]\displaystyle{ J_\beta }[/math] are the usual Bessel functions

[math]\displaystyle{ J_\beta(r) = \frac{1}{2^{\beta-1} \Gamma(\beta+\frac{1}{2}) \sqrt{\pi}} r^\beta \int_0^1 (1-t^2)^{\beta-1/2} \cos(rt)\ dt }[/math].

One consequence of this particular representation of the Bessel functions is that we observe the pointwise bounds

[math]\displaystyle{ c_\beta \leq J_\beta(r)/r^\beta \leq C c_\beta }[/math]

for all [math]\displaystyle{ 0\lt r\lt 1 }[/math] (say) and [math]\displaystyle{ \beta \geq 0 }[/math], and some constants [math]\displaystyle{ c_\beta, C }[/math]. Among other things, this gives pointwise bounds of the form

[math]\displaystyle{ u^{(k)}(r,\theta) = O( (r/r_0)^{\pi k/\alpha} (\frac{1}{\alpha} \int_0^\alpha |u(r_0,\theta)|^2\ d\theta)^{1/2} ) }[/math]

whenever [math]\displaystyle{ \sqrt{\lambda}r, \sqrt{\lambda} r_0 \lt 1 }[/math]. If u is locally in L^2, this gives bounds near the corner of the sector of the form

[math]\displaystyle{ u^{(k)}(r,\theta) = O( (r/r_0)^{\pi k/\alpha} ) }[/math]

uniformly in k. Among other things, this implies for acute sectors [math]\displaystyle{ \alpha\lt \pi/2 }[/math] that [math]\displaystyle{ \nabla^2 u }[/math] is bounded (and even Hölder continuous) all the way up to the corner. In particular, Neumann eigenfunctions on acute triangles are [math]\displaystyle{ C^{2,\varepsilon} }[/math] all the way up to the boundary. The same analysis shows that third derivatives [math]\displaystyle{ \nabla^3 u }[/math] blow up by at worst [math]\displaystyle{ O(|x-x_0|^{-1+\varepsilon}) }[/math] as one approaches an acute vertex [math]\displaystyle{ x_0 }[/math]. Away from the corners, u is smooth, as can be seen by reflection arguments and elliptic regularity.
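
A quick numerical illustration (my own sketch) of the two-sided bound [math]\displaystyle{ c_\beta \leq J_\beta(r)/r^\beta \leq C c_\beta }[/math] stated above: on [math]\displaystyle{ 0 \lt r \lt 1 }[/math] the ratio stays within a modest factor of its limiting value [math]\displaystyle{ 1/(2^\beta\Gamma(\beta+1)) }[/math] at [math]\displaystyle{ r = 0 }[/math].

  r = linspace(1e-3, 1, 200);
  for beta = [0 0.5 2 6]
      ratio = besselj(beta, r) ./ r.^beta;
      fprintf('beta = %4.1f : min/max of J_beta(r)/r^beta = %.3g / %.3g\n', ...
              beta, min(ratio), max(ratio));
  end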

On an acute triangle T, suppose we normalise the Neumann eigenfunction u as

[math]\displaystyle{ \displaystyle \int_T |u|^2 = |T|. }[/math] (E-0)

Integration by parts, using the Neumann condition to eliminate boundary terms, then yields

[math]\displaystyle{ \int_T |\nabla u|^2 = \lambda |T|. }[/math] (E-1)

One can carefully justify this integration by parts by cutting out the corners, replacing them with small circular arcs, applying Stokes' theorem, then sending the arcs to zero. A second integration by parts (still using the Neumann condition to eliminate boundary terms) gives

[math]\displaystyle{ \int_T |\nabla^2 u|^2 = \lambda^2 |T|. }[/math] (E-2)

It turns out that the [math]\displaystyle{ C^{2,\varepsilon} }[/math] regularity of u, combined with the [math]\displaystyle{ O(|x-x_0|^{-1+\varepsilon}) }[/math] decay of the third derivative, allows one to integrate by parts one final time to obtain

[math]\displaystyle{ \int_T |\nabla^3 u|^2 = \lambda^3 |T|, }[/math] (E-3)

because the effect of the boundary terms at the corner circular arcs still goes to zero. Note that the boundary terms still vanish, because they involve products of pairs of derivatives of u, one of which contains an odd number of normal derivatives, which then vanishes on the boundary by symmetry.

Among other things, this gives good C^2 bounds for a thin acute triangle ABC, with A near (0,0) and B,C within [math]\displaystyle{ O(\delta) }[/math] of (1,0) for some small [math]\displaystyle{ \delta = |BC| }[/math]. It should be the case here that [math]\displaystyle{ \lambda }[/math] is comparable to 1. Assuming this, we see from (E-0)-(E-3) that the first three derivatives of u have average size O(1) on T. The Bessel function analysis around A then already gives C^2 bounds of O(1) on the half of the triangle that is closer to A than to BC. On the other end, one can use the Poincare inequality to conclude that the average value of [math]\displaystyle{ |u|+ |\nabla u|+ |\nabla^2 u| }[/math] on the [math]\displaystyle{ O(\delta) }[/math]-neighbourhood of BC is also O(1), which by Bessel function analysis around B or C should then give C^2 bounds of O(1) on u in (a slightly smaller version of) this neighbourhood. Putting this all together, I think one has uniform C^2 bounds on all of T that are independent of delta.

Eigenvalue bounds

In [LS2009] it is shown that for any triangle of diameter D, the quantity [math]\displaystyle{ \lambda_2 D^2 }[/math] is minimised for degenerate acute isosceles triangles (or for sufficiently acute sectors), and that for a triangle of perimeter P, [math]\displaystyle{ \lambda_2 (P/3)^2 }[/math] is maximised at the equilateral triangle:

[math]\displaystyle{ 14.7 \approx j_{1,1}^{2} \leq \lambda_2 D^2 }[/math]

and

[math]\displaystyle{ \lambda_2 (P/3)^2 \leq \frac{16\pi^2}{9} \approx 17.5. }[/math]

In [Ch1975] the general upper bound

[math]\displaystyle{ \lambda_2 D^2 \leq 4j_{0,1}^2 \approx 23.1 }[/math]

for all convex domains is established; it becomes sharp for degenerate obtuse triangles.
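
As a quick consistency check, for the unit-side equilateral triangle one has [math]\displaystyle{ D = 1 }[/math], [math]\displaystyle{ P/3 = 1 }[/math] and (from the section on equilateral triangles) [math]\displaystyle{ \lambda_2 = 16\pi^2/9 \approx 17.5 }[/math], so [math]\displaystyle{ \lambda_2 D^2 \approx 17.5 }[/math] indeed lies between the lower bound 14.7 and the convex-domain upper bound 23.1, while [math]\displaystyle{ \lambda_2 (P/3)^2 = 16\pi^2/9 }[/math] attains the perimeter bound with equality, as it should.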

Here is a simple eigenvalue comparison theorem: if [math]\displaystyle{ T: D \to TD }[/math] is a linear transformation, then

[math]\displaystyle{ \|T\|_{op}^{-2} \lambda_k(D) \leq \lambda_k(TD) \leq \|T^{-1}\|_{op}^2 \lambda_k(D) }[/math]

for each k. This is because of the Courant-Fischer minimax characterisation of [math]\displaystyle{ \lambda_k(D) }[/math] as the supremum of the infimum of the Rayleigh-Ritz quotient [math]\displaystyle{ \int_D |\nabla u|^2/ \int_D |u|^2 }[/math] over all codimension k-1 subspaces of [math]\displaystyle{ L^2(D) }[/math], and because any candidate [math]\displaystyle{ u \in L^2(D) }[/math] for the Rayleigh-Ritz quotient on D can be transformed into a candidate [math]\displaystyle{ u \circ T^{-1} \in L^2(TD) }[/math] for the Rayleigh-Ritz quotient on TD, and vice versa. More sophisticated comparison theorems involving comparison of one triangle against two reference triangles appear in [LS2009].

One corollary of this theorem is that if one has a spectral gap [math]\displaystyle{ \lambda_2(D) \lt \lambda_3(D) }[/math] for some triangle D, then this spectral gap persists for all nearby triangles TD, as long as T has condition number less than [math]\displaystyle{ (\lambda_3(D)/\lambda_2(D))^{1/2} }[/math].
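
Spelling this out: if [math]\displaystyle{ T }[/math] has singular values [math]\displaystyle{ \sigma_1 \geq \sigma_2 \gt 0 }[/math], then [math]\displaystyle{ \|T\|_{op} = \sigma_1 }[/math] and [math]\displaystyle{ \|T^{-1}\|_{op} = 1/\sigma_2 }[/math], so the comparison theorem gives [math]\displaystyle{ \lambda_2(TD) \leq \sigma_2^{-2} \lambda_2(D) }[/math] and [math]\displaystyle{ \lambda_3(TD) \geq \sigma_1^{-2}\lambda_3(D) }[/math]. The first right-hand side is smaller than the second precisely when [math]\displaystyle{ (\sigma_1/\sigma_2)^2 \lt \lambda_3(D)/\lambda_2(D) }[/math], i.e. when the condition number of T is less than [math]\displaystyle{ (\lambda_3(D)/\lambda_2(D))^{1/2} }[/math], and in that case [math]\displaystyle{ \lambda_2(TD) \lt \lambda_3(TD) }[/math].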

In [BP2008] it is also shown under the hypothesis of simple eigenvalues that the eigenfunctions vary uniformly with respect to small perturbations.

Miscellaneous remarks

If the sides of the triangle have lengths A, B, and C, currently you are using edge weights a=A, b=B, and c=C. This solves a different heat equation, when you pass to the limit, than the one you want. The assumption that each small triangle is at a uniform temperature gives an extra boost to the overall heat conduction rate in those directions along which the small triangles are longer, so in the limit you are modeling heat conduction in a medium with anisotropic thermal properties. It is just as though you started with a material of extremely high thermal conductivity and then sliced it in three different directions, with three different spacings, to insert thin strips of insulating material of a constant thickness.

Based on some edge cases, I suspect that the correct formulas for edge weights to model an isotropic material in the limit are as follows: [math]\displaystyle{ a = 1/(-A^2+B^2+C^2) }[/math], [math]\displaystyle{ b=1/(A^2-B^2+C^2) }[/math], [math]\displaystyle{ c=1/(A^2+B^2-C^2) }[/math]. Interestingly enough, these formulas are only defined (with positive weights) in the case of acute triangles, which suggests that this approach, if it works, may not provide an independent proof of the known cases.
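
A trivial check of the sign claim (my own sketch; the side lengths are an arbitrary acute example):

  A = 1.0; B = 1.1; C = 1.2;                 % acute: C^2 < A^2 + B^2, etc.
  a = 1/(-A^2 + B^2 + C^2);
  b = 1/( A^2 - B^2 + C^2);
  c = 1/( A^2 + B^2 - C^2);
  [a b c]                                    % all three weights are positive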


Courant's nodal line theorem tells us that the nodal curve [math]\displaystyle{ \{u=0\} }[/math] of a second eigenfunction can cut the domain into at most two components (otherwise, with three or more components, one could reduce the Rayleigh quotient while still having mean zero). On the other hand, the eigenfunction must change sign on the boundary (if it were positive on the boundary, the eigenfunction equation would be impossible to satisfy at the minimum on the boundary), and so the nodal curve cannot be closed, but must instead cut across the domain.

Nadirashvili [Na1986] has shown that the multiplicity of the second eigenvalue is at most 2; a simpler proof was given in [BB1999].

Software

  • A MATLAB program used to generate and display the eigenvectors of the graphs G_n. The input format is HotSpotsAny(n,a,b,c,e), where a,b,c are the edge weights, n is the number of rows in the graph, and e determines which eigenvector of the graph Laplacian is displayed. (Note that in the proposal write up the graph G_n has 2^n rows so if you wanted to simulate the Fiedler vector for G_6 with edge weights a=1,b=2,c=3 then you would type HotSpotsAny(64,1,2,3,2)). For large n, the eigenvectors of the graph Laplacian should approximate the eigenfunctions of the Neumann Laplacian of the corresponding triangle… so the program can be used to roughly simulate the true eigenfunctions as well.

Numerics

Using a finite element method, here are the eigenfunctions of the Neumann Laplacian for a couple of acute triangles:

The domain is subdivided into a finite number N of smaller triangles. On each, I assumed the eigenfunction can be represented by a linear polynomial. Across interfaces, continuity is enforced. I used Matlab, and implemented a first order conforming finite element method. In other words, I obtained an approximation from a finite-dimensional subspace consisting of piece-wise linear polynomials. The approximation is found by considering the eigenvalue problem recast in variational form. This strategy reduces the eigenvalue question to that of finding the second eigenfunction of a finite-dimensional matrix, which in turn is done using an iterative method. As N becomes large, my approximations should converge to the true desired eigenfunction. This follows from standard arguments in numerical analysis.

Additionally, I've computed a couple of approximate solutions to the heat equation in acute triangles. The initial condition is chosen to have an interior 'bump', and I wanted to see where this bump moved. Again, I've plotted the contour lines of the solutions as well, and one can see the bump both smoothing out, and migrating to the sharper corners:

I think in the neighbourhood of the corner with interior angle [math]\displaystyle{ \frac{\pi}{\alpha} }[/math], the asymptotic behaviour of the (nonconstant) eigenfunctions should be of the form [math]\displaystyle{ r^\alpha \cos(\alpha \theta) + o(r^{\alpha}) }[/math], where r is the radial distance from the corner.
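
Indeed, by the Bessel expansion in the regularity section above (or in the next paragraph), the k-th angular mode at such a corner behaves like [math]\displaystyle{ J_{\alpha k}(\sqrt{\lambda} r) \cos(\alpha k \theta) \sim c_k r^{\alpha k} \cos(\alpha k \theta) }[/math] near the corner, so the first non-constant term in the expansion is proportional to [math]\displaystyle{ r^{\alpha} \cos(\alpha\theta) }[/math].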

Next, if one had an unbounded wedge of interior angle [math]\displaystyle{ \frac{\pi}{\alpha} }[/math], the (non-constant) eigenfunctions of the Neumann Laplacian would be given in terms of [math]\displaystyle{ P_n(r,\theta) = J_{\alpha n} (\sqrt{\lambda} r) \cos(\alpha n \theta), \quad n=0,1,2,\ldots }[/math] Here the [math]\displaystyle{ J_{\alpha n} }[/math] are Bessel functions of the first kind (of order [math]\displaystyle{ \alpha n }[/math]). The spectrum for this problem is continuous. When we consider the Neumann problem on a bounded wedge, the spectrum becomes discrete.

This makes me think approximating the second eigenfunction of the Neumann Laplacian in terms of a linear combination of such [math]\displaystyle{ P_n(r,\theta) }[/math] could be illuminating.

This idea is not new, and a numerical strategy along these lines for the Dirichlet problem was described in [FHM1967]. Betcke and Trefethen have a nice recent paper [BT2005] on an improved variant of this 'method of particular solutions'.


Results of finite element+Arnoldi iteration calculations

Here are some graphics, to explore the parameter region (BDH) above. The calculations are performed using FEM code developed by Joseph Coyle. To enable visualization, data is plotted as functions of (alpha, beta). We take a rectangular grid oriented with the sides BD and DH, with 25 steps in each direction, so there are (25)^2 grid points.

Each parameter [math]\displaystyle{ (alpha,beta,gamma) }[/math] yields a triangle [math]\displaystyle{ \Omega }[/math]. We fix one side to be of unit length. For details, please see the section on Computational Strategies above.


For each triangle, the second Neumann eigenvalue and third eigenvalue (first and second non-zero Neumann eigenvalues) are computed. We also kept track of where max|u| occurs, where u is the second eigenfunction. This is because numerically I can get either u or -u.

A plot of the 2nd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda2.jpg

A plot of the 3rd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda3.jpg

A plot of the spectral gap, \lambda_3-\lambda_2 multiplied by the area of the triangle \Omega as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/SpectralGap.jpg

One sees that the eigenvalues vary smoothly in parameter space, and that the spectral gap is largest for acute triangles without particular symmetries.

For each triangle, we also kept track of the physical location of max|u|. If it went to the corner (0,0), we allocated a value of 1; if it went to (1,0) we allocated a value of 2, and if it went to the third corner, we allocated 3. If the maximum was not reported to be at a corner, we put a value of 0.

http://www.math.sfu.ca/~nigam/polymath-figures/LocationOfAbsMax.jpg

shows the result. Note that we obtain some values of 0 inside parameter space. This cannot be interpreted to mean the conjecture fails. Rather, this is a signal that the eigenfunction is likely flattening out near a corner, and that the numerical values at points near the corner are very close.

Bibliography