This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions for the equation have been streamlined, made faster, and extended to larger heights than were previously possible. The best bound for now depends on the height to which one is willing to assume the Riemann hypothesis. Using the conservative verification up to height (slightly larger than) , which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at . Using the verification up to height claimed by Gourdon-Demichel, this improves slightly to , and if one assumes the Riemann hypothesis up to height the bound improves to , contingent on a numerical computation that is still underway. (See the table below the fold for more data of this form.) This is broadly consistent with the expectation that the bound on should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.

As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project. (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of for negative values of , but the writeup does not necessarily have to contain every single direction pursued in the project. If enough additional interesting findings are unearthed then one could always consider writing a second paper, for instance.)

Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.

— Quick recap —

The effectively bounded and normalised Riemann-Siegel-type asymptotic approximation for :

enables us to explore its complex zeros and to establish zero-free regions. By choosing a promising combination and , and then numerically and analytically showing that the right-hand side does not vanish in the rectangular-shaped “canopy” (or at a point on the blue hyperbola), a new DBN upper bound will be established. This is summarized in the following visual:

— The Barrier approach —

To verify that in such a rectangular strip, we have adopted the so-called Barrier approach, which comprises three stages (illustrated in a picture below):

1. Use the numerical verification work on the RH already done by others. Independent teams have now verified the RH up to , and a single study took it up to . This work allows us to rule out, up to a certain , that a complex zero has flown through the critical strip into any defined canopy. To also cover the x-domains that lie beyond these known verifications, we have to assume the RH up to ; this then yields a that is conditional on this assumption.

2. Complex zeros could also have flown horizontally into the ‘forbidden tunnel’ at high velocity. To numerically verify that this has not occurred, a Barrier is introduced at and checked for any zeros having flown around, through or over it.

3. Verifying the range (or ) is done by testing that the lower bound of always stays higher than the upper bound of the error terms. This has to be done numerically up to a certain point , after which an analytical proof takes over.

So, new numerical computations are required to verify that both the Barrier at and the non-analytical part of the range are zero-free for a certain choice of .

— Verifying the Barrier is zero-free —

So, how to numerically verify that the Barrier is zero-free?

The Barrier is required to have two nearby screens at and to ensure that no complex zeros could fly around it. Hence, it has the 3D structure: . For the numerical verification that the Barrier is zero-free, it is treated as a ‘pile’ of rectangles. For each rectangle the winding number is computed using the argument principle and Rouché’s theorem. For each rectangle, the number of mesh points required is decided using the -derivative, and the t-step is decided using the -derivative.
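As a toy illustration (the project's actual computations are done in pari/gp and ARB with rigorous error control), the argument-principle count of zeros inside one such rectangle can be sketched in Python as follows; the mesh-fineness assumption here stands in for the derivative-based mesh bounds described above:

```python
import cmath

def rectangle_mesh(x0, x1, y0, y1, n):
    """Counterclockwise mesh points on the boundary of the rectangle
    [x0, x1] x [y0, y1], with n points per side."""
    pts = []
    for k in range(n):  # bottom edge, left to right
        pts.append(complex(x0 + (x1 - x0) * k / n, y0))
    for k in range(n):  # right edge, bottom to top
        pts.append(complex(x1, y0 + (y1 - y0) * k / n))
    for k in range(n):  # top edge, right to left
        pts.append(complex(x1 - (x1 - x0) * k / n, y1))
    for k in range(n):  # left edge, top to bottom
        pts.append(complex(x0, y1 - (y1 - y0) * k / n))
    return pts

def winding_number(f, x0, x1, y0, y1, n=100):
    """Zero count inside the rectangle via the argument principle:
    accumulate the phase change of f along the boundary mesh.
    Assumes the mesh is fine enough that successive phase jumps
    stay below pi -- in practice this is exactly what the
    derivative bounds on the mesh spacing are used to guarantee."""
    pts = rectangle_mesh(x0, x1, y0, y1, n)
    total = 0.0
    for i in range(len(pts)):
        w1 = f(pts[i])
        w2 = f(pts[(i + 1) % len(pts)])
        total += cmath.phase(w2 / w1)  # principal value in (-pi, pi]
    return round(total / (2 * cmath.pi))
```

For example, a function with one zero inside the rectangle [0,2] x [0,1] yields a winding number of 1, while a zero-free rectangle yields 0.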

Optimizations used for the barrier computations

To efficiently calculate all required mesh points of on the rectangle sides, we used a pre-calculated stored sum matrix that is Taylor expanded in the and -directions. The resulting polynomial is used to calculate the required mesh points. The formula for the stored sum matrix:

with and , where and are the number of Taylor expansion terms required to achieve the required level of accuracy (in our computations we used 20 digits and an algorithm to automatically determine and ).
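A hypothetical sketch of the stored-sums idea: pre-compute a matrix of Taylor coefficients once, then evaluate the resulting two-variable polynomial cheaply at many offsets. Here the toy function exp(t + 2y) stands in for the actual Dirichlet sums, and its coefficient formula is specific to this toy:

```python
import math

def taylor_matrix_exp(n_t, n_y):
    """Coefficient matrix c[i][j] of the 2D Taylor expansion of
    the toy function f(t, y) = exp(t + 2y) around (0, 0); this
    stands in for the pre-calculated stored-sum matrix."""
    return [[2 ** j / (math.factorial(i) * math.factorial(j))
             for j in range(n_y)] for i in range(n_t)]

def eval_taylor(c, dt, dy):
    """Evaluate the 2D polynomial sum_{i,j} c[i][j] dt^i dy^j
    (Horner's scheme in both directions)."""
    total = 0.0
    for row in reversed(c):
        inner = 0.0
        for coeff in reversed(row):
            inner = inner * dy + coeff
        total = total * dt + inner
    return total
```

Once the matrix is generated, each mesh point costs only a small polynomial evaluation rather than a fresh summation, which is the point of the optimization.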

We found that a more careful placement of the Barrier at an makes a significant difference in the computation time required. A good location is where has a large relative magnitude. Since retains some Euler product structure, such locations can be quickly guessed by evaluating a certain Euler product up to a small number of primes, for multiple X candidates in an X range.

Since and have smooth, i.e. non-oscillatory, behavior, using conservative numeric integrals with the Lemma 9.3 summands, , instead of the actual summation is feasible and significantly faster (the time complexity of the estimation becomes independent of ).

Using a fixed mesh for a rectangle contour (which can change from rectangle to rectangle) allows for vectorized computations and is significantly faster than using an adaptive mesh. To determine the number of mesh points, it is assumed that will stay above 1 (which is expected given the way the X location has been chosen, and is later verified after has been computed at all the mesh points). The number is chosen as

Since for the above fixed mesh generally comes out well above 1, the lower bound along the entire contour (not just at the mesh points) is higher than it would be with an adaptive mesh. This property is used to obtain a larger t-step while moving in the t-direction.
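The Euler-product location heuristic just described might be sketched like this; the exponent 1/2, the prime cutoff, and the candidate grid are illustrative assumptions, not the project's exact formula:

```python
def euler_product_magnitude(X, primes=(2, 3, 5, 7, 11, 13), sigma=0.5):
    """Magnitude of a truncated Euler product prod_p (1 - p^{-s})^{-1}
    at s = sigma + iX: a cheap proxy for the relative size of the
    underlying Dirichlet sum near a candidate barrier location X."""
    prod = 1.0 + 0.0j
    for p in primes:
        prod *= 1.0 / (1.0 - p ** (-complex(sigma, X)))
    return abs(prod)

def best_barrier_location(candidates):
    """Pick the candidate X where the proxy magnitude is largest."""
    return max(candidates, key=euler_product_magnitude)
```

Because only a handful of primes enter the product, thousands of candidate X values can be screened almost instantly before committing to the expensive barrier computation.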

— Verifying the range —

This leaves us with ensuring that the range (where is the value of corresponding to the barrier ) is zero-free, by checking that for each , the lower bound always exceeds the upper bound of the error terms.

From theory, two lower bounds are available: the Lemma bound (eq. 80 in the writeup) and an approximate Triangle bound (eq. 79 in the writeup). Both bounds can be ‘mollified’ by choosing an increasing number of primes (up to a certain extent) until the bound is sufficiently positive. The Lemma bound is used to find the number of ‘mollifiers’ required to make the bound positive at . We found that using primes was the maximum number of primes that still allowed acceptable computational performance. The approximate Triangle bound evaluates faster and is used to establish the mollified (either 0 primes or only the prime 2) end point before the analytical lower bound takes over. The Lemma bound is then also used to verify that for each in , the lower bound stays sufficiently above the error terms. The Lemma bound only needs to be verified on the line segment , since it increases monotonically as goes to 1.
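To illustrate mollification in a toy setting (not the writeup's eq. 79/80): multiplying a Dirichlet sum by a factor (1 - p^{-s}) cancels many of its coefficients, which can raise the crude triangle-inequality lower bound. A minimal sketch, using a partial zeta-type sum as a stand-in:

```python
def triangle_bound(coeffs, sigma):
    """Triangle-inequality lower bound for |sum_n a_n n^{-s}| on
    Re(s) = sigma: the leading coefficient |a_1| minus the
    absolute sum of all remaining terms."""
    return abs(coeffs.get(1, 0)) - sum(
        abs(a) * n ** (-sigma) for n, a in coeffs.items() if n != 1)

def mollify(coeffs, p):
    """Dirichlet-series coefficients of (1 - p^{-s}) * sum_n a_n n^{-s}:
    each coefficient a_n also contributes -a_n at index n*p."""
    out = dict(coeffs)
    for n, a in coeffs.items():
        out[n * p] = out.get(n * p, 0) - a
    return out
```

In this toy, mollifying by the prime 2 zeroes out every even coefficient up to the cutoff at the cost of a small far tail, so the triangle bound strictly improves; the project's Lemma bound exploits the same cancellation with several primes at once.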

Optimizations used for Lemmabound calculations

To speed up computations, a fast “sawtooth” mechanism has been developed. This only calculates the minimally required incremental Lemma bound terms, and only triggers a full recalculation when the incremental bound drops below a defined threshold (that is sufficiently above the error bounds).

where

(as presented within section 9 of the writeup, pg. 42)
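The sawtooth mechanism can be sketched generically as follows, assuming `decrement(N)` is a proven upper bound on how much the true Lemma bound can drop per step (so the running estimate is always pessimistic); `full_bound` and `decrement` are hypothetical stand-ins for the writeup's formulas:

```python
def sawtooth_verify(full_bound, decrement, n_start, n_end, threshold):
    """Verify full_bound(N) > threshold for all N in [n_start, n_end]
    with a sawtooth strategy: after each expensive full evaluation,
    advance using a cheap pessimistic per-step decrement, and only
    re-evaluate fully when the running estimate dips to the threshold.
    Returns (ok, number_of_full_evaluations)."""
    n, full_evals = n_start, 0
    running = full_bound(n)
    full_evals += 1
    if running <= threshold:
        return False, full_evals
    while n < n_end:
        n += 1
        running -= decrement(n)          # cheap incremental update
        if running <= threshold:
            running = full_bound(n)      # expensive full recalculation
            full_evals += 1
            if running <= threshold:
                return False, full_evals
    return True, full_evals
```

The profile of the running estimate is a sawtooth: long cheap descents punctuated by occasional full recalculations, which is where the speed-up comes from.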

— Software used —

To accommodate the above, the following software has been developed in both pari/gp ( https://pari.math.u-bordeaux.fr ) and ARB ( http://arblib.org ):

For verifying the Barrier:

- Barrier_Location_Optimizer to find the optimal location to place the Barrier.

- Stored_Sums_Generator to generate, in matrix form, the coefficients of the Taylor polynomial. This is a one-off activity for a given , after which the coefficients can be used for winding number computations in different and ranges.

- Winding_Number_Calculator to verify that no complex zeros passed the Barrier.

For verifying the range:

- Nb_Location_Finder to find the number of mollifiers required to make the bound positive.

- Lemmabound_calculator: firstly, different mollifiers are tried to see which one gives a sufficiently positive bound at . Then the calculator can be used with that mollifier to evaluate the bound for each in . The range can also be broken up into sub-ranges, which can then be tackled with different mollifiers.

- LemmaBound_Sawtooth_calculator to verify that each incrementally calculated Lemma bound stays above the error bounds. Generally this script and the Lemmabound_calculator are substitutes for each other, although the latter may also be used for some initial portion of the N range.

Furthermore we have developed software to compute:

- as and/or .

- the exact value (using the bounded version of the 3rd integral approach).

The software supports parallel processing through multi-threading and grid computing.

— Results achieved —

For various combinations of , these are the numerical outcomes:

The numbers suggest that we have now numerically verified that (even at two different Barrier locations). Also, conditional on the RH being verified up to various , we have now reached a . We are cautiously optimistic that the tools available right now even bring a conditional within reach of computation.

— Timings for verifying DBN —

Procedure | Timings
Stored sums generation at X = 6*10^10 + 83951.5 | 42 sec
Winding number check in the barrier for t=[0,0.2], y=[0.2,1] | 42 sec
Lemma bounds using incremental method for N=[69098, 250000] and a 4-prime mollifier {2,3,5,7} | 118 sec
Overall | ~200 sec

Remarks:

Timings to be multiplied by a factor of ~3.2 for each incremental order of magnitude of x.

Parallel processing significantly improves speed (e.g. Stored sums was done in < 7 sec).

Mollifier 2 analytical bound at is .

— Links to computational results and software used: —

Numerical results achieved:

Stored sums https://github.com/km-git-acc/dbn_upper_bound/tree/master/output/storedsums

Winding numbers https://github.com/km-git-acc/dbn_upper_bound/tree/master/output/windingnumbers

Software scripts used: