Andy wrote: Interesting observation. I am thinking that they probably use the 100 MHz counter to synchronize with the GPS time every 0.6 s and then keep a 10 ns resolution time based on that. The photocathode interface to the FPGA is most likely connected to a high-speed SerDes port running at many gigabits per second. Say, for instance, it is running at 32x the system clock, at 3.2 Gbit/s. Then every 100 MHz clock cycle you would get a 32-bit word indicating the photo-detection event within that 10 ns block. This would give you a resolution of about 312.5 ps. So you would then add to the 100 MHz counter value based on the bit position of the detection, with 312.5 ps resolution.
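As a rough sketch of the arithmetic above (the 32-bit word format and the first-set-bit convention are purely illustrative assumptions, not anything from the OPERA paper):

```python
# Sketch of the coarse+fine timestamping scheme described above.
# Assumed for illustration: a 100 MHz system clock, a 32x SerDes
# delivering one 32-bit word per 10 ns period, and the detection
# event marked by the first set bit in that word.

COARSE_PERIOD_NS = 10.0          # one 100 MHz clock cycle
BITS_PER_WORD = 32               # 3.2 Gbit/s / 100 MHz
FINE_STEP_NS = COARSE_PERIOD_NS / BITS_PER_WORD  # 0.3125 ns = 312.5 ps

def timestamp_ns(coarse_count: int, serdes_word: int) -> float:
    """Combine the coarse counter with the bit position of the first
    detected '1' in the deserialized word (bit 0 = earliest)."""
    for bit in range(BITS_PER_WORD):
        if serdes_word & (1 << bit):
            return coarse_count * COARSE_PERIOD_NS + bit * FINE_STEP_NS
    raise ValueError("no detection in this word")

# A hit in bit 5 of the word for clock cycle 1000:
print(timestamp_ns(1000, 1 << 5))  # -> 10001.5625
```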



I would not expect any DRAM access during the data acquisition portion of the system as this would all be pipelined inside the FPGA and have very deterministic behavior.

I hope that makes some sense!

mnk wrote: It is quite easy to spot an error like this one. Note that uncertainty manifests itself in the result as fluctuation, so if there really were ±10 ns of fluctuation they would know.



The FPGA system can be designed in a very deterministic way. In fact, I think that would be the first instinct of any engineer who works with FPGAs where accuracy matters. It is extremely easy to test deterministic systems, and even if the system were not so deterministic, the problem would have shown up as a fluke over 3 years.



It is a very, very long shot to assume something went wrong on the FPGA side - it is both easy to catch and easy to test.

mnk wrote: Also, the problem is not a "longer than expected timespan", the problem is "shorter than expected timespan".

AmonRa wrote: https://lh5.googleusercontent.com/-6VkNFIivRDM/ToBnHJEQD9I/AAAAAAAABok/xSt-T3v3ksU/h301/tesla.jpg

AmonRa wrote: http://cdn.gs.uproxx.com/wp-content/uploads/2011/09/neutrino-tesla-471x600.jpg

Nick wrote: mnk: How can we be sure all 3 of these potential sources would fluctuate? If the server room was at issue, it could well be systematic...

mrb wrote: I agree that the FPGA can and should be designed in a deterministic way (it should not be hard). The OPERA team needs to confirm it, hence my question about it. I do agree that this 1st point I raise is the least likely of the 3.

Anonymous Engineer wrote: http://www.xilinx.com/support/documentation/white_papers/wp402_SEE_Considerations.pdf

Maxwell's Daemon wrote: If they take hardware engineering anywhere near as seriously as CERN do (which they inevitably will have, this being a CERN operation), they'll have addressed this, somehow.



Additionally, 100 MHz doesn't necessarily correlate to 10 ns temporal resolution - there are more than a few statistical methods you can apply to improve resolution beyond the immediately available single-measurement resolution, and physicists (I am one) are adept at employing them.



That said, I'd be interested to learn exactly what the spec of the FPGA is, and how they've allowed for their timing resolution.

Einstein number 2 wrote: It is unlikely that an experimental error exists. We will have to accept the results soon. This is how science works, after all.

Andrew wrote: It’s plausible that the FPGA is the source of their timing problems (especially since there aren’t many details of the FPGA), but it’s extremely unlikely that it’s due to any of the issues this author raises.



1) The chances that the FPGA implements “caching” in a manner similar to a CPU are next to zero. Caches make sense when you’re accessing data somewhat randomly (within a limited window) and repeatedly. The data access patterns for most signal processing algorithms are usually highly predictable (e.g. streaming), so there is no need for a cache. Even if DRAM is used for storing timestamps, that doesn’t mean you’d get variation in the input-to-timestamp part of the processing. One can easily tolerate DRAM access variation by using a queue at the DRAM controller. Both the Xilinx and Altera DRAM controllers have built-in queues, so I’d be really surprised if that were an issue.

2) All modern FPGAs have a clock manager which can increase or decrease the frequency of the clock actually running in the FPGA. (The Xilinx FPGAs call these MMCM – Mixed-mode clock managers). You can increase the internal frequency of the clock to 500 MHz or more depending on the FPGA with minimal jitter or skew. So even without better sampling techniques, there is likely less variation than a 100 MHz sample rate would imply. You can also sample on both clock edges if desired.
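To put rough numbers on point 2 (the 5x multiplier is an illustrative value, not a known OPERA configuration):

```python
# How internal clock multiplication and dual-edge sampling shrink the
# sampling step relative to the raw 100 MHz input. The 5x multiplier
# is an assumed example of what an MMCM/PLL could provide.

input_clock_hz = 100e6
multiplier = 5                               # e.g. multiply up to 500 MHz
internal_hz = input_clock_hz * multiplier

single_edge_res_ns = 1e9 / internal_hz       # 2.0 ns at 500 MHz
both_edges_res_ns = single_edge_res_ns / 2   # 1.0 ns sampling both edges

print(single_edge_res_ns, both_edges_res_ns)  # -> 2.0 1.0
```

So even a modest internal multiplication brings the raw sampling step well below the 10 ns that the 100 MHz figure alone would suggest.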



It’s a good idea to look into the FPGA, but caching and a 100 MHz clock are almost certainly not the issues. Still, the authors need to provide some more detail so we can look into it.



Thanks for the post!

Tim wrote: Interesting. You're speculating that they're using the FPGA for the time measurement. However, CERN has developed their own silicon for time measurements: http://tdc.web.cern.ch/tdc/hptdc/hptdc.htm

I would bet they are using that chip interfaced to an FPGA. I've evaluated this chip for my own purposes and I can attest that it is indeed very accurate.

Robert wrote: My question is:



Does CERN take into account the time (clock cycles) lost by the AD converters during start-up at the detectors on the "emission" side? This is a fixed time for all ADCs in their system and is about the size of the discrepancy.



I have seen that they are using Acqiris digitizer boards for the detection. On the CERN side this is probably not a problem, as I assume that the digitizers in a single instrument are identical and the relation between "trigger" and measurement is fixed (hard). In other words, the triggers and the measurement traces are postponed, but their relation is kept.



But unless they are using the same digitizers for the OPERA instrument, a difference is inevitable (and it will always have the same fixed length).



Worst case would be that the OPERA instrument's timing detection has been compensated for this and the CERN side has not.



FYI, I designed the system architecture and electronics of a cosmic particle detector in the Netherlands (HiSparc, a sensor network where nodes are at least 5 km apart), which is also based on GPS, and an airborne SAR radar system (RAMSES) with extreme sync (1.5 ps for clock, trigger, and analog signal) between 1.5 GSPS ADC boards. For both projects I had to take into account the delay caused by the ADCs.



I am available for personal discussion; send me a mail and I will respond.

mnk wrote: Nick, I was talking about uncertainty. Systematic errors are generally the opposite of uncertainty - they are always there; if they were not, they would show up as flukes.

Basically:

1. Systematic errors in deterministic systems are easy to catch and measure in tests

2. Uncertainty is easy to spot in the results as fluctuations.



Knowing 5-6 CERN engineers, I am 99.99% sure the FPGA is fine, with 0.01% uncertainty :)

Robert wrote: I think you are looking at the wrong side of the track. You are writing:



"latencies of the order of 10-100 ns unexpectedly added or subtracted to a baseline thought to be constant could completely or partially explain the OPERA results"



For several reasons:



1. If they added latency, then the time measured for the stream would definitely appear slower than light speed, because the events would be detected much later.



2. I cannot imagine any possible design where you could subtract a latency; the expression "wait state" says enough.



If there is latency, they can use a FIFO (which works as a delay line) and add the used FIFO depth to the result, which they probably do.



3. If what you say is true, then there would be a huge spread of the results, and they would be dismissed as faulty. The experiment would not be repeatable.



4. The problem is that things are not delayed but come earlier than expected.



In order to reconstruct this, the mistake (if there is one) must be on the primary, transmitting side... If you measure at CERN, you get a measuring delay, but the neutrino stream does not listen to this. This measuring delay will shorten the measured time between the primary and secondary parts of the experiment.
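The FIFO-as-delay-line compensation mentioned under point 2 can be sketched in a few lines. This is a minimal model under an assumed discipline (the reader pops exactly one entry per 10 ns clock cycle); the `TimestampFifo` class and its interface are my own illustration, not any known OPERA design:

```python
from collections import deque

# Sketch of a FIFO used as a delay line with compensation. Assumption:
# the reader pops exactly one entry per 10 ns clock cycle, so an entry
# written when the FIFO already holds d items is read d + 1 cycles
# later; subtracting that known delay recovers the arrival time.

CLOCK_NS = 10

class TimestampFifo:
    def __init__(self):
        self.q = deque()

    def write(self, sample):
        # Delay this entry will experience, in cycles, given the
        # one-pop-per-cycle assumption above.
        delay_cycles = len(self.q) + 1
        self.q.append((sample, delay_cycles))

    def read(self, readout_time_ns):
        sample, delay_cycles = self.q.popleft()
        # Compensate: the sample actually arrived this long ago.
        return sample, readout_time_ns - delay_cycles * CLOCK_NS

fifo = TimestampFifo()
fifo.write("hit A")      # FIFO empty -> read 1 cycle (10 ns) later
fifo.write("hit B")      # one entry ahead -> read 2 cycles later
print(fifo.read(100))    # -> ('hit A', 90)
print(fifo.read(110))    # -> ('hit B', 90)
```

The point is that queuing latency is harmless as long as it is accounted for deterministically; only an unaccounted-for delay would bias the result.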



I also agree with Nick. I do not think that CERN engineers would release the FPGA without extensive testing. And as I said before the delay of the ADC's do not pose a problem for CERN as they probably are all the same.

mrb wrote: Tim: correct, I am speculating based on the lack of information about this 100 MHz source. All other time sources are extensively described and double or triple-checked, but this 100 MHz source is a black box that they don't explain.



Robert: I thought I was clear enough, but apparently not :) Look at the green part of the schematic on page 8. This is an estimate of the propagation delay between a target tracker strip and the FPGA. It is estimated to be 59.6 + 25 = 84.6 ns. When the FPGA detects an event at time T, it subtracts 84.6 ns to calculate the actual neutrino arrival time. If an extra 40 ns of accidental latency were measured on this green section during system calibration, then the system would subtract 124.6 ns instead of 84.6 ns while doing this computation during experiments, and neutrinos would appear to arrive 40 ns too early. The paper also explains pretty well that delays in this section make things appear to occur too early.
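The arithmetic can be made explicit (the detection time T = 1000 ns is an arbitrary illustrative value; the 59.6 + 25 ns figures are the ones quoted above from page 8 of the paper):

```python
# The arithmetic behind the miscalibration scenario described above.

true_delay_ns = 59.6 + 25.0        # estimated propagation delay: 84.6 ns
event_time_ns = 1000.0             # illustrative FPGA detection time T

arrival_correct = event_time_ns - true_delay_ns          # 915.4 ns

# If calibration accidentally measured an extra 40 ns on this section:
miscalibrated_delay_ns = true_delay_ns + 40.0            # 124.6 ns
arrival_wrong = event_time_ns - miscalibrated_delay_ns   # 875.4 ns

# The computed arrival time shifts earlier by exactly the excess:
print(round(arrival_correct - arrival_wrong, 6))  # -> 40.0
```

In other words, overestimating a delay that is subtracted from the detection time makes every event appear earlier by the same fixed amount, which is exactly the signature of the anomaly.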

Ryan wrote: lol @ all the posturing, wannabe physicists here. Keep it up losers!

OzoneJunkie wrote: Some more obscure links that may be of interest:



http://dl.dropbox.com/u/13409775/pac2011/WEOAN1.pdf

http://dl.dropbox.com/u/13409775/wrapper.pdf

OzoneJunkie wrote: Oh, and also:



http://www.ohwr.org/projects/cngs-time-transfer/wiki/Wiki

http://www.ohwr.org/documents/111

mrb wrote: OzoneJunkie: thanks! I quickly looked through the docs, at first I don't see anything pertaining to the FPGA platform, but I am going to continue reading...

Helen wrote: This will need to be replicated, using another technique!

Kannan wrote: Haven't they also tested a light beam through the same FPGA? If even then the light is slower, it wouldn't be the FPGA, and the neutrinos would really be fast.

mullerpaulm wrote: Yes indeed, good exploring lads (except for Ryan...hey, are YOU a physicist?). And of course, this needs to be replicated.



Perhaps CERN could figure out how to send a radio signal released at the same time up through GPS down to OPERA's control room, work out the GPS delay each time using GPS to calibrate itself, and with that offset, compare actual arrival times as an independent check. Of course it was no big problem to compute the effective light-distance accurately enough using GPS etc., but this alternative path is also deterministic (I understand) to useful precision with the right equipment. That might have the benefit of eliminating many of the potential errors in the detection and timing systems. In any case, others will repeat the experiment with different baselines, equipment, and techniques.



Meanwhile, let's talk a bit around the general philosophy of Physics, and at that basic level, what this may mean if verified.



If it turns out that neutrinos fly at 1.000025 c(light), we should remember that they are different, in a unique class as particles. The journalistic buzz about time travel and all the rest does not really capture the difficulty for physics. The science might relatively easily be able to accommodate a discrete, different class of particle that travels at this speed.



Neutrinos have a minuscule interaction profile with other matter (witness their passing through 700+ km of solid rock). Light slows down in glass and air - why? Perhaps light in a vacuum is not quite the last word. Particles with Higgs mass (e.g. protons) are in one class and can never reach c(light), per Relativity. Photons (carrying mass in the form of energy related to wavelength) all travel at c(light) in a vacuum. For all of these, c(light) is the limit, and Relativity applies.



But perhaps uniquely, neutrinos (essentially free of any interactions and so able to fly at full speed even through solid rock) are just slightly different, in a special class, and in the end merely show us that c(light) is the speed limit for everything else. Some particles cannot fly at c, photons can and do (always), neutrinos in a completely different class run a bit faster. The presumptive neutrino 'mass' may be a different kind of mass, non-Higgs carrying 'apparent mass' arising from this excess speed, to wit, even that apparent mass might be an artefact of their traveling at 1.000025c(light).



Sure, if verified, this will open up new horizons in Physics, but I would be surprised if it massively overturned the basics relating to the classical, mass-carrying wave/particles of physics.



Paul M Muller (PhD physics).

Aule wrote: I think Mr. Muller has a point. This sounds like an index-of-refraction problem, and it probably will cause no more than a minor sensation, where physics would be forced to restate that the maximum speed possible for any information would be that of neutrinos in vacuo, rather than that of light in vacuo. The fact that the difference in speed is so small seems to be a dead giveaway.

Andrew Casper wrote: I completely agree that this could very well be explained by an error with the FPGA-based DAQ, specifically the 100 MHz clock. I've built a few high-speed data acquisition boards, which were, interestingly enough, based on a 100 MHz system clock. When I ran two of these units off their own clocks, I would develop a time difference of over 60 ns well within 0.6 s. If I was able to run both systems off the same clock, I could achieve alignment on the order of picoseconds. It's just very difficult to believe that multiple FPGAs could remain aligned to within tens of nanoseconds based on a 1.66 Hz pulse from a master clock, while relying on a local clock for timing between pulses.

Matt wrote: Err, no. They don't need this type of accuracy, he said - but it's not that he pulled it out of his sleeve. They worked together with the timing specialists, of course, which is what he said. And I hope you don't really believe you can get that type of kit 'off the shelf' - of course it is custom built! Who else on the planet needs stuff like that?

Thijs wrote: @Mr. Muller: I'm not a physicist, but I did a course on relativity during my math degree, and what baffled me during this course is that the speed of light has nothing to do with light as such. I was shown (and can probably retrieve) a deduction of special relativity based on the following two assumptions:

- Inertial systems moving at constant relative speed must see the same local physics (principle of relativity).

- There is a maximum speed in the universe (let's call it c ;-) )



Special relativity follows from describing what you see happening in the other inertial system. The speed limit does not mention any kind of particle. It turns out that light 'accidentally' travels with exactly this maximum speed (which is calculated as a product of electrical properties of the vacuum).



So, yes, it would be a problem if something turns out to be travelling faster than light (1). This would lead to the conclusion that either one or both of these assumptions is an inaccurate approximation. This would be the first indication that relativity is an approximation of reality, just like the photoelectric effect demonstrated that Newtonian physics was an approximation of reality. It may not be a disaster in the sense that physicists will always be doomed to be working with approximations of reality, but it would turn physics as we know it on its head, just like relativity did.



(1) As far as I know, physicists are still debating whether the collapse of the wave function constitutes 'something', but apparently in relativistic terms it does not constitute 'something' otherwise relativity would have been shaken up before.



PS. No, it cannot be a dispersion effect, because dispersion only slows things down relative to vacuum. No reliable observations exist of dispersion speeding signals up relative to vacuum since this would have caused the same upheaval as the CERN results are doing now.

Colin wrote: I'm a total layman, but sometimes I see relationships.

It seems to me that all of the current "Big" questions in physics and cosmology have some relationship to mass: the recent Higgs results (or lack thereof) from CERN, dark matter/energy, and now this result.

If the Higgs is disproved, there would still have to be some real mechanism that performs its "task", even if we have no idea where to start looking for it. Isn't it just possible that the only neutrinos we CAN see are low-probability special cases with "negative" mass (for want of a vocabulary)?

Which opens the door for "special" exceptions to the mass/force-carrier rules, the Higgs or whatever replaces it?

Just idle speculation on a rainy Thursday in the Great White North.

Be gentle...

mrb wrote: Matt: many scientists do need that. In fact, off-the-shelf devices using a standard protocol are being developed exactly for this case of applications that cannot have a GPS receiver at each node and where the precision of NTP is insufficient: http://en.wikipedia.org/wiki/Precision_Time_Protocol



Also, the IPN Lyon itself, who I recently learned developed the timing devices for OPERA, said they are looking to use PTP for future projects instead of implementing custom proprietary solutions like the one at OPERA.

Andrew Casper wrote: I ran a quick test to look at the accuracy of the experiment’s described clocking scheme. I created a 100 MHz counter that is reset every 600 ms by an external 10 ns pulse; the value of the counter at reset was offloaded to a computer for analysis. To gain some insight into how sensitive this setup is to environmental variables, I placed a small desktop fan near the FPGA and turned it on partway through the data collection. You can look at the results here: http://uploadingit.com/file/tau1jxj9ox8aktuo/large_clockingTest.png

You can see that the setup had an initial error of over 60 clock cycles and changed by over 50 clock cycles when the fan started blowing. Of course this is all dependent on the type of crystal you use to generate your clock...

It would be relatively easy to have the FPGA constantly count the number of clock cycles between pulses and adjust some external heater/cooler to keep the oscillator at the correct frequency. Also, the magnitude of any error will be highly dependent on where in the 600 ms period between resets the experiment took place. They mention the total time of flight was less than 3 ms; if the generation of the neutrinos was initiated by the master reset pulse, then the system would have very little time to accumulate significant error.
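A toy model of this drift (the 1 ppm frequency offset is an assumed figure typical of a crystal oscillator, not a measured OPERA value; the fan in the test above effectively changed this offset mid-run):

```python
# A free-running 100 MHz oscillator that is off by some ppm accumulates
# timestamp error until the 600 ms reset pulse zeroes the counter.

def cycles_of_error(ppm_offset: float, seconds_since_reset: float,
                    clock_hz: float = 100e6) -> float:
    """Counter error, in clock cycles, accumulated since the last reset."""
    return clock_hz * seconds_since_reset * ppm_offset * 1e-6

# Just before the 600 ms reset, a 1 ppm frequency error gives:
print(cycles_of_error(1.0, 0.6))  # ~60 cycles, i.e. ~600 ns
```

This is consistent with the ~60-cycle error in the test above, and with why an event occurring shortly after the reset pulse would see far less accumulated error than one occurring late in the 600 ms window.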

I think the point of all this is that without more information on how the timing was controlled on the FPGA it’s impossible to know if the timing scheme was correct. It’s certainly possible to use the described setup, with appropriate controls, to realize the needed accuracy. But it’s also possible to mess it up. I would, however, be willing to give them the benefit of the doubt...

mrb wrote: Andrew: cool demonstration of the "1 part in 1e6" accuracy of an FPGA's internal oscillator, thanks.

Dr Peter Gangli wrote: It would make sense to consider the effects of Heisenberg Uncertainty. Careful measurements can be made, numbers will be (have been) obtained, yet the location [x], and the momentum [p] of any particle can only be determined with the uncertainty defined by Heisenberg Uncertainty relationship.



Δp Δq ≥ ℏ/2



Relativity and quantum mechanics describe the same world but from the viewpoint of two different universes. The inherent theoretical uncertainty of the EXACT location [at a given time] and the EXACT velocity of any measured neutrino could have and should have been treated and considered.



Why not? The paper shows only time of flight scatter charts!

David wrote: I'd guess, if the FPGA designers weren't complete idiots, they'd timestamp data *before* they put them into a latency-suffering RAM, so no, caching and RAM latencies shouldn't lead to observable timestamp offsets. The 100 MHz clock also isn't likely a source of error. Either the 100MHz clock is locked to a much more accurate reference clock, or they have different clock domains for sampling and data processing, where the timestamping is done in the clock domain that is synchronous to the accurate reference clock.



Maybe the FPGA designers just didn't account for (all) their data processing pipeline's latencies, and so have a constant N*clock offset WRT timestamp. At a 100 MHz clock rate, 60 ns is 6 cycles, so that might be possible. Are they really only clocking at 100 MHz? But no, those errors would be easily spotted when comparing different detectors' signals.

BozoQed wrote: "OPERA's observation of a similar time delay with a different beam structure only indicates no problem with the batch structure of the beam, it doesn't help to understand whether there is a systematic delay which has been overlooked," said Jenny Thomas, co-spokesman for the Chicago-based lab's own neutrino experiment, MINOS.