NTP time measurement


Here's a plot of the time error between standard Unix system time, disciplined by NTP, and a much more accurate White Rabbit based PTP server that runs on a fancy FPGA-based network card.

Note that without NTP a typical computer clock will be off by 10 ppm (parts per million) or more. This particular one measured about 40 ppm of error in free-running mode (no NTP). That means over the 16e4 s duration of this measurement we'd drift by about 6.4 seconds (way off the chart) without NTP. With NTP the error seems to stay within 3 milliseconds or so. The -16 millisecond offset is not measured very accurately and could be caused by a number of things.
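The free-running drift estimate above is just a ppm-times-duration calculation; a minimal sketch (the 40 ppm and 16e4 s figures are the ones quoted above):

```python
# Accumulated time error of a free-running clock with a constant
# frequency offset: error = fractional_offset * elapsed_time.
drift_ppm = 40.0        # measured free-running frequency error
duration_s = 16e4       # length of the measurement

error_s = drift_ppm * 1e-6 * duration_s
print(f"accumulated error: {error_s:.1f} s")  # 6.4 s
```

For comparison, the 10 ppm "typical" clock would accumulate 1.6 s over the same interval.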


A White Rabbit test

Update3: Here's what happens if you disconnect the master from the switch. The slave clock runs off on its own, drifting about 5 ppm relative to the reference clock. Once the fiber is reconnected it takes a few seconds to re-sync and lock onto the master clock.


Update2: two different measurements, on the left with a short 2 m fiber, and on the right with a few hundred meters of fiber to a WR switch, and a few hundred meters back.
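The longer link adds a significant one-way propagation delay that White Rabbit has to measure and compensate. A rough sketch of its size, assuming a 300 m run and a typical group index of ~1.47 for single-mode fiber (both numbers are assumptions for illustration, not from the measurement):

```python
# One-way light propagation delay in optical fiber:
# delay = length * n / c, roughly 5 ns per meter of fiber.
C = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.47            # assumed group index of single-mode fiber
length_m = 300.0    # assumed one-way fiber length

delay_ns = length_m * n / C * 1e9
print(f"one-way delay: {delay_ns:.0f} ns")  # ~1470 ns
```

So a few hundred meters of fiber means microsecond-scale link delays, which is why sub-nanosecond synchronization requires the link-delay calibration that White Rabbit performs.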


Update: an improved measurement now shows some promise:


Testing White Rabbit at work. These are fancy network cards connected by optical fiber, which allows synchronization between the cards to better than 1 nanosecond. My first results are a bit strange:


This is in "grandmaster" mode, where we feed a 1 PPS and a 10 MHz reference signal into one of the cards:


A second result in "free-running" master mode: