Fun with Chi-Squared

As mentioned before, the Chi-Squared inverse cumulative distribution is used when calculating confidence intervals in allantools.

When comparing computed confidence intervals against Stable32, there seem to be some systematic offsets, which I wanted to investigate.

While allantools uses scipy.stats.chi2.ppf(), it turns out that Stable32 uses an iterative method for EDF<=100, and an approximation from Abramowitz and Stegun: Handbook of Mathematical Functions for EDF>100. This 1964 book seems to be available online from many sites. The approximation (for large EDF) for the inverse chi-squared cumulative distribution is equation 26.4.17 (p410 in the PDF), where nu is the equivalent degrees of freedom:

Abramowitz and Stegun, inverse chi-squared cumulative distribution, for large EDF.

Here xp comes from the inverse of the Normal cumulative distribution, computed via approximation 26.2.22 (p404 in the PDF):

Abramowitz & Stegun, approximate inverse Normal cumulative distribution, for 0<p<0.5
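As a sketch, the two approximations could be implemented as below. The coefficients are those printed in A&S 26.2.22 and 26.4.17; the function names and the upper-tail-probability convention are my own choices here, not the Stable32 source:

```python
import math

def inv_normal_upper(p):
    """A&S 26.2.22: approximate x_p with upper-tail probability
    Q(x_p) = p, valid for 0 < p <= 0.5, |error| < 3e-3."""
    t = math.sqrt(math.log(1.0 / (p * p)))
    return t - (2.30753 + 0.27061 * t) / (1.0 + 0.99229 * t + 0.04481 * t * t)

def inv_chi2_upper(p, nu):
    """A&S 26.4.17: chi-squared value with upper-tail probability p,
    for nu (equivalent) degrees of freedom; accurate for large nu."""
    # mirror 26.2.22 around p = 0.5 to cover the full range
    xp = inv_normal_upper(p) if p <= 0.5 else -inv_normal_upper(1.0 - p)
    a = 2.0 / (9.0 * nu)
    return nu * (1.0 - a + xp * math.sqrt(a)) ** 3
```

For example inv_chi2_upper(0.05, 100) gives roughly 124.3, close to the exact value 124.342 from scipy.stats.chi2.isf(0.05, 100).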

It also looks like the Stable32 source contains a misprint, where the a1 coefficient of 26.2.22 is used with two digits in the wrong order! (I did promise 'fun' in the title of this post!)

If we plot both scipy.stats.chi2.ppf() and the Stable32 implementation for various EDF and probabilities we get the following:

The leftmost figure shows Chi-Squared values (divided by EDF, to overlap all curves). There's not much difference between scipy and Stable32 visible at this scale. The middle figure shows the difference between the two algorithms. This shows very clearly the small-EDF (<=100) region where the iterative algorithm in Stable32 produces Chi-squared values in agreement with scipy to better than 4 digits (see the noisy traces around zero). This plot also reveals the shortcomings of the Abramowitz & Stegun approximation, together with the source-code misprint. The errors are largest, up to almost 1e-3 (in chi-squared/EDF), for EDF=101, and decrease with increasing EDF. The rightmost figure shows what impact this has on computed confidence intervals - shown as relative errors. The confidence interval is proportional to sqrt(EDF/chi-squared). The dashed lines show the probabilities corresponding to 1-sigma confidence intervals, where we usually sample this function.
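A minimal sketch of how such a 1-sigma interval can be computed from a deviation estimate and its EDF, using the large-EDF approximation 26.4.17 but with an exact inverse-Normal from the Python standard library (the function names and tail conventions are my assumptions, not the allantools or Stable32 API):

```python
import math
from statistics import NormalDist

def chi2_inv(p, nu):
    """Inverse chi-squared CDF (lower-tail probability p) via the
    large-nu approximation A&S 26.4.17 (Wilson-Hilferty), with an
    exact inverse-Normal instead of the A&S 26.2.22 approximation."""
    xp = NormalDist().inv_cdf(p)
    a = 2.0 / (9.0 * nu)
    return nu * (1.0 - a + xp * math.sqrt(a)) ** 3

def dev_ci(dev, edf, ci=0.6827):
    """Confidence interval for a deviation: bounds scale as
    sqrt(EDF / chi-squared), with (1-ci)/2 probability in each tail."""
    p_tail = (1.0 - ci) / 2.0
    lower = dev * math.sqrt(edf / chi2_inv(1.0 - p_tail, edf))
    upper = dev * math.sqrt(edf / chi2_inv(p_tail, edf))
    return lower, upper
```

For a deviation of 1.0 with EDF=100 this gives bounds of roughly (0.94, 1.08), i.e. the interval is asymmetric around the estimate, as expected from the chi-squared distribution.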

For 1-sigma confidence intervals we can now compare results computed with Stable32 and allantools against the prediction from the scipy vs. Abramowitz & Stegun + misprint comparison above.

Predicted and computed relative error for confidence intervals computed with Stable32 vs. allantools. The lower bound is shown in red, and the upper bound in blue. The relative error of the computed ADEV is shown as black symbols, for reference. The symbols are from computed deviations and confidence intervals for two test-datasets in allantools: phase.dat and a 5071a-dataset.

Are we having fun yet!? The comparison above between the two methods to compute the inverse chi-squared cumulative distribution seems to predict quite well how Stable32 confidence intervals differ from allantools confidence intervals! Note that Stable32 results are copy/pasted from the GUI with 5 significant digits, so some noise at the 1e-4 level is to be expected. Note also that the relative error has a different sign depending on whether we look at the lower or upper bound, and on whether the EDF<=100 iterative algorithm or the EDF>100 Abramowitz & Stegun + misprint approximation is used. For large EDF the upper bound (blue) from Stable32 is a bit too low, while the lower bound (red) is a bit too high.

Finally, the chi-squared figure and the expected confidence interval relative error figure, if we fix the misprint in the a1 coefficient:

Note that with the a1-coefficient misprint fixed, the approximation is exact at p=0.5, and there is no discontinuity in the difference-curves.
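To illustrate why the a1 coefficient matters at p=0.5: the 26.2.22 approximation should return x_p = 0 there, and any perturbation of a1 shifts that value away from zero, producing a discontinuity when the formula is mirrored to p>0.5. The actual misprinted value in the Stable32 source is not quoted here, so the digit swap below is purely hypothetical:

```python
import math

def xp_2622(p, a1):
    # A&S 26.2.22 with the a1 coefficient exposed as a parameter
    t = math.sqrt(math.log(1.0 / (p * p)))
    return t - (2.30753 + a1 * t) / (1.0 + 0.99229 * t + 0.04481 * t * t)

# correct coefficient: essentially zero at p = 0.5
print(xp_2622(0.5, 0.27061))
# hypothetical digit swap: a visible offset at p = 0.5, hence a
# discontinuity when mirroring the approximation to p > 0.5
print(xp_2622(0.5, 0.27601))
```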

If the a1-coefficient misprint were fixed in Stable32, we could expect the EDF>100 confidence interval relative errors between Stable32 and allantools to be roughly halved. The data-symbols are the same as in the figure above - agreement between the lines and symbols is not expected here.

Python code for producing these figures is available: chi2_stable32.py

The approximations used by Stable32 are not (yet?) included in allantools.

SFP-board in a box

Update: first production batch being tested in the lab 2020-12-17:

By popular demand, I've worked with Aivon to put my simple 'SFP2SMA' board (https://github.com/aewallin/SFP2SMA_2018.03) in a box, using an external Meanwell (+12V, -12V, +5V) PSU with a 5-pin DIN8 plug (model GP25A13D-R1B, Digikey 1866-1826-ND). The board can be used for simple frequency-transfer experiments together with almost any SFP-transceiver. The TX SMA-connector is an input, and applies the waveform to the TX-pins of the SFP, producing modulated optical output. The dual RX SMA-connectors are driven by the SFP RX-pins, with some gain applied to produce around +10 dBm square-wave into a 50R load.

The front panel has three SFP status-LEDs: TX-fault (I've never seen this active, but apparently the SFP should report if the TX-laser fails), RX LOS (loss of signal, active if you unplug the fiber), and mod_abs (module absent, active if no SFP is plugged in).

After some adjustments to gains and component values the frequency-response is flat within 3 dB out to ca 400 MHz, and the ADEV looks reasonable at ca 1e-13/tau(s) (red datapoints in the plot below). These measurements were done with a CWDM SFP and a 2m single-mode loopback-fiber.

The box is a Hammond 1455L1201 (120 mm x 103 mm x 31 mm), with custom front and back panels.

400+ MHz photodetector with OPA818

One-Inch-Photodetector in Thorlabs 1" mount. LT3093 negative-rail voltage regulator at the top, and LT3042 positive rail voltage regulator bottom right. OPA818 op-amp bottom center, with BUF602 output-driver bottom left.

After a number of failed attempts (with HMC799, OPA859), here is a reasonably fast One Inch Photodetector using OPA818, a FDS015 photodiode, and 1.2 kOhm transimpedance. Bandwidth is above 400 MHz, with the dark-noise and frequency-response in reasonable agreement with TIASim predictions.

With the detector blocked there's quite a lot of electrical feed-through with just the spectrum-analyzer TG-output on (see blue data points, especially above 1 GHz). I tried to correct for this roughly, shown as the orange data points.