Tag: python

scipy.stats.chi2.ppf() without scipy

Allantools has a dependency on scipy because it uses scipy.stats.chi2.ppf() (the inverse of the chi-squared cumulative distribution function) in the code for confidence intervals.

When testing, scipy takes quite a long time to install, and the whole package seems a bit overkill for just this one function.

So I tried implementing it using just numpy. It sort of works, but not for corner cases where p is very close to 1.0 or k is a large number.
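For reference, here is a pure-Python sketch of the approach (a series expansion for the regularized lower incomplete gamma, then bisection to invert the CDF; helper names, tolerances, and iteration limits are my choices):

    import math

    def lower_gamma(s, x, max_iter=10000, eps=1e-16):
        # regularized lower incomplete gamma P(s, x) via the series
        # P(s, x) = x^s e^-x / Gamma(s) * sum_k x^k / (s*(s+1)*...*(s+k))
        # the series converges slowly for large x - exactly the corner
        # cases mentioned above
        if x <= 0.0:
            return 0.0
        total, term = 0.0, 1.0 / s
        for k in range(max_iter):
            total += term
            if abs(term) < eps * abs(total):
                break
            term *= x / (s + k + 1)
        return total * math.exp(s * math.log(x) - x - math.lgamma(s))

    def chi2_ppf(p, k):
        # chi-squared CDF is F(x; k) = P(k/2, x/2); invert it by
        # bracketing the root and bisecting
        cdf = lambda x: lower_gamma(k / 2.0, x / 2.0)
        hi = 1.0
        while cdf(hi) < p:  # bracket the root
            hi *= 2.0
        lo = 0.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # compare against scipy.stats.chi2.ppf(0.9, 10)
    print(chi2_ppf(0.9, 10))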

Maybe there's some trick I am missing when numerically evaluating lower_gamma() with the series expansion...

Leap second time-stamps for leap-seconds.list

This function generates time-stamps in the format used in leap-seconds.list (found on e.g. NTP or PTP servers). The format is integer seconds since 1900-01-01T00:00:00+00:00.
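A minimal sketch of such a function (the function name is mine; both datetime arithmetic and the NTP timescale ignore leap seconds, so a plain subtraction works):

    import datetime

    def ntp_timestamp(d):
        # integer seconds since the NTP epoch 1900-01-01T00:00:00
        ntp_epoch = datetime.datetime(1900, 1, 1)
        return int((d - ntp_epoch).total_seconds())

    # the leap second of 2015-07-01, listed as 3644697600 in leap-seconds.list
    print(ntp_timestamp(datetime.datetime(2015, 7, 1)))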

Clock display with Python and Tkinter

[figure: clock_display_2015-04-24]

A simple clock display with local time, UTC, date (ISO 8601 format, of course!), MJD, day-of-year, and week number. Useful as a permanent info-display in the lab - just make sure your machines are synced with NTP.
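The display itself is just a Tk label redrawn once per second with after(); a minimal sketch (Python 3 tkinter, showing only UTC and the date):

    import datetime
    import tkinter as tk

    root = tk.Tk()
    root.title("lab clock")
    label = tk.Label(root, font=("Helvetica", 48), fg="green", bg="black")
    label.pack()

    def update():
        utc = datetime.datetime.utcnow()
        label.config(text=utc.strftime("%H:%M:%S UTC\n%Y-%m-%d"))
        root.after(1000, update)  # redraw once per second

    update()
    root.mainloop()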

The clock display uses two small additions to jdutil:
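They look roughly like this (names here are mine, the real ones live in jdutil; the MJD epoch is 1858-11-17T00:00:00):

    import datetime

    def mjd(dt):
        # Modified Julian Date: days since the MJD epoch 1858-11-17T00:00:00
        mjd_epoch = datetime.datetime(1858, 11, 17)
        return (dt - mjd_epoch).total_seconds() / 86400.0

    def day_of_year(dt):
        return dt.timetuple().tm_yday

    now = datetime.datetime.utcnow()
    print("MJD %.5f  DOY %d  week %d" % (mjd(now), day_of_year(now),
                                         now.isocalendar()[1]))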

Faster allantools with numpy

Update: these slopes are both linear, or O(n), where n is the input data size.

[figure: allantools_adev_bench_linear]

However for the worst performing algorithm, MTIE, an O(n*log(n)) algorithm that uses a binary tree to store intermediate results is possible. This algorithm restricts the tau-values to 2**k (with k integer) times the data interval.
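A sketch of the idea (iterative pairwise max/min combination, equivalent to the binary tree; this is not the exact allantools code):

    import numpy as np

    def mtie_pow2(phase):
        # MTIE at tau = 1, 2, 4, ... sample intervals in O(n*log(n)).
        # mx[i]/mn[i] hold the max/min of a phase window spanning tau
        # intervals; a window of 2*tau intervals is the union of two
        # tau-windows offset by tau, so each doubling is one vectorized pass.
        mx = np.asarray(phase, dtype=float)
        mn = mx.copy()
        mx = np.maximum(mx[:-1], mx[1:])  # windows spanning one interval
        mn = np.minimum(mn[:-1], mn[1:])
        tau, taus, devs = 1, [], []
        while True:
            taus.append(tau)
            devs.append(float(np.max(mx - mn)))
            if 2 * tau >= len(phase):
                break
            mx = np.maximum(mx[:-tau], mx[tau:])
            mn = np.minimum(mn[:-tau], mn[tau:])
            tau *= 2
        return taus, devs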

[figure: allantools_numpy_bench_2014-08-31]

Danny Price has made a fantastic contribution to allantools by taking my previous pure-Python code (hacked together in January 2014, shown in red) and numpyifying it all (shown in blue)!

For large datasets the speedups are 100-fold or more in most cases!
The speedups are calculated only for datasets larger than 1e5 points, since python's time.time() doesn't seem suitable for measuring running times of 1 ms or shorter.
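To give a flavour of the numpyification: the Allan deviation at tau = m*tau0 can be computed from phase data with a few array slices instead of a python for-loop (a sketch, not the exact allantools code):

    import numpy as np

    def adev(phase, rate=1.0, m=1):
        # overlapping Allan deviation at tau = m/rate from phase data x:
        # sigma_y^2(tau) = < (x[i+2m] - 2*x[i+m] + x[i])^2 > / (2*tau^2)
        x = np.asarray(phase, dtype=float)
        tau = m / float(rate)
        d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
        return np.sqrt(np.mean(d ** 2) / (2.0 * tau ** 2))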

ZeroMQ REQuest/REPly pattern

More zeromq testing. See also PUB/SUB.

This pattern is for a request/reply situation where one machine asks for data from another. The pattern enforces an exchange of messages like this:

REQuester always first sends, then receives.

REPlyer always first receives, then sends.

Requester:
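A minimal sketch with pyzmq (port number and message contents are my choices; the REPlyer below answers with its own time-stamp):

    import time
    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    socket.connect("tcp://localhost:5555")

    t_send = time.time()
    socket.send_string("what time is it?")  # REQ always sends first...
    reply = socket.recv_string()            # ...then receives
    t_recv = time.time()

    t_remote = float(reply)  # the REPlyer's time-stamp
    print("one-way delay %.3f ms" % (1000.0 * (t_remote - t_send)))
    print("two-way delay %.3f ms" % (1000.0 * (t_recv - t_send)))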

The one-way delay here assumes we have machines with synchronized clocks. This might be true to within 1ms or so if both machines are close to the same NTP server.

The two-way delay should be more accurate, since we subtract two time-stamps that both come from the REQuester's system clock. Much better performance can probably be achieved by switching to C or C++.

Replyer:
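The matching sketch (bind, then receive/send in a loop):

    import time
    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:5555")

    while True:
        msg = socket.recv_string()                # REP always receives first...
        socket.send_string("%.6f" % time.time())  # ...then sends its time-stamp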

ZeroMQ PUB/SUB pattern

I'm playing with zeromq for simple network communication/control. For a nice introduction with diagrams, see Nicholas Piel's post "ZeroMQ an introduction".

Here is the PUBlish/SUBscribe pattern, which usually consists of one Publisher and one or many Subscribers.

Publisher:
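A minimal pyzmq sketch (port and topic are my choices):

    import time
    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.bind("tcp://*:5556")

    while True:
        # the leading "time" word doubles as the topic SUBscribers filter on
        socket.send_string("time %.6f" % time.time())
        time.sleep(1)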

And here is the Subscriber:
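Also a sketch; note that a SUBscriber must set a subscription filter (here the "time" prefix) or it will receive nothing:

    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://localhost:5556")
    socket.setsockopt_string(zmq.SUBSCRIBE, "time")

    while True:
        print(socket.recv_string())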

allantools

Python code for calculating Allan deviation and related statistics: https://github.com/aewallin/allantools

Sample output:
[figure: allantools_output]

Cheat-sheet (see http://en.wikipedia.org/wiki/Colors_of_noise).

Phase PSD                       Frequency PSD                   ADEV
1/f^4, "black"(?)               1/f^2, Brownian, random walk    sqrt(tau)
1/f^3(?), "black"(?)            1/f, pink                       constant
1/f^2, Brownian, random walk    f^0, white                      1/sqrt(tau)
f^0, white                      f^2, violet                     1/tau

Phase PSD is the frequency dependence of the power-spectral-density of the phase noise.
Frequency PSD is the frequency dependence of the power-spectral-density of the frequency noise.
ADEV is the tau-dependence of the Allan deviation.

These can get confusing! Please comment below if I made an error!

NTP failure detection

Update: Here's what the graph looks like when NTP traffic is enabled again:
[figure: ntp]

A computer that doesn't receive NTP traffic for a while will have its system time drift quite a lot.

In the absence of NTP (UDP port 123) traffic, we can try to roughly ask for the current time over HTTP using wget from google with this shell script:
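Something along these lines (exact wget flags reconstructed from memory):

    wget -S --spider http://www.google.com 2>&1 | grep Date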

This outputs a string such as "Sat, 23 Nov 2013 09:14:45 GMT"

Now we can put together a python script that calls this shell script, converts the text-format time-stamp into UTC seconds, and compares against the system time. For plotting we then store the time-error in an RRDTool database. This script is called once per minute using cron.
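A sketch of that glue script (the shell-script name and the rrd data-source name are mine):

    import subprocess
    import time
    import calendar
    import rrdtool  # the python-rrdtool bindings

    # google_time.sh is the wget one-liner above; it prints a line
    # containing e.g. "Date: Sat, 23 Nov 2013 09:14:45 GMT"
    out = subprocess.check_output(["./google_time.sh"]).decode()
    date_str = out.splitlines()[0].split("Date: ")[-1].strip()
    google_time = calendar.timegm(
        time.strptime(date_str, "%a, %d %b %Y %H:%M:%S %Z"))
    error = time.time() - google_time  # positive if system clock runs ahead
    rrdtool.update("time_error.rrd", "N:%f" % error)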

Once we have all the values in our time_error.rrd database we can plot them with rrdtool graph. This is what I get:
[figure: system_vs_google_time]
There is about -4 seconds of drift during 24 hours, or 46 us/s (46 ppm). The total offset is about 14 s, so if the drift is steady we can guess that the computer was on time 14/4 ≈ 4 days ago.

The script for creating the rrdtool database is this:
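Roughly like this (the data-source name and retention periods are my choices):

    import rrdtool

    rrdtool.create("time_error.rrd",
                   "--step", "60",              # one sample per minute, from cron
                   "DS:terror:GAUGE:120:U:U",   # time error in seconds, no limits
                   "RRA:AVERAGE:0.5:1:1440",    # 1-minute points for one day
                   "RRA:AVERAGE:0.5:60:720",    # 1-hour averages for 30 days
                   "RRA:AVERAGE:0.5:1440:365")  # 1-day averages for one year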

And the graph is created by this script. I am using the simple python-rrdtool bindings - the object-oriented PyRRD package may have neater syntax and be more pythonic.
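Along these lines (title and colors are mine):

    import rrdtool

    rrdtool.graph("system_vs_google_time.png",
                  "--start", "-24h",
                  "--title", "system time minus google time",
                  "--vertical-label", "seconds",
                  "DEF:terror=time_error.rrd:terror:AVERAGE",
                  "LINE2:terror#FF0000:time error")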

GPIB on Linux

Update 2015 June: additions for Ubuntu 14.04LTS (where a symbolic link is required), and for the NI-USB-B dongle (which needs firmware loading). See also: Using the National Instruments GPIB-USB-B on linux

Notes on how to use GPIB from python on Ubuntu 12.04LTS. Tested with a National Instruments USB-GPIB dongle and a recently bought NI GPIB PCI-E card.

  • Get linux-gpib source from http://sourceforge.net/projects/linux-gpib/
  • Building the python bindings requires libboost-python (and maybe libboost-python-dev)
  • build linux-gpib by running: ./configure, make, sudo make install
  • edit /etc/gpib.conf to fit your hardware. The default config board_type = "ni_pci" seems to work with the NI PCI-E card. For the USB-GPIB device we need board_type = "ni_usb_b"
  • load the kernel module, for the PCI-E card this is sudo modprobe tnt4882, for the USB-dongle this is sudo modprobe ni_usb_gpib
  • On Ubuntu 14.04LTS the library that gpib_config is linked against is installed in a location where the binary doesn't find it. The fix is to add a symbolic link in /lib/
    /lib$ sudo ln -s /usr/local/lib/libgpib.so.0 libgpib.so.0
  • configure: sudo gpib_config --minor 0. This reads and parses /etc/gpib.conf, so it needs to be re-run if anything is changed in gpib.conf
  • If gpib_config fails, and you have the older/slower NI-USB-B dongle, then you need to first load the firmware. This seems to not be required with the newer NI-USB-HS (to be confirmed?). Firmware loading requires fxload and the firmware from http://linux-gpib.sourceforge.net/firmware/. Find out where your USB-dongle is with lsusb and then load the firmware with something like:
    fxload -D /proc/bus/usb/001/002 -I niusbb_firmware.hex -s niusbb_loader.hex where you replace /001/002 with the numbers for your USB device. Now gpib_config should work.
  • By default only members of the gpib group can use gpib, and only root has access to /dev/gpib0. If we want normal users to use gpib, add them to the gpib group (on Ubuntu the graphical user & group editor is gnome-system-tools)
  • Test!
    note: If "import gpib" fails in python you might have forgotten to install libboost-python before building linux-gpib. Python bindings will not be built&installed unless you have libboost-python.

I have two devices, so I've configured them like this in gpib.conf:
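Something like this (device names and GPIB addresses are made-up examples of the linux-gpib config format; my real ones differ):

    interface {
        minor = 0
        board_type = "ni_usb_b"
        name = "gpib0"
        pad = 0
        master = yes
    }

    device {
        minor = 0
        name = "counter"
        pad = 10
    }

    device {
        minor = 0
        name = "dmm"
        pad = 22
    }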

And now we can try some python:
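A minimal sketch with the linux-gpib bindings ("dmm" is one of the device names from gpib.conf above; the *IDN? query assumes a SCPI instrument):

    import gpib

    dev = gpib.find("dmm")    # look up the device by its gpib.conf name
    gpib.write(dev, "*IDN?")  # ask a SCPI instrument to identify itself
    print(gpib.read(dev, 100))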

Jämsä-Jukola statistics

Here's the distribution of total time for teams in the Jämsä-Jukola relay:

[figure: Jukola 2013_fig]

Here are the distributions of individual times for the seven legs of the relay.

[figure: Jukola 2013, 1st leg, 12.2 km_fig]

The histogram from the first leg shows bunching of runners into groups - this is different from all the remaining legs.

[figures: Jukola 2013, 2nd leg, 13.0 km_fig; Jukola 2013, 3rd leg, 14.4 km_fig]
The 2nd and 3rd legs look quite similar. Note how the histograms lean to the left - there are many amateurs of varying ability who contribute to the long tail to the right.

[figures: Jukola 2013, 4th leg, 7.8 km_fig; Jukola 2013, 5th leg, 7.7 km_fig]

The 4th and 5th legs are shorter and faster.

[figures: Jukola 2013, 6th leg, 11.7 km_fig; Jukola 2013, 7th leg, 15.1 km_fig]
The 6th leg is again longer with the final 7th leg being the longest of them all.

Scraping scripts (they require minor modifications to the downloaded results html file): jukola_scrape
