Tim's blag

Sunday 25 November 2018

Reading out TP-Link HS110 on Linux/Raspberry Pi

I bought a TP-Link HS110 version 3.0 smart Wi-Fi plug for 30 EUR to read out the power usage of some appliances from a Raspberry Pi 3B+.

Device setup

Here we connect the HS110 to our existing Wi-Fi network without using the TP-Link Kasa app and without the plug ever contacting the TP-Link cloud servers.

  1. Plug the HS110 into a wall socket; the Wi-Fi symbol will blink green-amber.
  2. Get the Python client from https://github.com/softScheck/tplink-smartplug
  3. Connect to the HS110's access point TP-LINK_Smart Plug_XXXX.
    1. Get initial device information: ./tplink_smartplug.py -t 192.168.0.1 -c info
  4. Disable cloud access:
    1. Make devs.tplinkcloud.com resolve to 127.0.0.1 (or similar) on your network
    2. Point the plug at the new cloud server: ./tplink_smartplug.py -t 192.168.0.1 -j '{"cnCloud":{"set_server_url":{"server":"xs4all.nl"}}}'. N.B. err_code:0 means no error.
  5. Bind to your existing Wi-Fi network: ./tplink_smartplug.py -t 192.168.0.1 -j '{"netif":{"set_stainfo":{"ssid":"WiFi","password":"123","key_type":3}}}'. N.B. This command is only trivially obfuscated, not encrypted, so it leaks your SSID and password to anyone listening. There is no way around this, as the official Kasa app uses the same protocol.
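How to do step 4.1 depends on your network. If the Pi (or your router) happens to run dnsmasq as the LAN's DNS resolver (an assumption; any resolver you control works), a single override line suffices:

```
# /etc/dnsmasq.d/tplink.conf -- assumes dnsmasq serves DNS for your LAN
# Answer 127.0.0.1 for the TP-Link cloud hostname so the plug can
# never reach the real cloud server
address=/devs.tplinkcloud.com/127.0.0.1
```

Restart dnsmasq afterwards for the override to take effect.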

Device read out

Now the device is connected to our Wi-Fi network and we can use it as a measurement device.

  1. Find out the new IP
    1. On your router, look at the list of connected devices and write down the IP
    2. Ping the broadcast address to find the IPs of all devices that respond to ping, e.g. ping -b 192.168.0.255 on Linux; the TP-Link HS110 will respond
  2. Read out device: ./tplink_smartplug.py -t 172.16.0.134 -c energy
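On the Pi, the readout can then be turned into a simple logger. A sketch (the "Received:" output prefix and the power_mw/power field names are assumptions about the softScheck script and the HS110 firmware revision; check the raw output of -c energy on your device):

```shell
# Log one timestamped HS110 power reading to a CSV file (sketch).
PLUG_IP=172.16.0.134
LOGFILE=hs110_power.csv

# Pull the power value out of the JSON reply on stdin; newer hardware
# is said to report milliwatts as "power_mw", v1 watts as "power".
extract_power() {
  python3 -c 'import sys, json
d = json.load(sys.stdin)["emeter"]["get_realtime"]
print(d.get("power_mw", d.get("power", 0)))'
}

log_reading() {
  reply=$(./tplink_smartplug.py -t "$PLUG_IP" -c energy | sed -n "s/^Received: *//p")
  printf '%s,%s\n' "$(date)" "$(printf '%s' "$reply" | extract_power)" >> "$LOGFILE"
}

# Call log_reading from cron, e.g. once a minute:
# * * * * * cd /home/pi/tplink-smartplug && ./hs110_log.sh
```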

Accuracy

Accuracy is comparable to the Brennenstuhl PM 231E offline power meter.

15W incandescent bulb

  • Reference measurement: 15.0±0.2W (average-by-eye)
  • TP-Link measurement: 15.03±0.09W (average of 10 measurements)

1300W heating iron

  • Reference measurement: 1340±5W (average-by-eye)
  • TP-Link measurement: 1342±5W (average of 14 measurements)

N.B. the heating iron's load drops 20W over the timeframe of a minute, such that these uncertainties are upper limits.

Power usage

The TP-Link HS110 itself draws 4.3±0.2 W (as measured by the Brennenstuhl PM 231E, which has a reported accuracy of 1% or 0.2 W).

Speed

The plug starts up in about 8 seconds, and then takes about 3 seconds to join the Wi-Fi network.

Alternatives

  1. Elgato Eve Energy: works over Bluetooth (vs Wi-Fi) and is compatible with HomeKit. It could work on Linux via Homebridge, but that sounded like more trouble than reading out over Wi-Fi/HTTP.
  2. Fibaro Wall Plug: works over Z-Wave (vs Wi-Fi) and requires an additional Z-Wave dongle, which I don't have. It's also more expensive than the TP-Link. With the dongle it could work on Linux, e.g. via this OpenHAB post or this YouTube tutorial.

Sunday 24 September 2017

Using PAR to protect against bit rot

Since OS X does not offer native protection against bit rot (unlike, for example, ZFS or other mature file systems), I've started using PAR2 to manually protect my photos. To do this, I've written some scripts that I'm documenting here for myself and others.

The process seems to work; however, it's rather laborious. Some optimization might be useful/necessary.

Create PAR files for one directory

# Directory to create PAR for
PICDIR=./

# Directory containing metadata (e.g. log files) and par2 executable
MAINTDIR=/Users/tim/Pictures/maintenance/

# Go to directory, e.g. '20051203_vacation_trip'
cd $PICDIR
# Extract the directory name (which starts with the year)
year=${PWD##*/}
# Base PAR file on year of the directory
parfile=${year}0000.par2
parlogf=${MAINTDIR}/${year}0000.par2.log

# Remove .DS_Store files; these are not important and change often
find . -type f -name '.DS_Store' -delete

# Init log file with current date
date >> ${parlogf}
# nice: reduce prio for this command 
# caffeinate: prevent disk or computer sleep when running
# par2: create parity files recursively, 1% redundancy, with 10k blocks, for all files
# tee: store output to log file and show on screen
nice -n 10 caffeinate -ms ${MAINTDIR}/par2 c -R -r1 -b10000 ${parfile} ./* | tee -a ${parlogf}

Create PAR files for many directories

Since my pictures are stored in many different directories, and since par2 is not parallelized, the above can be extended to par many directories in parallel as follows.

First we define a function which does the above, in bash:

# We need bash since it supports functions
bash

# Define a function which pars one directory (basically script above)
function create_par {
  CHECKDIR=$1
  CURDIR=$(pwd)
  PHOTODIR=/Users/tim/Pictures/

  parcmd=/Users/tim/Pictures/maintenance/par2  
  parlogf=/Users/tim/Pictures/maintenance/creating.par2.log
  parfile=${CHECKDIR}0000.par2
  
  cd ${PHOTODIR}/${CHECKDIR}

  # Remove .DS_Store files; these are not important and change often,
  # polluting the PAR2 archive
  find /Users/tim/Pictures/${CHECKDIR} -type f -name '.DS_Store' -delete

  # Move existing PAR files to archive dir
  mv *par2 ${PHOTODIR}/maintenance/0-oldpar/

  date
  printf "Now creating PAR in %s\n=========================\n" "$(pwd)"
  nice -n 10 caffeinate -ms ${parcmd} c -R -r1 -b10000 ${parfile} ./*

  cd ${CURDIR}
}

Now we can simply run

parlogf=/Users/tim/Pictures/maintenance/creating.par2.log
create_par 2016* | tee -a ${parlogf}

to create PAR files for any directory. Using GNU parallel, we can parallelize this:

export -f create_par
parallel -j 4 -k create_par ::: {2015,2010,2009,2008,2007,2006,2005,2003}* | tee -a ${parlogf}

Check PAR files for one directory

After creating the PAR files, we need to verify their integrity once in a while, which can be done as follows:

bash
function check_par {
  CHECKDIR=$1
  CURDIR=$(pwd)
  PHOTODIR=/Users/tim/Pictures/

  parcmd=/Users/tim/Pictures/maintenance/par2  
  parlogf=/Users/tim/Pictures/maintenance/checking.par2.log
  parfile=${CHECKDIR}0000.par2
  
  cd ${PHOTODIR}/${CHECKDIR}
  date
  printf "Now checking %s\n=========================\n" "$(pwd)"
  nice -n 5 caffeinate -ms ${parcmd} v ${parfile} | grep -Ev "Target.* - found.$|^Load|^Scanning: "
  cd ${CURDIR}
}

then call the command

parlogf=/Users/tim/Pictures/maintenance/checking.par2.log
check_par $PICDIR

Or run in parallel again

export -f check_par
parallel -j 4 -k check_par ::: {0,1,2}* | tee -a ${parlogf}
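When check_par reports damaged blocks, the same recovery files can repair them using par2's r command. A sketch along the lines of the functions above (I haven't needed this on my archive yet):

```shell
# Repair a damaged directory from its PAR2 recovery blocks; mirrors
# check_par above. This works as long as at most ~1% of blocks (the
# -r1 redundancy chosen at creation time) are damaged.
repair_par() {
  CHECKDIR=$1
  CURDIR=$(pwd)
  PHOTODIR=/Users/tim/Pictures/

  parcmd=/Users/tim/Pictures/maintenance/par2
  parfile=${CHECKDIR}0000.par2

  cd ${PHOTODIR}/${CHECKDIR}
  date
  printf "Now repairing %s\n=========================\n" "$(pwd)"
  nice -n 5 caffeinate -ms ${parcmd} r ${parfile}
  cd ${CURDIR}
}
```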

Monday 4 March 2013

Clone an OS X volume using rsync

There are several commercial tools for cloning hard disks on OS X, including SuperDuper! and Carbon Copy Cloner. However, the same can be achieved with rsync.

I wrote a script based on necolas' rsync_backup. It has several improvements over necolas' version, but the core is the same. I used Carbon Copy Cloner's list of files to exclude when cloning.

I tested the script on my machine with an external USB hard disk on OS X 10.7. After the initial clone, which took a few hours (like the first Time Machine backup), I can simply boot off the USB disk by holding the Option key. rsync makes incremental updates, so subsequent clones are much faster (~10 minutes in my case).
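The core of the script boils down to a single rsync invocation, roughly like this (a sketch: the flags and paths are illustrative, and the actual script adds safety checks plus Carbon Copy Cloner's full exclude list):

```shell
# Sketch of the core clone step.
# -x: don't cross filesystem boundaries; -E: copy extended attributes
# (Apple's rsync); -H: preserve hard links; --delete: make the clone
# an exact mirror of the source.
clone_volume() {
  SRC=$1    # e.g. /
  DST=$2    # e.g. /Volumes/CloneBackup/
  sudo rsync -xrlptgoEvHS --delete \
    --exclude-from=ccc_excludes.txt "$SRC" "$DST"
}
```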

N.B. I do not guarantee anything for this script; if it breaks your computer, tough luck. There are some safeguards in the script, but I refuse to promise anything. I recommend reading through the script and verifying that it works before use.

Monday 20 August 2012

Apple Time Capsule benchmarks

I got a 4th generation Apple Time Capsule a while ago and here are some benchmarks that I took.

Continue reading...

Thursday 17 November 2011

Python meets C: Cython

This bit is about optimizing Python code using something that's closer to the metal (i.e. a CPU/GPU). Before you do anything about optimization, realize this:

Premature optimization is the root of all evil - Donald Knuth

If you still think you need to optimize your code, read on.

Continue reading...

Sunday 6 November 2011

iPhone 3GS battery usage

I had some trouble with my iPhone 3GS where the standby time and usage time were the same, so I started monitoring the battery usage. I solved the initial problem (fully resetting the whole phone helps), but thought it would be interesting to continue monitoring battery usage and analyse the results. The results are presented below.

Continue reading...

Monday 27 June 2011

PyAna Python library

PyAna is a Python library for reading and writing (Rice-compressed (pdf)) ANA files. I wrote it when I was working with ANA files a lot, but that is no longer the case, so I am not maintaining the code any longer. The current version seems to be quite stable (no known memory leaks), albeit a bit rough around the edges.

The library wraps some ancient C routines into a NumPy module. See PyAna @ github for more details. I wrote this with help from this NumPy recipe and the NumPy book.

Besides being a useful library, this can also serve as boilerplate code if you want to write your own NumPy module.

Unix output, pipes

Sometimes you might want to diff the output of two commands instead of two files. To do this, simply call

diff <(ls /tmp/folder1) <(ls /tmp/folder2)

See this Linuxjournal article and this blogpost for more details on pipes etc.

Thursday 23 June 2011

Machine precision: float vs double

I found a small code snippet on machine precision for float and double datatypes in C. Might be interesting for some people. I adapted the code slightly and put it online in a gist. I reproduced the code here for clarity.

Other interesting posts on this topic include this Stackoverflow topic, which in turn refers to this appendix on floating-point arithmetic.
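The idea behind the snippet can be checked straight from the shell: halve a candidate epsilon until adding it to 1.0 no longer changes the result. Here in double precision via python3 (the C snippet does the same for both float and double):

```shell
# Estimate double-precision machine epsilon: the smallest power of two
# eps such that 1.0 + eps still differs from 1.0.
python3 -c '
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0
print(eps)'
# prints 2.220446049250313e-16, i.e. 2**-52 (DBL_EPSILON)
```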

Continue reading...

Monday 20 June 2011

Quick look plugins for mkv/gif/source code

Apparently there are plugins for Apple's Quick look such that you can view different file types. Here are some plugins that I found useful:

Video encode one-liners

Here are some one-liners for encoding videos in various formats. I use them to convert the videos from my digital camera to a more suitable format. I'm still not sure which one is best, but I'm currently using the combination of x264/aac/mp4.

  • mencoder mpeg4/mp3/avi
mencoder $1 \
  -ovc lavc -lavcopts threads=4:vcodec=mpeg4:vrc_buf_size=1835:vrc_maxrate=9800:vbitrate=1250 \
  -oac mp3lame -lameopts vbr=3 -af resample=48000:0:0,channels=2 -o $1-xvid_mp3.avi
  • mencoder h264/mp3/mkv
mencoder $1 \
  -ovc x264 -x264encopts turbo:bitrate=2000:subq=4:bframes=2:b_pyramid=normal:weight_b:threads=auto \
  -oac mp3lame -lameopts vbr=3 -af resample=48000:0:0,channels=2 -o $1-x264_mp3.mkv
  • ffmpeg h264/aac/mp4
ffmpeg -i $1 \
  -acodec aac -strict experimental -ab 128k -ar 48000 -ac 2 \
  -vcodec libx264 -vpre faster -crf 22 -threads 0 $1-x264_aac.mp4

There are several container formats, video codecs and audio codecs to choose from. Containers do exactly that: they 'contain' the audio and video streams, sometimes supplemented with extras such as subtitles. One well-known container is Audio Video Interleave (avi), another is MPEG-4 Part 14 (mp4), and a newer and more advanced one is Matroska (mkv).

The ubiquitous and most well known audio codec is no doubt MPEG-2 Audio Layer III — or mp3 — and needs no introduction. Advanced Audio Coding (aac) is one successor of mp3 and has better performance but less compatibility.

Video codec Xvid is a successor to the DivX ;-) codec (smiley included), which was a hacked version of Microsoft's MPEG-4 Version 3 video codec and is widely used for encoding videos. The newer h264 performs better and is slowly taking over.
