Benjamin's blog: Stuff about Linux, C++ and more

Re-using the last bash command argument

Tired of re-typing the same argument twice for different commands? For bash there is an easy solution:

mkdir testdir
cd !$

The ‘!$’ maps to the last argument of the previous command, a real time saver!
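A quick demonstration (the file name is just an example):

touch notes.txt
less !$    # bash expands this to: less notes.txt

Related history designators such as ‘!!’ (the entire previous command) work the same way.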

Posted: Tue, 06 Sep 2016

Extracting mp4 files from AVCHD without transcoding

My new Sony digital camera stores movies inside an AVCHD container. Luckily this format is supported on OSX natively and you can at least browse all clips inside with ease. However, if you want to export clips there is a limitation: OSX forces you to transcode the video when exporting to .mp4. This is slow and introduces quality loss. I started wondering if there is a better way and as it turns out there is :)

Internally the AVCHD container has a number of .MTS files, which in my case contain perfectly fine H264 video and AAC audio. It should be enough to re-multiplex (meaning ‘copy the data streams but don’t re-encode’) these streams into an MP4 container. MP4 is widely supported by most devices (and OSX itself). The go-to tool in these kinds of situations is ffmpeg, so we will use it to re-multiplex the streams. As an added bonus, the timestamp of the MP4 file will be set to the original .MTS timestamp.
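As a minimal sketch, re-multiplexing a single clip looks like this (the file names are just examples):

ffmpeg -i 00000.MTS -vcodec copy -acodec copy -f mp4 clip.mp4

The ‘-vcodec copy -acodec copy’ flags make ffmpeg copy the streams untouched instead of re-encoding them.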

To get ffmpeg on Linux, just install the ‘ffmpeg’ package. On OSX an easy way to get ffmpeg is to install it via Homebrew.

Here is the code, it should be run from the AVCHD/BDMV/STREAM directory level:


#!/bin/bash

WORKDIR=`basename $PWD`

# Safety check
if ! [[ "$WORKDIR" == "STREAM" ]]; then
        echo "This script is supposed to run from AVCHD/BDMV/STREAM directory level, exiting now"
        exit 1
fi

# Is ffmpeg installed?
if ! [ -x "$(command -v ffmpeg)" ]; then
        echo "ffmpeg is not installed, exiting now"
        exit 1
fi

# Re-mux all .MTS files into an mp4 container and set the timestamp of the mp4 to the same as the .MTS file
for i in *.MTS; do ffmpeg -i "$i" -vcodec copy -acodec copy -f mp4 ../../../`basename "$i" .MTS`.mp4 && touch -r "$i" ../../../`basename "$i" .MTS`.mp4 ; done

The resulting MP4 files will be put in the same directory as the AVCHD folder. For added convenience, you can download the file here.

Posted: Tue, 31 May 2016

Simple UDP relay with NAT latching in Python

When you’re building a VOIP server you soon encounter the problem that a client is behind a NAT (instead of a directly reachable public IP). In this scenario the server can’t send packets directly to a client.

However, there is a way around this, called ‘NAT latching’. Most NAT configurations will automatically forward any reply that is addressed to the public ip/port combination a client just used for sending, back to that client.

So by configuring our application to receive on the same port number it uses for sending UDP, we can set up bi-directional communication with a client as soon as it has sent one packet to the server (and for as long as the NAT binding stays open), simply by remembering which public ip/port combination that packet came from.
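You can try this behaviour yourself with plain ‘nc’ (host and ports here are hypothetical): ‘-p’ pins the source port, so anything the server later sends to that ip/port mapping arrives back in the same session:

nc -u -p 5004 relay.example.com 10000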

On the server side, we need something called a ‘relay channel’. This channel is nothing more than a pair of sockets that remember the origin of each data stream and use that as the destination for forwarding packets to the other side. It works like this (we use RTP in this example but it can be any UDP protocol):

Precondition: Client A and B are behind a NAT (so they have a non-public IP).

  1. Client A starts sending RTP from a specific UDP port X and simultaneously binds on this same port number X to receive RTP.
  2. Client B starts sending RTP from a specific UDP port Y and simultaneously binds on this same port number Y to receive RTP.
  3. Client A sends at least one packet from port X to the ‘left’ side of the relay channel.
  4. The relay server remembers the ip/port combination that the packet originated from (external_ip_of_client_a/port_X).
  5. Client B sends at least one packet from port Y to the ‘right’ side of the relay channel.
  6. The relay server remembers the ip/port combination that the packet originated from (external_ip_of_client_b/port_Y).

Now, if a packet comes in on the ‘left’ side of the relay channel, the server knows that it can be forwarded to external_ip_of_client_b/port_Y. And vice versa: if a packet comes in on the ‘right’ side of the relay channel, it can be forwarded to external_ip_of_client_a/port_X.

The whole trick here is that a client needs to send at least 1 packet and then things will work fine :)

Because it can be a hassle to set up a full relay server when developing, I wrote this python script that implements the same functionality. It’s not recommended for production use but for development it works fine! Make sure to run it on a server that has a public IP.

#!/usr/bin/env python

# Simple script that implements an UDP relay channel
# Assumes that both sides are sending and receiving from the same port number
# Anything that comes in on left side will be forwarded to right side (once right side origin is known)
# Anything that comes in on right side will be forwarded to left side (once left side origin is known)

# Inspired by

import sys, socket, select

def fail(reason):
        sys.stderr.write(reason + '\n')
        sys.exit(1)

if len(sys.argv) != 2 or len(sys.argv[1].split(':')) != 2:
        fail('Usage: leftPort:rightPort')

leftPort, rightPort = sys.argv[1].split(':')

try:
        leftPort = int(leftPort)
except ValueError:
        fail('Invalid port number: ' + str(leftPort))

try:
        rightPort = int(rightPort)
except ValueError:
        fail('Invalid port number: ' + str(rightPort))

try:
        sl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sl.bind(('', leftPort))
except socket.error:
        fail('Failed to bind on port ' + str(leftPort))

try:
        sr = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sr.bind(('', rightPort))
except socket.error:
        fail('Failed to bind on port ' + str(rightPort))

leftSource = None
rightSource = None
sys.stderr.write('All set.\n')
while True:
        ready_socks, _, _ =[sl, sr], [], [])
        for sock in ready_socks:
                data, addr = sock.recvfrom(32768)
                if sock.fileno() == sl.fileno():
                        print "Received on left socket from " , addr
                        leftSource = addr;
                        if rightSource is not None:
                                print "Forwarding left to right ", rightSource
                                sr.sendto(data, rightSource)
                else :
                        if sock.fileno() == sr.fileno():
                                print "Received on right socket from " , addr
                                rightSource = addr;
                                if leftSource is not None:
                                        print "Forwarding right to left ", leftSource
                                        sl.sendto(data, leftSource)
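
A typical invocation (the script name and port numbers are whatever you chose) binds the ‘left’ side on 10000 and the ‘right’ side on 10001:

python 10000:10001

Point client A at port 10000 and client B at port 10001; after each has sent one packet, the relay forwards in both directions.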

For added convenience, you can download the file here. Happy hacking!

Posted: Mon, 18 Apr 2016

Magnet handler script for Firefox on OSX

One thing I was missing when downloading torrents with ‘magnet:’ links was an easy way to transfer such a link to my bittorrent client (which is running on a different server). After copy-pasting many magnet links I finally decided to do something about it and write a small helper application that Firefox can call when it encounters a magnet link.

This example script will save the URL to a file in your home directory called torrents.txt but it should serve as an example to invoke other commands using the shell.

Here we go!

Step 1 - Script

Open the ‘Script Editor’ application and choose ‘Create new document’. Paste this script:

on open location this_URL
   #In case you want to display the URL that is being passed, uncomment the following line
   #display dialog "I'm doing something with this URL: " & return & this_URL

   tell application "Terminal"
      # Create a shell command to append the URL to ~/torrents.txt and exit
      set run_cmd to "echo \"" & this_URL & "\" >> ~/torrents.txt && exit"
      # Execute shell command
      do script run_cmd
   end tell

   # These lines switch you back to Firefox, might want to change to your preferred browser
   tell application "Firefox"
      activate
   end tell
end open location

Now save the file, for example on your Desktop and with an example name of “My magnet handler”. Be sure to choose ‘File format: Application’ in the dropdown.

Step 2 - Hack the app file so it registers as a protocol handler

OSX doesn’t know yet that this new app can handle ‘magnet:’ links so we have to hack the Info.plist that is inside the app.

  1. Go to your Desktop
  2. Right click on ‘My magnet handler’ and choose ‘Show Package Contents’
  3. Navigate to the ‘Contents’ folder
  4. Right click the ‘Info.plist’ file and open it with ‘Other’ –> ‘’
  5. At the bottom of the file (but before the final ‘</dict>’ and ‘</plist>’ tags) add another key/array pair by pasting this block:

<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLName</key>
        <string>My magnet handler</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>magnet</string>
        </array>
    </dict>
</array>

This tells Finder that our app can handle URLs starting with ‘magnet:’. Save the file and exit TextEdit.

Step 3 - Make Finder aware of our app

This step is very counter-intuitive, but locate your ‘My magnet handler’ app on your Desktop and move it to another folder, perhaps your home folder. Moving the file will make Finder re-read the Info.plist file and register the app as a protocol handler.

Step 4 - Try it in your browser

Open your favorite torrent site and locate a magnet link. Click on it and if all went well you should be greeted with the ‘Launch Application’ dialog that already lists your application in the ‘Send to’ list. Select it, and press OK.

Your torrent URL should now be listed in a file called ‘torrents.txt’ located in your home directory!

Further expansion

Instead of echo’ing to a file, you can also run any other command you like. In my case, it’s logging (using SSH keys to prevent a password prompt) into my server and calling ‘deluge-console add’ to queue the torrent. In case you’re wondering, it looks a bit like this:

set run_ssh to "ssh \"deluge-console add " & this_URL & "\" && exit"
do script run_ssh

Happy downloading!

Posted: Wed, 09 Mar 2016

A better solution to C++ enums

One of the more popular posts on this blog is about textual enums in C++. You can find it here.

I’ve received a very friendly e-mail this weekend from Anton Bachin, the author of the better enums library. Some time has passed since I originally wrote the post and C++ has improved quite a lot in the meantime, his library seems a much nicer solution! So feel free to read along but if you have a need for this functionality, definitely consider using his library instead. Thanks Anton for bringing it to my attention!

The original post has been updated with this remark as well.

Posted: Mon, 23 Nov 2015

Logging port access with iptables and logwatch

I’ve recently installed a program (let’s call it Foo) on my home server that requires one port (let’s call that 12345) to be forwarded from the public interface on my ADSL modem to my internal server (via NAT translation). I’m always a bit hesitant to do this kind of thing, so why not ease my fears and log who’s accessing this port?

This idea requires two steps:

  1. Configuring iptables to log ‘socket open’ actions
  2. Making sure my daily ‘logwatch’ run does a DNS lookup on the found addresses

Step 1 - iptables configuration

Setting iptables up to log socket access is actually quite straightforward:

#log incoming Foo connections
iptables -I INPUT -p tcp --dport 12345 -m state --state NEW -j LOG --log-prefix "Foo inbound: "

This line logs any new TCP connection to port 12345 to the kernel log and /var/log/messages.

Execute the above command in a terminal (as root) and check that the rule is working with ‘nc’:

benjamin@nas:~$ nc localhost 12345
<some garbage indicating that socket was opened>
benjamin@nas:~$ dmesg -T|grep "Foo inbound"|tail -n 1
[Thu Oct 22 08:30:03 2015] Foo inbound: IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC= DST= LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=23309 DF PROTO=TCP SPT=37456 DPT=12345 WINDOW=43690 RES=0x00 SYN URGP=0

This message indicates that the iptables rule is working! Once you’re satisfied, you can persist this rule by adding it to your ‘/etc/rc.local’ file. There are probably nicer ways to do that but this works fine :)
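For example, the end of /etc/rc.local could look something like this (a sketch, your file will differ):

#!/bin/sh -e
# ... other startup commands ...
# log incoming Foo connections
iptables -I INPUT -p tcp --dport 12345 -m state --state NEW -j LOG --log-prefix "Foo inbound: "
exit 0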

Step 2 - logwatch configuration

Logwatch is an excellent tool to get a daily report about your server status. Imagine my surprise when the iptables rule was automatically processed into a neat report:

--------------------- iptables firewall Begin ------------------------

Listed by source hosts:
Logged 4 packets on interface br0
  From - 2 packets to tcp(12345)
  From - 2 packets to tcp(12345)

---------------------- iptables firewall End -------------------------

However, wouldn’t it be nice to see actual DNS hostnames (if available) for those addresses? After a lot of troubleshooting I found out that the ‘iptables’ service of logwatch doesn’t do lookups by default (probably for performance reasons).

Following the steps on this page you can fix that, in short it comes down to this:

# Copy default iptables module config to proper /etc directory
sudo cp /usr/share/logwatch/default.conf/services/iptables.conf /etc/logwatch/conf/services/

Now edit ‘/etc/logwatch/conf/services/iptables.conf’, search for ‘iptables_ip_lookup’ and make sure it looks like this:

# Set this to yes to lookup IPs in kernel firewall report
$iptables_ip_lookup = Yes

Now re-run logwatch manually and verify the results:

benjamin@nas:~$ sudo /usr/sbin/logwatch --hostformat split
<cut out a lot of stuff for this example>

 --------------------- iptables firewall Begin ------------------------

 Listed by source hosts:
 Logged 4 packets on interface br0
   From ( - 2 packets to tcp(12345)
   From ( - 2 packets to tcp(12345)

 ---------------------- iptables firewall End -------------------------

Mission accomplished, happy hunting :)

Posted: Thu, 22 Oct 2015

Easy chroot jail creation

While setting up an SSH jump host I had the need for a small chroot environment that users would end up in. The ‘regular’ way is to create a jail directory somewhere, set up basic directories (/bin /etc and so on) and proceed with copying the desired binaries into the jail. The next step is to use ‘ldd’ to figure out which dynamic libraries need to be copied into the jail. This is a lot of work!

Luckily (instead of getting some random script online and hoping it works fine) Debian includes a package called makejail. Makejail reads a small python file, this is an example (let’s call it

# assumed extra options (see makejail's documentation): jail location and cleanup
chroot="/jail"
cleanJailFirst=1
testCommandsInsideJail=["bash", "nc", "nologin"]

Now run this command:

makejail
Makejail will now create the jail in ‘/jail’ (and clean any existing stuff in there if it exists already), copy ‘bash’ ‘nc’ and ‘nologin’ into the jail and figure out the library dependencies. Easy!
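To verify the result, you can enter the jail manually (using the example path above):

sudo chroot /jail /bin/bash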

Posted: Thu, 24 Sep 2015

Running autossh with OSX automator

On my work OSX laptop I have a need to have some ports forwarded to my NAS at home, until now I’ve been manually running the ssh command (using a script) but this becomes very annoying when connections drop etc. In an effort to automate things, I wanted to run autossh automatically in the background.

I followed this guide and everything was working, however now I got stuck with a rotating wheel icon in the status area (near the clock). That became annoying quickly so I found this stackexchange answer to guide me in the right direction.

Instead of running a shell script action in automator (as the initial guide suggested), I now have an AppleScript that executes autossh directly (and in the background). Here it is for completeness:

on run {input, parameters}
   ignoring application responses
      do shell script "/opt/local/bin/autossh -M 20000 [rest of ssh parameters] -N [hostname to connect to] > /dev/null 2>&1 &"
   end ignoring
end run

This runs the script in the background, you can check with ‘ps’ if autossh is actually running. No more spinning wheel!
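A quick way to verify from a terminal (the bracket trick keeps grep from matching its own process):

ps aux | grep [a]utossh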

Posted: Wed, 24 Jun 2015

Fixing bash tab completion in XFCE

On my headless Linux NAS I’m running a VNC server to run the occasional X11 program remotely. Because I don’t need a full desktop environment, I used XFCE. However, when using a terminal session I noticed that tab completion in bash was not working.

As it turns out, XFCE maps the tab as a ‘switch window key’ preventing tab completion from working properly. Luckily this post on the ubuntu forums shows how to fix it (paraphrased here in case the original post disappears):

  • Edit the file ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml
  • Find this line:
<property name="&lt;Super&gt;Tab" type="string" value="switch_window_key"/>
  • Change it to this:
<property name="&lt;Super&gt;Tab" type="empty"/>
  • Restart the VNC server:
vncserver -kill :1
vncserver :1

Now things should be working again!

Posted: Wed, 10 Jun 2015

Removing partial duplicate file names with awk

I needed to clean up a bunch of files recently that contained both a common and a unique part, something like this:

Show_Episode1-ID1234.mp4
Show_Episode1-ID5678.mp4
Show_Episode2-ID4321.mp4

Note that there are two copies of ‘Episode1’ with a different ID part. Obviously I would only like to keep one of each episode and ignore the whole -ID… part. This is how I solved it:

for i in `ls -t *mp4|awk 'BEGIN{FS="-"}{if (++dup[$1] >= 2) print}'`; do mv -v $i dup; done

So what happened here?

  • The directory listing is sorted by timestamp (newest first) so it favors the most recent versions.
  • The awk FS (field separator) is set to “-” to use the common part of the file name as the first field.
  • Now awk loops over each file name. It uses the common part of the file name (“Show_Episode1”) as an index into an array. The default counter value is 0 and any repeated file names will increase it to a value of >= 2.
  • If the counter value is >= 2, awk prints the complete file name (using the ‘print’ command). Note that this part only prints duplicates, the first file is never printed.
  • The output of the above steps are fed into a ‘for’ loop to serve as input to the ‘mv’ command that moves only the duplicate files to a separate ‘dup’ dir.
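One prerequisite for that last step: the ‘dup’ directory must exist before you run the loop, so create it first:

mkdir -p dup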
Posted: Tue, 11 Nov 2014

Notes on ZFS

I’ve recently upgraded my NAS to an HP N54L microserver and I decided it was time to migrate to ZFS. Luckily, ZFS on Linux became stable enough with version 0.6.3 to be used in production, so this was good timing. ZFS is an interesting file system: it uses quite a bit of RAM but it is very flexible and provides per-block checksumming. A nice presentation can be found here:

To get me started, I followed this guide: It contains the basic setup commands and also provides a fix for the potential problems you can encounter with 4096-byte sector harddisks (most modern drives have these). Be aware that this guide doesn’t set a default mountpoint for the pool; this means specifying each filesystem mountpoint yourself (or just enabling the default pool mountpoint). Some additional tips/notes can be found here:

To get more in-depth information, there is an excellent manual provided by Oracle (never thought I’d ever say that..) here: It covers most scenarios and contains a lot of examples. In my case, I started out with a pool on 1 drive, moved my data to it and then converted the pool to RAID-1 using the ‘zpool attach’ syntax. All this is covered in the manual.
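For illustration, the single-disk-to-mirror conversion looks roughly like this (pool and device names are made up):

zpool attach tank /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK

Once the resilver finishes, ‘zpool status’ should show both disks inside a mirror vdev.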

Overall, I’m pretty satisfied with ZFS. I’ve skipped the native ‘exportnfs’ and ‘exportsmb’ functionality and just configured my /etc/exports and /etc/samba/smb.conf files myself, I heard there are still some bugs to be worked out in this department so I went the manual route. Also, the ability to specify that some filesystems should store two copies of each file (under the hood) is pretty cool and especially valuable for important data :)

Don’t forget to ‘cron’ a weekly ‘zpool scrub’ and not to fill the pool over 80/90% (opinions vary, it seems).
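A sketch of such a cron entry, assuming a pool named ‘tank’ (e.g. in /etc/cron.d/zfs-scrub):

0 2 * * 0 root /sbin/zpool scrub tank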

Posted: Tue, 01 Jul 2014

Find the longest filename in a directory tree

Ever wondered what the longest filename is in a directory tree? This command will tell you:

ben@xyz:/srv/blog$ ls -R | awk '{ print length, $0 }' | sort -rn | head -1
73 reorganising_large_directories_with_efficient_remote_rsync_update.doctree

On a similar note, this command prints the longest path length (directories + filename):

ben@xyz:/srv/blog$ find | awk '{ print length, $0 }' | sort -rn | head -1
101 ./blog/html/_sources/2013/12/04/reorganising_large_directories_with_efficient_remote_rsync_update.txt
Posted: Tue, 29 Apr 2014

Moving large directories

I had a need to move a large directory tree on my Linux server. For this there are a number of options:

Using ‘mv’

Of course you can just issue ‘mv /sourcedir /destinationdir’ and be done with it. The downside is that if you interrupt the process, both source and target directories will be left in an inconsistent state. There is no easy way to resume the process.

Using rsync

Rsync is a Swiss army knife for a number of file-related operations and of course you can use it for local move operations as well. Rsync offers the big improvement of being able to interrupt and resume the move process in a smart and safe way. One of the limitations however is that, even though it can delete the source files, it will leave you with a source directory full of empty subdirs. First of all, let’s move all files:

rsync -avr --remove-source-files /sourcedir/ /destinationdir/

Note the ‘--remove-source-files’, it does exactly what you think it does (after files have been successfully transferred). So what to do afterwards with the tree of empty subdirs? This is a nice trick I learned:

rsync -av --delete  `mktemp -d`/ /sourcedir/

This effectively syncs an empty directory over your sourcedir, and in my (and other people’s) experience this is actually the quickest way to delete a large directory tree, even if there are files in it. It is supposed to be 15% quicker than ‘rm -rf’ due to ordering advantages, but I’ll let you decide that for yourself.

Using tar and rm

While the rsync solution seems nice, it sometimes is a bit slow between two local disks. You can of course do ‘cp -a /sourcedir /targetdir’ beforehand and rsync afterwards but it seems to be even quicker to use tar for this purpose:

(cd /sourcedir ; tar cf - . ) | (cd /destinationdir ; tar xvpf -)

I read this trick on Stackoverflow and it seems to be a bit quicker indeed. I’ll let you decide this for yourself as well :)


For my moving task, I actually decided to combine both the ‘tar’ and ‘rsync’ tricks. This made for a quick copy, followed by rsync checking if everything was in sync and deleting the source files. Afterwards I used the ‘rsync to empty dir’ method to quickly delete all empty subdirs in the source directory.

Posted: Wed, 16 Apr 2014

Two git tricks

Two tricks I needed today and definitely want to save for future reference :)

Trick 1: How to reset a ‘master’ branch to a different branch and push it to the remote repository

Nice instructions can be found here:

Note 1: The above instructions force-push all branches to your specific version; in step 4 it would be useful to specify that you only want the ‘master’ branch pushed :)

Note 2: You might have to do a ‘git reset --hard origin/master’ afterwards on other working copies that previously checked out the ‘master’ branch to resolve the merge conflict hell that can arise :)

Trick 2: Undo a force-pushed action on the remote repo

And as a result from the first note in the previous point, here’s how to use the reflog to undo a change you already pushed to remote:
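A rough sketch of the procedure (the reflog position is just an example):

git reflog                     # find the entry from just before the bad push
git reset --hard HEAD@{1}      # move the branch back to that state
git push --force origin master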

Bonus trick: Diff the same file between two different branches

Use git difftool and note the ‘--’ separator which indicates that filenames will be specified from that point on.

git difftool branchname_1 branchname_2 -- Some/Directory/File.txt
Posted: Wed, 04 Dec 2013

Reorganising large directories with efficient remote rsync update

I recently ran into a scenario where I wanted to re-organise my photo collection (basically move some files around). This folder is mirrored to a remote server with rsync for backup purposes. Rsync unfortunately has no way of detecting file moves and will gladly proceed to re-upload any files you moved. Pushing 40GB of redundant updates through my home ADSL was painful; I wish I had known about this beforehand :)

However, for future reference here is a nice guide on how to prepare for this scenario and let rsync actually detect the moves via an intermediate step involving hardlinks:

Posted: Wed, 04 Dec 2013

Diff the output of two processes

This is a small but useful bash trick to apply ‘diff’ to the output of two separate commands:

laptop-ben:~ benjamin$ diff <(echo a) <(echo b)
1c1
< a
---
> b
Posted: Wed, 11 Sep 2013

Listing methods of an Objective-C class

One of the nicer things about Objective-C is that reflection is actually pretty easy to do. This code sample lists the methods of a class:

#include <objc/runtime.h>

// List the methods of the class instance "myClass"
unsigned int methodCount;
Method *methods = class_copyMethodList([myClass class], &methodCount);
for (int i = 0; i < methodCount; i++) {
	char buffer[256];
	SEL name = method_getName(methods[i]);
	NSLog(@"Method: %@", NSStringFromSelector(name));
	char *returnType = method_copyReturnType(methods[i]);
	NSLog(@"The return type is %s", returnType);
	free(returnType);
	// self, _cmd + any others
	unsigned int numberOfArguments = method_getNumberOfArguments(methods[i]);
	for (int j = 0; j < numberOfArguments; j++) {
		method_getArgumentType(methods[i], j, buffer, 256);
		NSLog(@"The type of argument %d is %s", j, buffer);
	}
}
free(methods);

This code was originally found online; I’ve fixed it up so you can actually compile it :) Don’t forget to replace ‘myClass’ with your class name.

Posted: Thu, 04 Jul 2013

Welcome to my new blog

After 4 years of using sphpblog (which is now very much unmaintained and looks a bit outdated) I’ve decided to move to a new blogging solution. After lots of experimenting I ended up with Tinkerer, a static blog generator written in Python. It’s less vulnerable than the previous PHP solution, looks nice and modern and offers all the formatting options of Sphinx. The documentation is a bit rough around the edges (lot of searching and experimenting helped) but I’ve managed to make it work.

All the blog postings on my old blog have been migrated to this one, so there should be no need to access the old blog (which can still be found here).

Posted: Thu, 13 Jun 2013

Excellent guide to troubleshooting iowait in Linux

Recently my Sheevaplug was experiencing high load caused mostly by iowait. This excellent guide shows you exactly how to troubleshoot this kind of problem using tools that are usually available on most systems. Well recommended!

Posted: Wed, 29 May 2013

More file renaming fun

One tool that made my life easier when I recently discovered it: ‘rename’. It allows you to specify a sed-like command line to easily search/replace part of file names like this:

rename s/"SEARCH"/"REPLACE"/g *.txt

Of course it also works for files in subdirectories if you use globbing like this:

rename s/"SEARCH"/"REPLACE"/g */*.txt
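
For example, a made-up batch rename that gives holiday photos a nicer prefix:

rename s/"IMG_"/"holiday_"/g *.jpg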
Posted: Tue, 21 May 2013

Generating random strings with openssl

Quickly need a string of random characters? Just use the ‘openssl’ command like this:

benjamin@plug:~$ openssl rand -hex 20

Update 2017-01-10

I’m always grateful when people take the time to write me an e-mail that can improve a post. So this update is brought to you by Robert, thank you very much! By his benchmarks the method below is about twice as fast (it could however be a bit less accurate, but sometimes speed is more important):

benjamin@nas:~$ od -An -x -N 20 /dev/random|tr -d " \n"
Posted: Wed, 08 May 2013

Blazingly fast sshfs

I’ve been using SSHFS for a while now as my “Swiss army knife” tool for quickly transferring a collection of files between two computers. I’m usually too lazy to bother with setting up an NFS export or SMB share and SSHFS does the trick nicely (as does rsync but it’s less interactive). This tool allows you to mount a directory on a remote system using only an SSH connection.

However, on my LAN the overhead of SSH encryption and compression gets in the way of transfer speeds. So on a trusted network, you can mount SSHFS like this:

sshfs -o Ciphers=arcfour -o Compression=no server:/some/folder /mnt/some_local_folder

This will:

  1. Use the ‘arcfour’ cipher which is the fastest encryption method (and not very safe but we don’t care since it’s a trusted network)
  2. Disable the built-in compression SSH uses by default

The difference in transfer speed is very big: copying files from my Sheevaplug ARM server (which doesn’t have a very fast CPU) went from about 1.1 megabyte/s to 10 megabyte/s (wire speed on my 100mbit network, basically).

By the way, if you still want to use rsync run it like this for the same setup:

rsync -e "ssh -c arcfour -o Compression=no" ...rest of rsync command...
Posted: Wed, 24 Apr 2013

Search bash history with arrow keys

This is a nice trick to selectively browse your bash history with the up/down arrow keys. Replicated from here for my convenience:

Create ~/.inputrc and fill it with this:

"\e[A": history-search-backward
"\e[B": history-search-forward
set show-all-if-ambiguous on
set completion-ignore-case on

This allows you to search through your history using the up and down arrows i.e. type “cd /” and press the up arrow and you’ll search through everything in your history that starts with “cd /”.

Posted: Mon, 22 Apr 2013

Converting OSStatus to plain text

Just a small snippet so I don’t forget. Here’s how you convert an OSStatus to plain text:

OSStatus x = AudioSessionSetActive(true);
NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:x userInfo:nil];
NSLog(@"%@", [error description]); // prints a readable description of the status code
Posted: Wed, 17 Apr 2013

Book review: Sewer, Gas & Electric (Public Works Trilogy) by Matt Ruff

[image: ruff_sewer_gas_electric.jpg]

Because I just finished this book, I thought it would be nice to post a short review here. To start, here’s the description from the book sleeve:

The year is 2023. High above Manhattan, human and android steelworkers are constructing a new Tower of Babel for billionaire Harry Gant. In the streets below, a Wall Street takeover artist has been murdered, and Gant’s crusading ex-wife has been hired to find out why. Accompanying her is Ayn Rand, resurrected from the dead and bottled in a hurricane lamp to serve as an unwilling assistant. Other characters in this extraordinary novel include submarine ecoterrorist Philo Dufrense; a Volkswagen Beetle possessed by the spirit of Abbie Hoffman; and Meisterbrau, a mutant great white shark running loose in the sewers beneath Times Square-all of whom, and many more besides, are caught up in a vast conspiracy involving Walt Disney, J. Edgar Hoover, and a mob of homicidal robots.

Now that you’ve read that, you probably understand why I didn’t even try to write a better summary, right? This book lives at an intersection of The Hitchhiker’s Guide to the Galaxy and the work of Neal Stephenson (Reamde, Cryptonomicon, Anathem). Matt Ruff’s writing style is very humorous and contains a lot of very strange characters and subplots, just like HHGTTG. Coupled with this is the epic scale of the book and the very detailed descriptions of the world the story is set in, which I have grown to love in Neal Stephenson’s books. The book at times reads almost like the plot of an action movie (again echoing Reamde), and once you start it’s hard to put down. Like other books with complex storylines, the ending is never as good as the actual story (it’s usually hard to wrap things up in a reasonable way), but this book actually does a decent job! Highly recommended!

Amazon link

Posted: Tue, 09 Apr 2013

Adjusting timestamps on JPG and regular files

Having returned from a holiday recently, I found out that my camera was set to a different timezone. Oh dear, now all the (EXIF) timestamps are wrong! Luckily there’s a tool called ‘exiv2’ on Debian that can easily fix this (among many other functions):

First, let’s adjust all timestamps and deduct 1 hour from them:

exiv2 -a -1 *.jpg

If you need to add time, specify a positive number of hours after the ‘-a’ parameter.

Next, we want to adjust the file timestamp to match the new EXIF date/time:

exiv2 -T *.jpg

Now we want to adjust the timestamps for any .MOV files as well. Since there is no EXIF tag, I will just adjust the file timestamps directly with ‘touch’:

for i in *.MOV; do touch -r $i -d '-3600 seconds' $i; done

So what does this last line do? For each MOV file, it takes the current timestamp (-r) as a reference and deducts 3600 seconds (1 hour) from the timestamp. Wrap this in a loop to process all files and you’re done! Quick and easy fix and now the timestamps will line up again with my other devices :)

Posted: Tue, 09 Apr 2013

Changing SD card in sheevaplug fails to boot

When I replaced the SD card in my Sheevaplug this weekend, the device failed to boot from it. As it turns out, some SD cards take longer to initialize than others. To fix this, follow these steps:

  1. Attach a mini-USB cable to the plug device
  2. On a Debian PC execute these steps to attach a serial console (you might need to reboot the unit):
sudo modprobe usbserial vendor=0x9e88 product=0x9e8f
sudo modprobe ftdi_sio vendor=0x9e88 product=0x9e8f
sudo apt-get install cu
sudo chown uucp /dev/ttyUSB0
cu -s 115200 -l /dev/ttyUSB0
  3. In the u-boot console, “printenv bootcmd_mmc” will output something like this:
mmcinit; ext2load mmc 0:1 0x00800000 /uImage; ext2load mmc 0:1 0x01100000 /uInitrd
  4. The trick is to add another ‘mmcinit;’ step to give the device some extra time to initialize. Copy the value from step 3 and add in another “mmcinit; ” like this:
setenv bootcmd_mmc 'mmcinit; mmcinit; ext2load mmc 0:1 0x00800000 /uImage; ext2load mmc 0:1 0x01100000 /uInitrd'
  5. Now save the u-boot environment and reboot:
saveenv
reset
Adapted from and

Posted: Mon, 17 Sep 2012

Loopback mounting an image file with partitions

When you use ‘dd’ to image an entire disk to an image file you need to calculate an offset to loopback-mount a specific partition. Steps:

  1. Use fdisk to print the relevant info
fdisk -ul disk.img

Look for this line:

Units = sectors of 1 * 512 = 512 bytes
  2. Suppose we want to mount the second partition, look at the ‘Start’ column of the fdisk output:
   Device Boot      Start         End      Blocks   Id  System
disk.img1              16        8255        4120   83  Linux
disk.img2            8256    15659007     7825376   83  Linux
  3. To mount the second partition execute this command:
mount -o loop,ro,offset=$((512*8256)) disk.img /mnt/tmp

The trick here is to calculate the offset as UNIT_SIZE*PARTITION_START. Happy restoring!

Posted: Mon, 17 Sep 2012

.ecryptfs recovery

I’m experimenting with ecryptfs, a tool that provides transparent file-level encryption.

What I was wondering about is: what happens when you delete the “$HOME/.ecryptfs” directory? As it happens, the recovery is easy (as long as you have the mount passphrase safely recorded somewhere):

  1. Optional: move $HOME/.ecryptfs dir out of the way
  2. mv $HOME/.Private $HOME/.OldPrivate
  3. ecryptfs-setup-private
  4. Enter your login passphrase (to unlock the keyring)
  5. Enter your old mount passphrase
  6. Move all files from $HOME/.OldPrivate into $HOME/.Private
  7. ecryptfs-mount-private

And there are your files again!

Posted: Thu, 09 Aug 2012

Undeleting a partition

I recently made the mistake of accidentally deleting the wrong partition in the Windows disk management feature. This tool allowed me to scan/recover the partition without incident:

They have binaries for almost every operating system available and it works great if you haven’t touched the disk after you deleted the partition. Phew…

Posted: Wed, 23 May 2012

Checking out WebRTC with git

The default instructions for getting started with WebRTC (can be found here) use the SVN repository to check out.

As it turns out, there is also a git version of the repository:

But how to use it? We have to adapt the gclient command a bit because all the tools expect WebRTC to be in the ‘trunk’ directory. Here’s the magic bit that makes it all work:

mkdir webrtc_git
cd webrtc_git
gclient config --name=trunk
gclient sync
cd trunk

On Linux don’t forget to install both ALSA and PulseAudio libs:

sudo apt-get install libpulse-dev libasound2-dev
Posted: Tue, 08 May 2012

Debian fonts

For my own reference: do “sudo apt-get install ttf-liberation” to get some sensible fonts on Debian :)

And if I end up on ubuntu 12.04 LTS, do this to fix the Firefox fonts:

sudo rm /etc/fonts/conf.d/10-hinting-slight.conf
sudo ln -s /etc/fonts/conf.avail/10-hinting-full.conf /etc/fonts/conf.d/
Posted: Wed, 02 May 2012

Informit: Interview with C++ Author Nicolai Josuttis

An interesting interview with Nicolai Josuttis (of “The C++ Standard Library: A Tutorial and Reference” fame) can be found here:

Well, that was not something I’d expect from a “famous” guy like Nicolai. I very much enjoyed the first edition of the book, but I will be a bit sceptical about buying this updated version. While I have no doubt that the writer is able to understand all the additions in C++11 just by experimenting, and write a decent book about it, I’m not sure it will contain the most practical advice given his lack of real-world use of these features.

However, one thing that echoes from the interview and resonates quite well with me: while the language is improving in big steps overall, it is also growing more and more complicated with each new version. The design-by-committee approach is leaving us with a specification that only a few persons on the planet can actually keep in their head and use. Hiding new stuff behind slightly-changed operators and other gimmicks decreases the readability of code and makes you wonder what hidden features they added this time that can and will bite you in the ass :)

I love C++ but it’s obviously getting so complicated that even a well-known expert like Nicolai is having problems following it all. On the other hand, it provides a lot of fun exploration for programmers like me so I’m off to play with the new features :)

Posted: Wed, 14 Mar 2012

Debian snapshots

This is one site I constantly forget about:

It contains a daily snapshot of the Debian repositories, very useful for retrieving older .deb versions!

Posted: Thu, 23 Feb 2012

Simple chroot instructions for debian squeeze

These commands create a very basic chroot environment on my ubuntu 10.04 laptop. It’s nice to create a dedicated build environment, isolate an application or (in my case) test building/deployment on debian machines locally.

On the host machine, execute as root:

# install debootstrap, only need to run this once
apt-get install debootstrap

# create chroot target dir, replace with desired name
cd /opt
mkdir squeeze_chroot

# install debian squeeze 64-bit, will take some time and download packages
debootstrap --arch amd64 squeeze /opt/squeeze_chroot/

# edit /etc/fstab, add these lines (I'm not mounting /home) and save file:
/tmp            /opt/squeeze_chroot/tmp  none   bind            0       0
/proc           /opt/squeeze_chroot/proc proc   defaults        0       0
/dev            /opt/squeeze_chroot/dev  none   bind            0       0

# bind-mount the chroot stuff
mount -a

Now a basic chroot environment is created, let’s enter it and customize it a little

# change into chroot
chroot /opt/squeeze_chroot

# you'll now be in the / directory of your chroot

# fix some basic stuff
apt-get update
apt-get install locales

# Select only en_US and en_GB variants. Choose en_US.UTF-8 as default
dpkg-reconfigure locales

So that’s it, you’re good to go. Install things like ‘build-essential’, ‘subversion’ and ‘git-core’ at your own convenience. Your homedir will be /root.

There’s plenty of customization to do (remember, this is a full working Debian install) but I’ll leave that as an exercise for the reader :)

If you want to exit your chroot, just enter

exit
Posted: Mon, 06 Feb 2012

Book review: Toy Stories (James May)

[image: toystories.jpg]

Since I was forced to rest a bit this week, I finally had the time to finish this book. Even though it’s not hardcore tech, it still tickled my Nerd interest so here’s my review.

Toy Stories by James May (of Top Gear fame) gives the background stories and detailed information to accompany the highly recommended BBC series of the same name.

The premise is that May takes 6 toys from his youth and applies them on a massive scale in today’s world. The toys are:

  • Plasticine
  • Meccano
  • Airfix
  • Hornby model railway
  • Scalextric (racing)
  • Lego

The complete history of (and stories behind) each toy is described, which makes for detailed but interesting reading material. After this, there are some behind-the-scenes looks at each individual “stunt”. Most of this material does not duplicate the TV series, so watching the episodes is recommended for the full picture.

I’ve really enjoyed this book, especially the “Build a full size bridge from Meccano”, “Recreate an old railway line with H0 gauge trains and rails” and “Build a full size house from Lego” chapters. Even the toys that I have no affinity with (for instance Plasticine) were fun to read about just because of the informal (and sometimes funny) writing style of James May. The history of each toy comes to life and is put in a proper perspective with regard to the time they were invented.

The only negative note is the last chapter on Lego, it seems a bit rushed and is not as detailed as the rest of the chapters. For the rest, it’s a fun book to read and it is illustrated with a lot of nice pictures. Highly recommended!

Posted: Thu, 02 Feb 2012

Book review: Version Control by Example (Eric Sink)

[image: 1802_image001.jpg]

Since the project that I’m involved in is moving from Subversion to GIT, I was looking for a nice book to get me started. A tip on Hacker News pointed me to this book, these guys are even nice enough to send you a free copy (no strings attached).

The book starts out with a thorough description of current second generation Version Control Systems (VCS). By this, they mean that the repository is centralized on a server somewhere (CVS, SVN). A list of generic commands is then formulated and it’s filled in for SVN (things like checkout, commit, revert etc). After this, one chapter is spent on detailing an example workflow with 2 persons in SVN. The examples are complete and easy to follow.

The second part of the book starts with general information about Distributed VCS solutions, including the pros and cons of such a solution. After these general chapters, systems like Mercurial, GIT and Veracity (the VCS made by the guys who wrote this book) are each detailed in their own chapter. For each VCS, a workflow example that matches the SVN example is given so you can easily compare between the different systems. For each system the generalized table of commands is filled in with the proper equivalent for each tool.

I found this book contained a nice introduction to GIT (even though I might need some more advanced tutorials to really get it) and by re-using the example workflow, the comparison to SVN was easy. The writing style is clear and informal. The jokes are a bit lame but they lighten up what would probably otherwise be a very dry book! I’d definitely recommend it to people who need to get started with SVN or newer systems like GIT.

Posted: Thu, 02 Feb 2012

Switchboard: a curl-like tool for XMPP

I’m doing some testing and had the need for a scriptable XMPP client (for instance to create a lot of accounts automatically). I’ve ended up with Switchboard and I really like it a lot! Example invocation:

benjamin@benjamin-laptop:/var/lib/gems/1.8/bin$ ./switchboard -j -p somepassword roster list
=> Switchboard started.
user@somethingcom's roster:
Shutdown initiated.

Easy right! Just do this on ubuntu/debian to get it:

sudo apt-get install rubygems
sudo gem install switchboard
cd /var/lib/gems/1.8/bin

I’m not very familiar with Ruby apps (probably this needs to be added to your path) but the tool works nicely!

More extensive documentation can be found here

Posted: Thu, 15 Sep 2011

Boost preprocessor + enums

Update 2015

I’ve received a very friendly e-mail this weekend from Anton Bachin, the author of the better enums library. Some time has passed since I originally wrote the post below and C++ has improved quite a lot in the meantime, his library seems a much nicer solution! So feel free to read along below but if you have a need for this functionality, definitely consider using his library instead. Thanks Anton for bringing it to my attention!

Original post

A recurring problem in C++ is printing out enum values. Of course you can just do ‘std::cout << enumval << std::endl;’ but that will only print the numeric value. For logging purposes it would be nice to print out the textual representation of the enum value.

Usually what people do is add some kind of utility ‘toString’ method and add a load of ‘if’/’case’ statements that will match all enum values and return a string. I found this to be error-prone, because you will need to update both the enum and this utility function at the same time to keep consistent. So I thought about this for a while and decided perhaps the Boost preprocessor library could come to the rescue!

Check out this sample code:

#include <string>
#include <iostream>
#include <boost/preprocessor.hpp>

// Used in toString() method
#define ENUM_TO_STR(unused,data,elem) \
if (parm == elem) return BOOST_PP_STRINGIZE(elem);

class EnumTest
{
public:
   // Need to undef it because you might have
   // multiple enum definitions in a file
#undef SEQ
#define SEQ (_INVALID_)(VALUE1)(VALUE2)(VALUE3)(_MAX_)

   enum SampleEnum
   {
      BOOST_PP_SEQ_ENUM(SEQ)
   };

   static const std::string toString(const SampleEnum parm)
   {
      BOOST_PP_SEQ_FOR_EACH(ENUM_TO_STR, ~, SEQ)
      return "_INVALID_";
   }
};

int main()
{
   EnumTest::SampleEnum t1 = EnumTest::VALUE1;
   std::cout << EnumTest::toString(t1) << std::endl;
   return 0;
}

Some comments:

  1. The enum is generated by the BOOST_PP_SEQ_ENUM macro, which relies on a preprocessor definition called ‘SEQ’ to contain a list of values. These values should be encapsulated in () braces.
  2. The static ‘toString’ method uses the BOOST_PP_SEQ_FOR_EACH macro. This macro repeats a specified statement (in this case the ENUM_TO_STR macro) for each element in SEQ. The ‘~’ will be passed as additional data to the ENUM_TO_STR macro and put in the ‘unused’ parameter. I don’t use this functionality here but it could be useful in other places :)

If you want to see the generated code from the preprocessor here is the result (the ‘if’ statements are a bit messy, I could insert a newline there perhaps):

enum SampleEnum
{
   _INVALID_, VALUE1, VALUE2, VALUE3, _MAX_
};

static const std::string toString(const SampleEnum parm)
{
   if (parm == _INVALID_) return "_INVALID_"; if (parm == VALUE1) return "VALUE1"; if (parm == VALUE2) return "VALUE2"; if (parm == VALUE3) return "VALUE3"; if (parm == _MAX_) return "_MAX_";
   return "_INVALID_";
}

It’s not extremely pretty but it works :)
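If you want to inspect the generated code for your own enums, letting the compiler stop after the preprocessor stage works nicely (file name assumed):

g++ -E enum_test.cpp | less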

Posted: Tue, 09 Aug 2011

Quick and dirty ‘find’ and ‘du’ trick

Want to know how much space the files you found with the ‘find -name’ command occupy? Try this:

find -name \*SOMEPATTERN\* -print0 | du -c --files0-from=-

What happens here is:

  1. Add the ‘-print0’ parameter to the find command to use 0 instead of newline after each file
  2. Add the ‘--files0-from=-’ to instruct ‘du’ to read a 0-terminated list of filenames, the ‘-’ specifies ‘read from stdin’


Posted: Thu, 28 Jul 2011

Who’s staring at who?

[image: 16052011397.jpg]

Posted: Tue, 17 May 2011

Retrieving load averages in your C/C++ program

Instead of parsing /proc/loadavg directly, there is a nice convenience function in <cstdlib> (or stdlib.h if you’re using C):

#include <cstdlib>
#include <iostream>

int main()
{
   double averages[3];
   std::cout << getloadavg(averages, 3) << " elements retrieved (should be 3)" << std::endl;
   std::cout << "Average 1-min: " << averages[0] << std::endl;
   std::cout << "Average 5-min: " << averages[1] << std::endl;
   std::cout << "Average 15-min: " << averages[2] << std::endl;
   return 0;
}
Posted: Wed, 16 Mar 2011

Need a graphical diff tool? Try Meld!

When I still ran Windows, I used ‘beyond compare’ a lot for directory and file comparisons. For working with SVN the tortoise SVN diff viewer was my preferred choice.

On Linux, I eventually settled for ‘kdiff3’ but it’s a quirky program that complicates things very badly. There is merge support in there but it’s pretty horrible. For SVN, I used eclipse (since I’m already using CDT anyway) and that’s pretty good!

In my search for a directory/file diff tool, I stumbled upon ‘Meld’. It provides 2-way or 3-way file and directory diffs and as a bonus it can also diff against a VCS. It’s clean, simple and no-nonsense. The merging functionality is easy to use and looks a lot like the Eclipse one. Highly recommended!
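Getting started is as simple as pointing it at two files or directories:

meld /path/to/dir_a /path/to/dir_b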

Posted: Mon, 24 Jan 2011

Generating test files on windows

Need to generate a test file of a certain size on a Windows machine? On linux you’d use the ‘dd’ tool and use /dev/zero as an input. Luckily, there is an equivalent on Windows:

fsutil file createnew <filename> <size in bytes>

So, for example, this will create a 1MB test file:

fsutil file createnew test.bin 1048576

Nice for testing harddrives that give disk failures :)

Posted: Thu, 20 Jan 2011

Specific CXXFLAGS for each makefile target

The problem: my Makefile has two targets, one for a production build and one for a unittest build. For the unittest build, I want to specify different CXXFLAGS (disable optimization, for instance). This turns out to be difficult!

One option is to specify some variable on the make command line like this:

make test TESTFLAGS=1

and then to check in the Makefile for its existence:

# Generic flags shared by all builds
CXXFLAGS = -Wall   # base flags, pick your own

# Check for test flag
ifeq ($(TESTFLAGS),)
   #empty flags, release setting
   CXXFLAGS += -O2
else
   #debug setting
   CXXFLAGS += -g3 -ggdb -O0 -fprofile-arcs -ftest-coverage
endif

Of course, this is cumbersome! As it turns out, make supports target-specific variable values, which you can specify on a separate line per target. This allows you to write it like this:

#unittest target
$(TEST_TARGET): CXXFLAGS += -g3 -ggdb -O0 -fprofile-arcs -ftest-coverage
$(TEST_TARGET): $(OBJS) unittest.o $(TEST_OBJS)

#production build
$(TARGET): $(OBJS) main.o $(TEST_OBJS)

Of course the same trick will work for other variables like LDFLAGS. It took me a long time of searching and experimenting to find this trick, so I thought sharing it would be nice :)

Posted: Wed, 19 Jan 2011

Sheevaplug: end_request: I/O error, dev mtdblock0

When booting my Sheevaplug the dmesg output shows a number of these errors:

end_request: I/O error, dev mtdblock0, sector 64
uncorrectable error :
uncorrectable error :

As it turns out, nothing is wrong. It’s just Linux telling you that it cannot auto-mount a partition that you specified in /etc/fstab. Solutions:

  • Check if all USB/ESATA drives are connected
  • Check if you specified the right UUID or device
  • Add a ‘noauto’ specifier to the offending partition if you still can’t find the error
Posted: Wed, 22 Dec 2010

Implementing a thread event loop using boost::bind and boost::function

Recently I was thinking about how to build a thread that has an event loop internally. If another thread wants to send us a message and have some function executed inside the event loop, how would I solve that?

Poor man’s solution

The poor man’s solution would be to have the public functions (called from another thread context) construct some kind of message struct/class that contains a ‘type’ field indicating which method should be executed, plus some storage for the parameters. This object will then be pushed into the event queue for the event loop to read. The event loop can switch on the ‘type’ field and execute the proper method in its own context.

The problem is that this approach requires a lot of administrative overhead, if we ever change or add methods we need to update the ‘type’ list and update the switch() statement in the event loop. Cumbersome and error-prone!

Nicer solution using boost::bind and boost::function

So that brings me to my solution: what if there was some way to have the public function construct a function-pointer like object that also stores it’s arguments in the same object? We could put these objects in a thread safe queue and have the event loop just blocking read on that queue. When the event loop notices a new event, it takes the function pointer and executes it in its own thread context.

Enter boost::function! It’s a very clever function pointer that can store arbitrary arguments as well. To create such an object, we use boost::bind to store the parameters and point it to the right object to execute the member function on. Sample code:

#include <iostream>
#include <queue>
#include <string>
#include <boost/function.hpp>
#include <boost/bind.hpp>

/*
 * This is the function pointer we will store and call.
 * It just means it's a function with void return type and no params.
 * Slightly confusing perhaps, but see below that we can actually still
 * bind method parameters and store their values.
 * This definition is only about how we should call the pointer from
 * the event loop, bound parameters do the rest.
 */
typedef boost::function<void(void)> FunctionPointer;

/*
 * Our stupid class
 */
class EventProcessor
{
public:
   /* Pretend these methods will be called from another thread
    * NOTE: no locking is implemented here for simplicity
    */

   void dostuff1(int x)
   {
      /* All C++ member functions secretly have a 'this' pointer
       * as first param, we need to bind that to the object where
       * we want to execute our member function. That's what the first
       * two bind params are about. In this case we just specify the
       * 'this' value of the current object but it might well
       * be another object
       * First argument: bind to this member function
       * Second argument: the member function will be called on this object
       * Third argument: store the value of x
       * NOTE: We're actually storing a pointer to a private member
       * function here, probably because of magic trickery done by
       * boost::bind/boost::function we get around that :)
       */
      FunctionPointer f = boost::bind(
            &EventProcessor::dostuff1_impl, this, x);
      events.push(f);
   }

   void dostuff2(int y)
   {
      FunctionPointer f = boost::bind(
            &EventProcessor::dostuff2_impl, this, y);
      events.push(f);
   }

   void dostuff3(std::string &text)
   {
      /* Hey look at this trick, we have a reference to string
       * but still a complete copy is stored
       */
      FunctionPointer f = boost::bind(
            &EventProcessor::dostuff3_impl, this, text);
      events.push(f);
   }

   /*
    * This is normally running inside a thread internal to our EventProcessor
    * But we keep it simple so we call it from our main function as well
    */
   void eventloop()
   {
      while (!events.empty())
      {
         FunctionPointer f = events.front();
         events.pop();
         f(); // execute the stored call with its bound arguments
      }
   }

private:
   /* Actual implementation functions called from event loop,
    * all methods run on the internal thread */
   void dostuff1_impl(int x)
   {
      std::cout << "dostuff1 " << x << std::endl;
   }
   void dostuff2_impl(int x)
   {
      std::cout << "dostuff2 " << x << std::endl;
   }
   void dostuff3_impl(std::string &text)
   {
      std::cout << "dostuff3 " << text << std::endl;
   }

   std::queue<FunctionPointer> events;
};

int main()
{
   EventProcessor p1;

   // In this part of the code no methods are executed yet
   p1.dostuff1(100);
   p1.dostuff2(2001);

   // a copy is performed as you will see later on
   std::string payload = "lama";
   p1.dostuff3(payload);

   /* if it was still a reference, when we execute the loop
    * we would be seeing "test123" as dostuff3 text */
   payload = "test123";

   // Now iterate over the queue and execute each pointer
   p1.eventloop();

   return 0;
}
Please note that there are no real threads in this example, I left them out for clarity. Output:

dostuff1 100
dostuff2 2001
dostuff3 lama

For more info on boost::bind see my earlier post at Graphical explanation of boost::bind

Wed, 24 Nov 2010 00:00:00 +0100 <![CDATA[Profiling FreeBSD system usage]]> Profiling FreeBSD system usage

This is a very nice guide to determining where the system load on a FreeBSD server comes from:

Wish I had something like this on linux :)

Fri, 11 Jun 2010 00:00:00 +0200 <![CDATA[SIGSEGV tracing]]> SIGSEGV tracing

Suppose your dmesg says this:

[832542.638297] XXX[3140]: segfault at 87 ip 483495 sp 7fffffffb920 error 4 in XXX[400000+26e000]

How to make sense of that? Easy! Check and execute this:

addr2line -e /path/of/XXX 483495

The ip value will tell you where the crash occurred (make sure the binary was built with debug info, -g, or addr2line cannot resolve the location)!

Also, when your program is not allowed to write in the current directory execute this as root beforehand to change the location of core files:

echo 1 > /proc/sys/kernel/core_uses_pid
echo /tmp/core > /proc/sys/kernel/core_pattern
Fri, 04 Jun 2010 00:00:00 +0200 <![CDATA[Beavering away..]]> Beavering away..

Blatantly stolen from a nice picture about a busy beaver :)

Wed, 12 May 2010 00:00:00 +0200 <![CDATA[Real-time and Embedded Systems, Call Flows and Object Oriented Design]]> Real-time and Embedded Systems, Call Flows and Object Oriented Design

For lots (and I mean lots) of telecom and TCP/IP sequence diagrams check this site:

Also a lot of OO design and patterns there, so if I ever get bored….;-)

Mon, 03 May 2010 00:00:00 +0200 <![CDATA[Abstracted list of tips from “The Pragmatic Programmer”]]> Abstracted list of tips from “The Pragmatic Programmer”

Don’t want to read the whole book? :)

Here is a list describing the most important points:

Wed, 28 Apr 2010 00:00:00 +0200 <![CDATA[Development Environment Tips]]> Development Environment Tips

Nice article about setting up a good workspace for us software developers:

Wed, 28 Apr 2010 00:00:00 +0200 <![CDATA[Profiling STL added in gcc 4.5]]> Profiling STL added in gcc 4.5

New gcc 4.5 adds a special profile mode to indicate problems with your program’s usage of the STL library:
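From what I’ve read, it should be just a matter of recompiling with -D_GLIBCXX_PROFILE and running the program; advice is then reported when the program exits. A minimal sketch (my own, untested guess at the workflow):

// Sketch (my own, untested): compile with profile mode enabled, e.g.
//    g++ -D_GLIBCXX_PROFILE stl_test.cpp -o stl_test
// then run it; libstdc++ should report its advice at program exit.
#include <vector>

int main()
{
    std::vector<int> v;
    for (int i = 0; i < 10000; ++i)
        v.insert(v.begin(), i); // front insertion: the kind of pattern profile mode should flag
    return 0;
}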

Can’t wait to try it :)

Wed, 14 Apr 2010 00:00:00 +0200 <![CDATA[Graphical explanation of boost::bind]]> Graphical explanation of boost::bind

Chris Kohlhoff has a nice explanation about the often misunderstood functionality of boost::bind (or std::bind in the upcoming standard). Binding is something that is used quite often so it’s well worth the time to try and understand it :)

Wed, 07 Apr 2010 00:00:00 +0200 <![CDATA[Lots of free linux books]]> Lots of free linux books

A nice overview of free linux books can be found here:

  1. 20 of the Best Free Linux Books
  2. 12 More of the Best Free Linux Books
Wed, 07 Apr 2010 00:00:00 +0200 <![CDATA[Dealing with file descriptor leak in Eclipse + CDT]]> Dealing with file descriptor leak in Eclipse + CDT

Current eclipse builds have the very nasty tendency to leak file descriptors to /usr/share/mime/globs (especially when using CDT). After a while you hit the max fd limit and eclipse will refuse to save your file (and workbench state).

Luckily, under linux you can save the day using ‘gdb’:

  1. Find offending fd’s:
cd /proc/`pidof java`/fd
ls -la|grep globs

Write down about 5 fd numbers

  2. Attach with GDB:
gdb -p `pidof java`

For each fd enter:

p close(XXX)

where XXX is the fd number. Press enter after each line

  3. Now type ‘c’ to continue and ‘quit’ to exit
  4. Quickly save your stuff in Eclipse and exit/restart


Thu, 01 Apr 2010 00:00:00 +0200 <![CDATA[Removing elements that match a specified criterion from a vector]]> Removing elements that match a specified criterion from a vector

Deleting items that match a specific criterion from a C++ vector is not as straightforward as it seems. The solution? First define a function that determines if the element should be deleted:

bool bla(child_t &child)
{
        return child.first == -1;
}

And then use our friends erase and remove_if:

std::vector<child_t> children;
children.erase(std::remove_if(children.begin(), children.end(), bla), children.end());

Easy right….:)
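For reference, here is a complete compilable version (my own sketch; I’m assuming child_t is a pair-like type with a ‘first’ member, as the snippet suggests):

// Complete example (my sketch): child_t is assumed to be a std::pair-like
// type with a 'first' member, matching the fragment above.
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

typedef std::pair<int, int> child_t;

bool bla(child_t &child)
{
        return child.first == -1;
}

int main()
{
        std::vector<child_t> children;
        children.push_back(std::make_pair(1, 10));
        children.push_back(std::make_pair(-1, 20)); // should be removed
        children.push_back(std::make_pair(2, 30));

        children.erase(std::remove_if(children.begin(), children.end(), bla),
                       children.end());

        std::cout << children.size() << std::endl; // prints 2
        return 0;
}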

Thu, 11 Mar 2010 00:00:00 +0100 <![CDATA[Popular mechanics]]> Popular mechanics

Really cool stuff, all issues of the US magazine ‘Popular Mechanics’ going back to 1926 are available for free on google books:

The scans are complete and of really good quality, and it’s also nice looking at the ads from the 1950s :)

Thu, 11 Mar 2010 00:00:00 +0100 <![CDATA[Mocking about]]> Mocking about

Lately I’ve been fascinated by the opportunities a mock framework gives me to speed up unit test development (see wikipedia for a more detailed explanation).

So far, I’ve tried to write my own using the C++ preprocessor and I almost succeeded. The following works just fine for simple virtual members:

/** Define a mock function with 1 parameter */
#define MOCK1(returntype, name, type1, parm1, retval)\
      DefaultZeroInt _##name##_callcount;\
      type1 _##name##_##parm1;\
      /* Function definition */\
      virtual returntype name(type1 parm1)\
      {\
         ++_##name##_callcount;\
         _##name##_##parm1 = parm1;\
         return retval;\
      }\
      /* Mock utility functions */\
      void name##_reset()\
      {\
         _##name##_callcount = 0;\
         _##name##_##parm1 = type1();\
      }\
      bool name##_called_once(type1 parm1)\
      {\
         return ((_##name##_callcount == 1) && (_##name##_##parm1 == parm1));\
      }\
      bool name##_last_params_match(type1 parm1)\
      {\
         return ((_##name##_callcount > 0) && (_##name##_##parm1 == parm1));\
      }\
      int name##_call_count()\
      {\
         return _##name##_callcount;\
      }

class DefaultZeroInt
{
public:
   DefaultZeroInt() : val(0) {}
   operator int() { return val; }

   DefaultZeroInt& operator= (const int f)
   {
      val = f;
      return *this;
   }

   DefaultZeroInt& operator++ (void)
   {
      ++val;
      return *this;
   }

private:
   int val;
};

And of course there are variations for multiple parameters (just expanded versions of this 1-argument example). This works fine for arguments like string, int, etc. But not for pointer parameters or reference parameters!
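To make the usage concrete, here is a little sketch of my own showing how a mock built with MOCK1 could look in a test. Logger and MockLogger are made-up names; this assumes the macro and DefaultZeroInt from above are in scope:

// Sketch (my own usage example): mock a simple interface with MOCK1.
#include <cassert>
#include <iostream>
#include <string>

class Logger
{
public:
   virtual ~Logger() {}
   virtual int log(std::string line) = 0;
};

class MockLogger : public Logger
{
public:
   MOCK1(int, log, std::string, line, 0)
};

int main()
{
   MockLogger mock;
   Logger &logger = mock;

   logger.log("hello");

   assert(mock.log_called_once("hello"));
   assert(mock.log_call_count() == 1);
   std::cout << "mock checks passed" << std::endl;
   return 0;
}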

So after some fun time trying to make this work, a little voice in my head started telling me to look around for an existing library. I don’t consider it wasted time to have tried myself, it will help me evaluate other libs better :)

As far as I’m concerned, there are two serious candidates:

  1. Google mock objects (new BSD license)
  2. Hippomocks (LGPL)

Both are actively maintained. My evaluation:

  • Gmock requires linking to their library code, Hippomocks is header-only and therefore very easy to include.
  • Gmock includes gtest lib and that defines ‘TEST’. This rules out a combination with UnitTest++ because of symbol conflicts. The trick is to include gmock first, then #undef TEST and then include UnitTest++. Not nice!
  • Gmock by default doesn’t require the expectations to be met in the order they are specified. It’s possible but requires extra configuration. Hippomocks does this by default, which is nicer in my opinion.
  • Gmock uses macros to define mock method implementations, Eclipse can expand and autocomplete them just fine. Hippomocks is a bit trickier so no autocomplete there.
  • Gmock is tightly coupled with gtest, some pre-main() handholding is necessary to make it work with my current unit test framework of choice (UnitTest++). It’s documented quite well so this is not a big problem.
  • GMock documentation is extensive and includes a big tutorial and a big cookbook for almost any scenario you might want to implement. Hippomocks documentation also includes a very nice, short and to-the-point tutorial. I was up and running much quicker with Hippomocks.
  • Specification of mock method behaviour is more versatile in Gmock but also a little bit more complex than Hippomocks. Somehow the Hippomocks way feels a little bit more natural.

Conclusion: I will try Hippomocks, it seems like an excellent fit for my usage scenarios at the moment. If I need something more advanced, a migration to Gmock could be an option.

Thu, 28 Jan 2010 00:00:00 +0100 <![CDATA[How to temporarily redirect std::cin from a std::stringstream]]> How to temporarily redirect std::cin from a std::stringstream

This is nice for programmatically testing methods that rely on std::cin:

#include <iostream>
#include <sstream>
#include <string>

int main()
{
   // Prepare fake stringstream
   std::stringstream fake_cin;
   fake_cin << "Hi";

   // Backup and change std::cin's streambuffer
   std::streambuf *backup = std::cin.rdbuf(); // back up cin's streambuf
   std::streambuf *psbuf = fake_cin.rdbuf(); // get the stringstream's streambuf
   std::cin.rdbuf(psbuf); // assign streambuf to cin

   // Read something, it will come from our stringstream
   std::string input;
   std::cin >> input;

   // Verify that we actually read the right text
   std::cout << input << std::endl;

   // Restore old situation
   std::cin.rdbuf(backup); // restore cin's original streambuf

   return 0;
}
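A small refinement (my own sketch, not part of the original snippet): wrap the backup/restore dance in a RAII guard, so std::cin is restored even when a test throws:

// Sketch: RAII guard that redirects std::cin to a given stream buffer
// and restores the original streambuf on destruction.
#include <iostream>
#include <sstream>
#include <string>

class CinRedirect
{
public:
   explicit CinRedirect(std::streambuf *replacement)
      : backup(std::cin.rdbuf(replacement)) {}
   ~CinRedirect() { std::cin.rdbuf(backup); }
private:
   std::streambuf *backup;
};

int main()
{
   std::stringstream fake_cin;
   fake_cin << "Hi";

   CinRedirect guard(fake_cin.rdbuf()); // cin now reads from the stringstream

   std::string input;
   std::cin >> input;
   std::cout << input << std::endl; // prints "Hi"
   return 0;
   // guard's destructor restores the real std::cin here
}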
Tue, 26 Jan 2010 00:00:00 +0100 <![CDATA[Preventing shared_ptr from eating your singleton objects]]> Preventing shared_ptr from eating your singleton objects

Suppose you have a simple (non-threadsafe) singleton object like this:

class singleton
{
public:
   static singleton *getInstance()
   {
      if (instance == NULL)
         instance = new singleton();
      return instance;
   }

   void doStuff()
   {
      std::cout << "doing stuff" << std::endl;
   }

private:
   singleton()   {  }
   static singleton *instance;
};

singleton *singleton::instance = NULL;

Invoking this is easy:

singleton *s1 = singleton::getInstance();

What happens if you decide to switch to boost::shared_ptr all over your app and, full of enthusiasm, convert your singleton pointer:

boost::shared_ptr<singleton> s1(singleton::getInstance());

This will work until you reach the end of your scope, the shared_ptr will then call the destructor on your object! Wait a minute, that’s not what we want to happen here, right?

The solution is actually quite simple. The singleton object here has a default public destructor (generated by the compiler). So we just add a private destructor like this:

class singleton2
{
public:
   static singleton2 *getInstance()
   {
      if (instance == NULL)
         instance = new singleton2();
      return instance;
   }

   void doStuff()
   {
      std::cout << "doing stuff2" << std::endl;
   }

private:
   singleton2()   {  }
   ~singleton2()  { /* THIS WILL PREVENT USAGE OF SHARED_PTR!! */ }
   static singleton2 *instance;
};

singleton2 *singleton2::instance = NULL;

The cool thing is that this forces you to use a regular pointer (that will still work fine). If you try to use the class with the private destructor in combination with a shared_ptr the compiler will automatically complain:

shared_test.cpp: In function 'void boost::checked_delete(T*) [with T = singleton]':
/usr/include/boost/detail/shared_count.hpp:86:   instantiated from 'boost::detail::shared_count::shared_count(Y*) [with Y = singleton]'
/usr/include/boost/shared_ptr.hpp:149:   instantiated from 'boost::shared_ptr<T>::shared_ptr(Y*) [with Y = singleton, T = singleton]'
shared_test.cpp:108:   instantiated from here
shared_test.cpp:27: error: 'singleton::~singleton()' is private
/usr/include/boost/checked_delete.hpp:34: error: within this context

Cool stuff! It’s not pretty but at least it keeps your singletons safe…
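Another escape hatch (my own suggestion, assuming the singleton class from above): hand shared_ptr a deleter that does nothing, so it can never delete the instance behind your back:

// Sketch: a no-op deleter keeps shared_ptr from ever destroying the
// singleton. Assumes the 'singleton' class from the post above.
#include <boost/shared_ptr.hpp>

struct null_deleter
{
   void operator()(void const *) const {}
};

int main()
{
   boost::shared_ptr<singleton> s1(singleton::getInstance(), null_deleter());
   s1->doStuff(); // safe: the instance survives when s1 goes out of scope
   return 0;
}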

Fri, 23 Oct 2009 00:00:00 +0200 <![CDATA[Thread safety of std::string]]> Thread safety of std::string

A thing that has been in the back of my mind for quite a while: is std::string threadsafe? Normal STL containers are not but std::string internally uses a copy-on-write mechanism which can screw things up (if the bookkeeping is not adequately protected). Thankfully, StackOverflow comes nicely to the rescue here:

Tue, 20 Oct 2009 00:00:00 +0200 <![CDATA[Some thoughts on unit testing]]> Some thoughts on unit testing

A lot has been said about Test Driven Design (TDD) and unit testing in general. In my opinion it is quite simple:

  1. When you write regular code you usually write some code anyway to exercise it (to see if it’s working properly). Instead of throwing away this code when development is finished, just wrap it in a unit test! It gives you at least a basic coverage and it is not much extra work..
  2. Start at the bottom, test small parts first and then bigger assemblies of small parts. Don’t fall into the trap of depending on external systems (been there, done that). That’s a good signal that the code needs to be refactored! External functionality should be put behind a separate interface that can be faked during unit testing (if possible).
  3. Any test you write is better than nothing. So when starting with an existing codebase, just write tests for parts you have to change. In a while the coverage will automatically go up (or maybe help it along by implementing some basic tests).

Don’t be religious about it :)

Interesting book I’m reading right now: Working effectively with legacy code by Michael Feathers

Fri, 16 Oct 2009 00:00:00 +0200 <![CDATA[Interesting C++/Boost/software engineering blogs]]> Interesting C++/Boost/software engineering blogs

Here are the RSS feeds for a number of blogs I follow to know what’s going on in the world of C++/Boost development (and software engineering in general):

Fri, 16 Oct 2009 00:00:00 +0200 <![CDATA[Function pointer to class member with boost::function]]> Function pointer to class member with boost::function

A function pointer to a class member is a problem that is really suited to using boost::function. Small example:

#include <boost/function.hpp>
#include <iostream>

class Dog
{
public:
   Dog (int i) : tmp(i) {}
   void bark ()
   {
      std::cout << "woof: " << tmp << std::endl;
   }

private:
   int tmp;
};

int main()
{
   Dog* pDog1 = new Dog (1);
   Dog* pDog2 = new Dog (2);

   // Important: the Dog* parameter is actually used to set 'this' when the member function is called
   boost::function<void (Dog*)> f1 = &Dog::bark;

   f1(pDog1); // prints "woof: 1"
   f1(pDog2); // prints "woof: 2"

   delete pDog1;
   delete pDog2;
   return 0;
}

Thu, 15 Oct 2009 00:00:00 +0200 <![CDATA[std::map with pointers]]> std::map with pointers

Sometimes I want to use a std::map to store pointers to objects. The problem however is that this does not work:

std::map<std::string, char*> map1;
if (map1["key"] != NULL)
{
  // object seems to exist, this will not work though..
}

A raw pointer carries no information about whether it was ever deliberately set: any uninitialized pointer you store looks just like a valid one, and operator[] even silently inserts a default-constructed value for a missing key. How I wish there was a pointer that can make a distinction between valid/invalid. Hold on…

std::map<std::string, boost::shared_ptr<char> > map2;
if (map2["key"])
{
   // object really exists! hooray
}

Why does this work? Well that’s because boost::shared_ptr has an implicit conversion operator to bool that tells us if the shared_ptr was properly assigned a value (or not). The default value is ‘false’ because it’s an empty shared_ptr. Problem solved!
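One caveat (my addition): map2["key"] itself inserts an empty shared_ptr for a missing key. If you want a lookup that leaves the map untouched, use find(). A minimal sketch:

// Sketch (my addition): find() does a pure lookup and, unlike
// operator[], never inserts an empty entry into the map.
#include <map>
#include <string>
#include <boost/shared_ptr.hpp>

int main()
{
    std::map<std::string, boost::shared_ptr<char> > map2;

    std::map<std::string, boost::shared_ptr<char> >::const_iterator it =
        map2.find("key");
    if (it != map2.end() && it->second)
    {
        // object really exists and the pointer is set
    }
    return 0;
}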

Thu, 15 Oct 2009 00:00:00 +0200 <![CDATA[C++ static checking]]> C++ static checking

As a C++ programmer I’m often a bit jealous of my colleagues who use Java. Their static checking tools (PMD, Findbugs) are really nice!

For C++ the number of checkers available is pretty low… I could only find one actually, but it is pretty nice. Meet Cppcheck

It seems a reasonable tool and has already pointed out some mainly style-related issues. Great stuff!

Fri, 02 Oct 2009 00:00:00 +0200 <![CDATA[Update problems with Eclipse Galileo 3.5.1 SR1]]> Update problems with Eclipse Galileo 3.5.1 SR1

I tried to install some features today on my bare install of the newly released Eclipse 3.5.1 and got some very nasty messages about:

“No repository found containing: osgi.bundle,org.eclipse.jdt.core,3.5.1.v_972_R35x”

Solution: Uncheck “Contact all update sites during install to find required software” box.

Thanks to

Mon, 28 Sep 2009 00:00:00 +0200 <![CDATA[Easy linux install from USB flash stick]]> Easy linux install from USB flash stick

Tired of having to hunt around for instructions on installing Linux from a USB flash stick (except Ubuntu of course, which has a tool right on their live CD)? I found the answer:

This handy little tool will download the ISO of your choice (the list is huge) and format your USB stick so that you only have to boot from it!

Excellent stuff, will come in really handy for installing Linux on my nettop without optical drive ;-)

Wed, 16 Sep 2009 00:00:00 +0200 <![CDATA[97 things]]> 97 things

O’Reilly is adding another ‘97 things’ book, this time about things programmers should know. Interesting reading here:

Mon, 07 Sep 2009 00:00:00 +0200 <![CDATA[The four phases of <insert random library here> integration]]> The four phases of <insert random library here> integration

I’m currently working on integrating a 3rd party library in my own code and today I realized that there are 4 typical phases in such a process (at least for me):

  1. Try to understand library
  2. Fail at step 1, try to write my own. How hard can it be right?
  3. Realise that step 2 is actually quite a lot of work and that my solution is even worse than the original library.
  4. Gain more appreciation for the original library and start to use it (mixed with the good parts of the stuff I wrote myself) :)
Wed, 17 Jun 2009 00:00:00 +0200 <![CDATA[Creating debian packages]]> Creating debian packages

Lately I’ve been interested in creating my own .deb files for easy distribution of my application. These two articles give a good basic tutorial:


More info can also be found here:

Interesting stuff, and basically dh_make will make importing an existing makefile very easy. Afterwards just edit debian/control and run dpkg-buildpackage -rfakeroot

Thu, 28 May 2009 00:00:00 +0200 <![CDATA[Boost C++ course]]> Boost C++ course

Just returned from a 2-day overview course of the Boost C++ library at Datasim in Amsterdam. It was a real eye-opener and I was fortunate enough to be the only student. The biggest problem with Boost in my opinion is that the documentation is mostly pretty sparse (lacking some simple examples) so this course is a nice way to get to know what’s out there and to do some basic things with each library.

I’ve learned a lot and had a great experience, I can highly recommend this course!

Fri, 15 May 2009 00:00:00 +0200 <![CDATA[Integrated intel graphics on Linux]]> Integrated intel graphics on Linux

A rather nice posting over at Heise UK about the current mess that the linux intel graphics drivers are in:–/features/113196

Tue, 05 May 2009 00:00:00 +0200 <![CDATA[C++ FAQ, coding style]]> C++ FAQ, coding style

Interesting reading material:

C++ coding guidelines from google:

Mon, 27 Apr 2009 00:00:00 +0200 <![CDATA[Guide to JACK audio daemon source code]]> Guide to JACK audio daemon source code

Interesting reads:

  1. Jack v1 source guide
  2. Jack v2 source guide

JACK is the one thing that makes Linux audio tolerable, as opposed to some other projects starting with ‘A’ and ending with ‘LSA’ which give me continuous headaches due to their complexity :)

Mon, 20 Apr 2009 00:00:00 +0200 <![CDATA[C++ mumblings]]> C++ mumblings

Well the last few days have been an interesting investigation into the details of popen() and system() and all the happy side effects when you use them from a multithreaded application. I have this strange feeling that the original unix guys really want you to use processes instead of threads…

On another subject, I’ve stumbled upon something today. It seems like an interesting library for threading/sockets/etc., perhaps it is better documented than the regular boost stuff (which is powerful but for example in the case of boost::asio very badly documented)…

On a finishing note, the guys at Dr. Dobbs have put a really interesting issue of Dr. Dobbs Digest online as a PDF. Lots of stuff about multithreading and concurrency. It seems the future is small singlethreaded processes that communicate via message passing…which suits me fine :)

Mon, 20 Apr 2009 00:00:00 +0200 <![CDATA[Copying public SSH key to a server]]> Copying public SSH key to a server

Tired of ssh’ing into a machine, doing ‘mkdir -m700 .ssh’ and running scp to copy over your public key file? As with all things on Unix, somebody has already thought about it and provided a solution 1 or 2 decades before you even considered the problem…

ssh-copy-id -i .ssh/ username@server


Mon, 06 Apr 2009 00:00:00 +0200 <![CDATA[Some useful (Dutch) forums for my car]]> Some useful (Dutch) forums for my car

Because I don’t want to forget here are some useful sites about doing some self-maintenance on my VW Golf MKIV:

Fri, 27 Mar 2009 00:00:00 +0100 <![CDATA[Cool trick to get current TID (since gettid is not implemented on debian it seems)]]> Cool trick to get current TID (since gettid is not implemented on debian it seems)

The gettid() call doesn’t work on my system so I found this:

printf("The ID of this of this thread is: %ld\n", (long int)syscall(224));

This number can be found like this:

benjamin@benjamin-laptop:~$ grep gettid /usr/include/asm/unistd_32.h
#define __NR_gettid             224

Ripped from this thread:
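A more portable variant (my addition): <sys/syscall.h> defines a SYS_gettid constant, so you don’t have to hard-code 224, which is specific to 32-bit x86:

// Sketch (my addition): resolve the TID via the SYS_gettid constant
// instead of a hard-coded per-architecture number.
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    printf("The ID of this thread is: %ld\n", (long int)syscall(SYS_gettid));
    return 0;
}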

Fri, 06 Mar 2009 00:00:00 +0100 <![CDATA[Fun with realtime scheduler (3)]]> Fun with realtime scheduler (3)

Well we figured out how to give our process SCHED_FIFO priority last time (see here) right? But then I started wondering about multithreaded apps and if the call to sched_setscheduler() works for the complete process or only for the current thread.

So I created a small test program with a main loop and a small thread that raises its prio to SCHED_FIFO. The main thread runs at unmodified prio. The results?

benjamin@benjamin-laptop:~$ ps -eLO pid,tid,lwp
24071 24071 24071 24071 S pts/9    00:00:00 ./a.out
24071 24071 24072 24072 S pts/9    00:00:00 ./a.out

So we have two thread IDs: 24071 and 24072. Here’s the result from chrt:

benjamin@benjamin-laptop:~$ chrt -p 24071
pid 24071's current scheduling policy: SCHED_OTHER
pid 24071's current scheduling priority: 0
benjamin@benjamin-laptop:~$ chrt -p 24072
pid 24072's current scheduling policy: SCHED_FIFO
pid 24072's current scheduling priority: 99

So there we have our answer, the sched_setscheduler() call only works for the current TID, not for all threads…
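As a side note (my addition): if you want to raise just the calling thread from code, pthread_setschedparam() is the per-thread way to do the same thing. A minimal sketch, assuming you have the privileges for SCHED_FIFO:

// Sketch (my addition): give only the calling pthread SCHED_FIFO
// priority 99. Needs root or an rtprio rlimit; link with -lpthread.
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

void make_current_thread_realtime(void)
{
    struct sched_param param;
    param.sched_priority = 99;

    int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (rc != 0)
        printf("pthread_setschedparam failed: %d\n", rc);
}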

Fri, 06 Mar 2009 00:00:00 +0100 <![CDATA[boost::shared_ptr and boost::shared_array]]> boost::shared_ptr and boost::shared_array

boost::shared_ptr is an excellent library, life is so much easier with it (but that is nothing new right…)

Anyway, I wanted to do this:

typedef boost::shared_ptr<short> MyPointer;
MyPointer bla(new short[500]);

That compiles but is not legal! We use new[] and shared_ptr uses the regular ‘delete’ instead of ‘delete[]’. Oh no!

So what’s the solution? Just use boost::shared_array instead!

typedef boost::shared_array<short> MyPointer;
MyPointer bla(new short[500]);

Now the new[] and delete[] calls match..
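For completeness (my addition): you can also keep shared_ptr if you hand it a custom deleter that calls delete[]. A minimal sketch:

// Sketch (my addition): shared_ptr with a deleter that calls delete[],
// another way to make the new[]/delete[] calls match.
#include <boost/shared_ptr.hpp>

struct array_deleter
{
    void operator()(short *p) const { delete[] p; }
};

int main()
{
    boost::shared_ptr<short> bla(new short[500], array_deleter());
    return 0;
}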

See also for a nice article about shared_ptr / shared_array.

Wed, 25 Feb 2009 00:00:00 +0100 <![CDATA[C++ inheritance access specifiers]]> C++ inheritance access specifiers

The always interesting EmptyCrate blog has a nice page on C++ inheritance access specifiers. It’s good to have a nice little refresher once in a while, especially when you think you’ve reached the point where ‘protected’ or ‘private’ inheritance makes sense. It will snap you back into reality ;-)

Mon, 23 Feb 2009 00:00:00 +0100 <![CDATA[StackOverflow is cool!]]> StackOverflow is cool!

Maybe you knew the site already, but StackOverflow is a really cool site where programmers can ask questions and have them answered by other members.

In my experience, the site is full of nice tips and programming snippets. I’ve even asked a question myself (here: timekeeping in linux kernel 2.6) and somebody actually took the time to answer me. Cool stuff!

Mon, 23 Feb 2009 00:00:00 +0100 <![CDATA[Linux device drivers book]]> Linux device drivers book

Since I’m always forgetting where the free O’Reilly ‘Linux Device Drivers 3rd edition’ book can be found, here’s a link:

Chapter 7 about timing is especially interesting, even though the tricks with jiffies seem a bit weird. The book says that to get the interval between two jiffy values you can do this:

diff = (long)t2 - (long)t1;

Well if t1 is just before MAX_JIFFY_OFFSET and t2 has wrapped around (which jiffies do due to their 32bit nature) you get a nice BIG negative number. That’s not useful at all… anyway, in the meantime I’ve learned that time_after() and msecs_to_jiffies() are your best friends. The time_after/time_before macros actually take the wrapping into account.

So the tip for the day: avoid working with jiffy intervals, just let your code do stuff based on the time_before() and time_after() macros.

Tue, 17 Feb 2009 00:00:00 +0100 <![CDATA[Eclipse CDT Linux Tools Project]]> Eclipse CDT Linux Tools Project

Integration with tools like oprofile and valgrind (one of my favorites) is finally a bit easier in Eclipse since these guys released their first version :)

An overview of features can be found here:

Tue, 17 Feb 2009 00:00:00 +0100 <![CDATA[Guide to Linux sound systems]]> Guide to Linux sound systems

I came across a very interesting guide to Linux sound systems; it’s a good read and fairly up to date…

Tue, 17 Feb 2009 00:00:00 +0100 <![CDATA[Basic asio example]]> Basic asio example

I had a lot of trouble finding a really basic example (hey, the stuff I want to do is not that hard) for using the boost::asio library. This library is supposed to be the bee’s knees and will be included in C++ TR2 (and is already included in Boost) so I thought it was worthwhile to fight with it for some time…

Here’s a sample app adapted from the included blocking_tcp_echo_server.cpp

// blocking_tcp_echo_server.cpp
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// Copyright (c) 2003-2008 Christopher M. Kohlhoff (chris at kohlhoff dot com)
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <boost/bind.hpp>
#include <boost/smart_ptr.hpp>
#include "asio.hpp"

using asio::ip::tcp;

const int max_read_length = 10;
const int max_response_length = 100;

typedef boost::shared_ptr<tcp::socket> socket_ptr;

void session(socket_ptr sock)
{
        try
        {
                for (;;)
                {
                        // Reading demonstration
                        char data[max_read_length];
                        asio::error_code error;
                        size_t length = sock->read_some(asio::buffer(data, max_read_length), error);
                        if (error == asio::error::eof)
                        {
                                std::cout << "Socket closed by peer" << std::endl;
                                break; // Connection closed cleanly by peer.
                        }
                        else if (error)
                        {
                                throw asio::system_error(error); // Some other error.
                        }
                        std::cout << "Read " << length << " bytes" << std::endl;

                        // Writing demonstration
                        char response[max_response_length];
                        sprintf(response, "Read %d bytes dude it rocks\n", (int)length);
                        length = asio::write(*sock, asio::buffer(response, strlen(response)), asio::transfer_all(), error);
                        std::cout << "Written " << length << " bytes " << std::endl;
                        if (error)
                        {
                                std::cout << "Error during writing: " << error.message() << std::endl;
                        }

                        // uncomment this if you just want to echo
                        //asio::write(*sock, asio::buffer(data, length));
                }
        }
        catch (std::exception& e)
        {
                std::cerr << "Exception in thread: " << e.what() << "\n";
        }
}

void server(asio::io_service& io_service, short port)
{
        tcp::acceptor a(io_service, tcp::endpoint(tcp::v4(), port));
        for (;;)
        {
                socket_ptr sock(new tcp::socket(io_service));
                std::cout << "Blocking in accept" << std::endl;
                a.accept(*sock);
                std::cout << "Got a connection" << std::endl;

                // uncomment this to multithread
                // asio::thread t(boost::bind(session, sock));

                // only need one simultaneous thread
                session(sock);
        }
}

int main(int argc, char* argv[])
{
        try
        {
                if (argc != 2)
                {
                        std::cerr << "Usage: blocking_tcp_echo_server <port>\n";
                        return 1;
                }

                asio::io_service io_service;

                using namespace std; // For atoi.
                server(io_service, atoi(argv[1]));
        }
        catch (std::exception& e)
        {
                std::cerr << "Exception: " << e.what() << "\n";
        }

        return 0;
}
Tue, 03 Feb 2009 00:00:00 +0100 <![CDATA[Fun with realtime scheduler (2)]]> Fun with realtime scheduler (2)

Who needs chrt when you can set things yourself from C code?


#include <stdio.h>
#include <sched.h>
#include <unistd.h>

void setscheduler(void)
{
        struct sched_param sched_param;

        if (sched_getparam(0, &sched_param) < 0) {
                printf("Scheduler getparam failed...\n");
                return;
        }

        sched_param.sched_priority = 99;
        if (!sched_setscheduler(0, SCHED_FIFO, &sched_param)) {
                printf("Scheduler set to SCHED_FIFO with priority %i...\n", sched_param.sched_priority);
                return;
        }
        printf("!!!Scheduler set to SCHED_FIFO with priority %i FAILED!!!\n", sched_param.sched_priority);
}

int main()
{
        setscheduler();

        for (;;) {
                printf("PID %i sleeping..\n", getpid());
                sleep(1);
        }

        return 0;
}

gcc setscheduler.c -o setscheduler
./setscheduler
Scheduler set to SCHED_FIFO with priority 99...
PID 5193 sleeping..

To confirm that it works we run chrt in another terminal:

benjamin@laptop:~$ chrt -p 5193
pid 5193's current scheduling policy: SCHED_FIFO
pid 5193's current scheduling priority: 99
Fri, 30 Jan 2009 00:00:00 +0100 <![CDATA[Fun with realtime scheduler (1)]]> Fun with realtime scheduler (1)

Things used to be complicated with regard to setting realtime permissions on Linux (realtime-lsm and /etc/security/limits.conf). Not any more it seems!

Using chrt

If your user is in the ‘audio’ group, on 2.6.28 (or earlier) you can use this command to set or view the realtime properties of a command (it can be found in the Debian util-linux package). Example: run a process with SCHED_FIFO and prio 99:

chrt -f 99 command

Check existing process properties:

benjamin@laptop:~/$ chrt -p 4609
pid 4609's current scheduling policy: SCHED_FIFO
pid 4609's current scheduling priority: 99
Fri, 30 Jan 2009 00:00:00 +0100 <![CDATA[Fun with timers(2)]]> Fun with timers(2)

So I wrote a number of test programs. Each program uses a ‘before’ and ‘after’ timestamp retrieved by gettimeofday().

Regular usleep

Let’s start with plain usleep(). The results:

Plain usleep(10000): lowest=10000us, highest=25512us, average=10200us

That’s what we kind of expected. Especially after the previous post..

Adaptive usleep

OK, so lets be clever and instead of sleeping 10000us each time we calculate if the previous usleep() was too long. If so, deduct the extra time from the next interval. If the usleep() was too short, we add the missing time to the next interval. The results:

Adaptive usleep(10000): lowest=11us, highest=33054us, average=10100us

Wow, as you can see the timing interval goes all over the place but the average is slightly better!


It still sucks but on average we’re getting closer to the holy grail of 10000us!
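An approach that sidesteps the drift entirely (my addition, not part of the original experiment): sleep until an absolute deadline instead of for a relative interval, so one late wakeup doesn’t push all later wakeups back:

// Sketch (my addition): a drift-free 10 ms ticker. clock_nanosleep with
// TIMER_ABSTIME sleeps until an absolute point in time, so oversleeping
// one period does not shift the following periods. Link with -lrt on
// older glibc.
#include <time.h>

void ticker(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;)
    {
        next.tv_nsec += 10000000; // advance the deadline by 10 ms
        if (next.tv_nsec >= 1000000000)
        {
            next.tv_nsec -= 1000000000;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        // do the periodic work here
    }
}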

Thu, 29 Jan 2009 00:00:00 +0100 <![CDATA[Fun with timers (1)]]> Fun with timers (1)

So you’re building a C program that uses a 3rd party library which requires you to call a certain function every 10ms (10000 us)…

Seems simple right? Just do this:

usleep(10000);

This is supposed to sleep for 10000us. I’ve made a sample program that measures the difference between what we request and the actual time that usleep() sleeps.

The result is perhaps known but very annoying: Linux sleeps for at least 10ms but regularly adds a random interval. On average I end up with 10200 us of sleep time but the highest value I observed in a little test run is 19313 us. One thing is guaranteed though, the sleep time is always at least 10000 us (as mentioned in the manpage ‘man usleep’).

The reason this goes wrong is that the Linux kernel timer tick is configured to run 1000 times/second (CONFIG_HZ=1000) and, guess what, this program is not the only thing begging for CPU time. I’ve googled around and basically the advice is:

  1. Get a real RTOS –> That’s too easy
  2. Muck around with running process under SCHED_RR or SCHED_FIFO –> Been there done that but the policy mechanism in the Linux kernel has changed quite dramatically in the last years, set_rlimits is a useful tool though..

More to follow…

Thu, 29 Jan 2009 00:00:00 +0100 <![CDATA[Debian on Dell Latitude e6400]]> Debian on Dell Latitude e6400

This is a quick guide on how to install Debian on a Dell Latitude e6400 with integrated Intel GM45 graphics. Most things can be solved by downloading a new kernel :)

  1. Download netinstall Debian testing from here
  2. Install as usual, sudo -s to do the next steps as root
  3. Edit /etc/X11/xorg.conf and find the section for the video driver and edit into this:
Section "Device"
        Identifier      "intel"
        Driver          "intel"
        Option "AccelMethod" "XAA"
  4. Restart GDM
  5. The stock Debian testing kernel (2.6.26-something) doesn’t support the included wireless LAN card, so let’s install 2.6.27. Add this to /etc/apt/sources.list:
#kernel repo
deb trunk main
  6. Run
apt-get update
apt-get install linux-image-2.6.27-1-686
  7. Microcode is needed for the wireless card to work. Do this:
wget -nH -nd
tar xzf iwlwifi-5000-ucode-5.4.A.11.tar.gz
cp iwlwifi-5000-ucode-5.4.A.11/iwlwifi-5000-1.ucode /lib/firmware
  8. Reboot into your new kernel
  9. I found that the intel video driver prevents correct suspend/resume. Go and install the 00CPU script.

  10. If this still doesn’t work: upgrading to the latest stable kernel finally fixed my problems:

cd /usr/src

tar xf linux-
cd linux-

cp /boot/config-2.6.27-1-686 .config

make oldconfig

make-kpkg clean


time fakeroot make-kpkg --initrd -rev myversion1 kernel_image kernel_headers kernel_source

cd ..

dpkg -i linux-image- linux-headers-

That’s it!

Some more info can be found here (some stuff I took from there):

Thu, 29 Jan 2009 00:00:00 +0100 <![CDATA[Welcome]]> Welcome

Hi, welcome to my blog! I’m planning to put some technical stuff here in the future. Nerd-o-rama!

Thu, 29 Jan 2009 00:00:00 +0100