2011-12-14

debian pxe boot installation with dhcp (dnsmasq) running on an openwrt router

background

I recently installed debian stable/squeeze on an old-ish laptop. The machine was already running squeeze, but I wanted to upgrade the hard drive from the original 80GB to a larger 250GB drive. I had an extra 250GB drive that came with my new laptop, but I replaced it with a 500GB drive as soon as I bought it.

The old laptop's hard drive had an existing windows XP NTFS partition that I wanted to copy over to the 250GB drive before I installed debian. I didn't want to copy over the debian install from the 80GB drive. The fresh install on the old laptop was intended as a replacement for my wife's even-older laptop, which had started freezing frequently due to what I believe is a failing motherboard. My wife doesn't want any of the developer stuff that I had on my old debian system. Instead, I was going to copy her data over from her laptop. Confusing enough?

complications

I downloaded an x86 debian squeeze net-install image and copied it to a USB pen drive, but I discovered that, incredibly, my old laptop's BIOS would not boot from USB. Lame.

linux installation methods

I've installed various linux distros over the years. The most complicated one was probably a slackware installation onto a 486 laptop with no optical drive, USB, or ethernet. I started out by booting an installer bootstrap image from a 3.5" floppy disk and finished that installation by mounting the rest of the installation media over NFS using a Null-Printer parallel cable with PLIP networking.

I've also set up automated kickstart installs of centos guests on Xen servers, and I've installed SuSE servers located halfway across the USA remotely over an IPMI console. I've installed headless servers from standard boot media using a console provided by a serial null-modem cable. In addition, I've done the usual, simple installs from optical media or USB. However, I had never installed linux from a PXE boot.

Over the years, I've burned way too many linux installation CDRs that I've used once, put in a desk drawer, and then thrown away a couple of years later when they were obsolete by several versions. I hate wasting stuff, so I decided to finally get around to learning about PXE booting and stop throwing away CDRs after a single use. Caveat: I realize that, going forward, almost all computers will support booting from USB, and for 1-off installations, that's the easiest option when it is available. In any case, it was not available for my old laptop.

pxe boot setup

Preboot Execution Environment (PXE) booting uses DHCP to point booting clients at a boot image, which they then fetch over TFTP. On my home network, I use a wireless router that runs openwrt for DHCP. I also have a little fanless, low-power mini-itx x86 server that I use as a file/media server.

I wanted to configure my openwrt DHCP server to direct PXE clients to installation media on my mini-itx server (hostname == "pizza").

Here is what I had to do:

openwrt

Add this to /etc/config/dhcp:

#
#       Specify pxelinux.0 without a directory prefix
#       because we run tftpd in chroot (--secure) mode:
#
config boot
    option filename         'pxelinux.0'
    option serveraddress    '192.168.1.77'
    option servername       'pizza'
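For reference, the uci section above should be equivalent to this single raw dnsmasq directive (a hedged translation on my part; the option order is filename, servername, serveraddress):

```
# In a plain /etc/dnsmasq.conf, the same thing would look like:
dhcp-boot=pxelinux.0,pizza,192.168.1.77
```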

Restart dnsmasq:

/etc/init.d/dnsmasq restart

tftpd on pizza

I configured tftpd-hpa on pizza, my mini-itx server. I'm using openbsd-inetd as my inetd server and running tftpd from inetd, since I do not need it running all the time.

Install and prepare tftpd

sudo apt-get install tftpd-hpa
sudo mkdir -p /srv/tftp

Do not run as a standalone server:

/etc/init.d/tftpd-hpa stop
update-rc.d -f tftpd-hpa remove

Configure inetd to run tftpd. Add this to /etc/inetd.conf:

#
# We *might* want to change --timeout (default 900 seconds, or 15 minutes),
# which is how long the server keeps running after serving its last
# request before it terminates.
#
# -s or --secure (chroot on startup)
#
# -u tftp is USER that the daemon will run as (default is nobody).
#       Installing the tftpd-hpa package creates a tftp user
#
tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -u tftp --secure /srv/tftp

Restart inetd:

/etc/init.d/openbsd-inetd restart

Download the debian netboot tarball and extract it to /srv/tftp. We should then see:

ls -1 /srv/tftp/

debian-installer
pxelinux.0
pxelinux.cfg
version.info
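Before involving a PXE client, it's worth sanity-checking the TFTP server from another machine on the LAN. A quick sketch (this assumes the tftp-hpa client is installed on the test machine, and uses the 192.168.1.77 address from the dnsmasq config):

```shell
# Fetch the boot loader the same way a PXE ROM would:
cd /tmp
tftp 192.168.1.77 -c get pxelinux.0
ls -l pxelinux.0    # should be a small file, not zero bytes
```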

add options to boot SystemRescueCD with PXE

The debian installer netboot tarball does not include fdisk, which I needed. I also wanted to use ntfsclone to copy my NTFS partition. Therefore, I downloaded SystemRescueCD to transfer my windows XP partition before proceeding with the debian installation.

NOTE: SystemRescueCD has become bloated. I've used it in the past, and years ago it used to be about 100MB. It now includes xorg and a bunch of GUI tools, and the size of the image is over 300MB. This is fine when booting from physical storage (USB drive or optical media), but it's slow to transfer an image this size over a LAN. I used SystemRescueCD to transfer the windows XP partition from my old hard drive to the new one during my installation, but next time I'll try the PLD rescue cd instead.

I downloaded it, copied the iso to my server, mounted it as a loopback, and copied the contents to /srv/tftp/system-rescue-cd/system-rescue-cd-2.4.0/:

### Do all this stuff as root
mkdir -p /mnt/tmp
mkdir -p /srv/tftp/system-rescue-cd/system-rescue-cd-2.4.0
cd /srv/tftp/system-rescue-cd
ln -s system-rescue-cd-2.4.0 current
cd current
mount -o loop -tiso9660 /dev/shm/systemrescuecd-x86-2.4.0.iso /mnt/tmp
cp -a /mnt/tmp/* .

Then I made an entry for systemrescuecd in my PXE boot configuration, which I put in /srv/tftp/sysrescue32.cfg:

label sysrescue32
        menu label ^sysrescue32
        kernel system-rescue-cd/current/isolinux/rescuecd
        append vga=788 initrd=system-rescue-cd/current/isolinux/initram.igz

Then I added a line for that config file to /srv/tftp/pxelinux.cfg/default:

# D-I config version 2.0
include debian-installer/i386/boot-screens/menu.cfg

### This is the line I added to the default config:
include sysrescue32.cfg

default debian-installer/i386/boot-screens/vesamenu.c32
prompt 0
timeout 0

imaging and installation

I'm not going to cover the actual debian installation in detail. Once I had openwrt's dnsmasq DHCP set up to point to the tftpd running via inetd on pizza, all I had to do to PXE boot was hit F12 when I booted the old laptop to select PXE as the boot option.

Then I was greeted with a debian splash screen, from which I could select either one of the debian boot options or my sysrescue32 SystemRescueCD boot option.

To complete my installation, I did (roughly):

  • booted into SystemRescueCD and mounted my old hard drive with a USB enclosure
  • created a single NTFS partition on the new drive with fdisk
  • imaged the old NTFS partition over the new partition with ntfsclone
  • created a single VFAT (FAT32 LBA type 0x0C) partition with fdisk to use for shared data between windows XP and linux
  • rebooted into the debian installer and installed debian (installing GRUB2 to the MBR)
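The imaging steps above, roughly as commands. The device names here are assumptions (old disk in the USB enclosure as /dev/sdb, new internal disk as /dev/sda); check the fdisk -l output carefully before running anything like this, because ntfsclone will happily overwrite the wrong partition:

```shell
# Identify the disks first (assumed: /dev/sdb = old disk in USB enclosure,
# /dev/sda = new internal disk). Verify before proceeding!
fdisk -l

# Clone the old NTFS partition onto the new, larger partition.
# ntfsclone copies only the blocks that are in use, so it beats dd here.
ntfsclone --overwrite /dev/sda1 /dev/sdb1

# Grow the NTFS filesystem to fill the larger partition:
ntfsresize /dev/sda1
```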

2011-12-13

readthedocs.org

I just discovered readthedocs.org today, and it's awesome! The organization behind the site will build and host documentation for any open source project that uses sphinx/reStructuredText for its documentation.

I'm a fan of reST (reStructuredText). Its semantic processing of whitespace can be confusing when you're first starting out, and the format used for tables can be cumbersome if your tables are complex, but you get used to it.

I write these blog posts in reST, I write notes in reST, and recently I started using reST + sphinx to document an open source project.

I considered using github's wiki feature for project documentation, but after reading about it, I decided not to use it. As of this writing, it does not support auto-building documentation in HTML format from markup formats defined in your main project source tree. You can use git for your project wiki/docs, but github will create a separate git repo for the documentation.

With sphinx, you can take your reST sources and build documentation in multiple formats. readthedocs.org supports html, epub, and pdf out of the box!

readthedocs.org uses a build system that scans your project source tree to find the conf.py at the root of your sphinx documentation. My project had the sphinx docs rooted in src/site/sphinx, with the conf.py in the source subdirectory, and ReadTheDocs was able to find this and build my documentation without requiring me to specify the paths. All I had to do was:

  • sign up for an account on readthedocs.org
  • configure a new project on ReadTheDocs and specify the read-only-access url of my github project in the ReadTheDocs project configuration form.

And that was it! ReadTheDocs checked out my project source, built the documentation in html format, and published it here.

If you customize your sphinx layout, you have to contact the ReadTheDocs team to whitelist your custom configuration (this may be automated in the future). Otherwise, ReadTheDocs will build your documentation with some default sphinx settings.

ReadTheDocs also provides a unique web service endpoint that you can call to rebuild your documentation. Github provides a custom post-commit hook for ReadTheDocs that can be configured on your github project page by navigating to:

admin -> service hooks -> ReadTheDocs

Once you do that, your documentation will be rebuilt automatically every time you push code to master. Otherwise, by default, the documentation will be rebuilt nightly.
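If you'd rather trigger builds yourself instead of (or in addition to) the github hook, the endpoint can be hit with curl. The URL below is hypothetical; the real one is shown in your ReadTheDocs project admin page:

```shell
# Hypothetical project slug "myproject"; substitute the build URL from
# your project's admin page on readthedocs.org:
curl -X POST http://readthedocs.org/build/myproject
```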

2011-12-06

anyremote j2me client

I purchased a couple low-cost, used j2me-capable phones on eBay recently for a j2me development project. While watching a movie with my wife on our desktop computer the other day (we do not have a TV) and getting up repeatedly from the couch to walk over to the desktop to adjust the volume, I got the idea to write a simple j2me MIDlet that could use a phone's bluetooth interface to function as a remote control for the desktop.

It turns out that there is already an open source project for that called anyRemote. There are packages for it in debian stable:

sudo apt-get install anyremote

I know that one of my phones, a Nokia 5130-c2 with T-Mobile firmware, will throw a SecurityException if an unsigned MIDlet tries to access any APIs that require permissions (including bluetooth). The GNU autotools-based build for anyremote-j2me-client does not include options to sign your MIDlet. Since I already wrote a portable ant-based build system for another j2me MIDlet that includes a step to sign a jad and also ensures that jar MANIFEST and jad metadata are consistent, I ported my build scripts to the anyremote j2me client.

I published my build of the anyremote-j2me-client on github.

If you will be running the anyremote j2me client on a Nokia and need to sign your MIDlet, check out my previous blog post on installing self-signed code-signing certificates on nokia s40 handsets.

It took a little trial and error to get the client working with VLC. I'm pasting my notes below.

vlc with anyremote

I tried the ganyremote gtk client for anyremote, but I didn't have any luck with it. Having said that, I spent no more than a minute trying to get it to work with VLC, and I did not RTFM at all.

I figured out how to use the console-based anyremote server after a quick scan of the manpage, and that's what I document below.

First we need to configure VLC to play a media file, and run an embedded HTTP server on host:port localhost:8080 to accept commands from remote clients:

vlc -I http --http-host localhost:8080 mymovie.avi

Then we need to configure the anyremote server to listen on our bluetooth interface, using a configuration customized for remote VLC control. In the following example, the anyremote server listens on bluetooth channel 19:

anyremote -s bluetooth:19 -f /usr/share/anyremote/cfg-data/Server-mode/vlc.cfg

To find a list of cfg files installed with anyremote, run:

dpkg -L anyremote-data |grep cfg

Down the road, I will play around with customizing the config file and store my modified file in some subdirectory of $HOME.

As root, we need to make our bluetooth adapter visible to external bluetooth client scans:

sudo hciconfig hci0 piscan

Once the anyremote server is running and our hci0 interface allows remote clients to scan the channels, the anyremote-j2me-client can search for peers, find the anyremote server, and then connect to the service.

Then the remote control interface will be launched on the client, and we can pause, stop, fast forward, rewind, and adjust volume.

anyremote-j2me-client build for nokia 5130c2

My Nokia 5130 has a 240x320 resolution. The default vlc configuration for anyremote uses four rows of buttons. I found that the 48-pixel icon set is the best size for the nokia screen. I'm guessing that, for any given j2me device, you should calculate:

icon_size_max = vertical_resolution / 6

And then choose the largest available icon size that is < icon_size_max. For me that is 48 pixels, which means that I built my anyremote-j2me-client using:

ant -Dicon.size=48 -Dsign.app=true clean package
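The sizing rule above can be sketched in a few lines of shell. The list of candidate icon sizes is an assumption for illustration, not something I checked against the anyremote packages:

```shell
# Pick the largest icon set strictly smaller than vertical_resolution / 6.
vres=320                      # Nokia 5130: 240x320 screen
max=$((vres / 6))             # integer division gives 53
best=0
for s in 16 22 32 48 64 128; do
    if [ "$s" -lt "$max" ] && [ "$s" -gt "$best" ]; then
        best=$s
    fi
done
echo "icon_size_max=$max chosen=$best"   # icon_size_max=53 chosen=48
```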

2011-12-02

installing code signing certificates on j2me phones

Recently, I got a couple j2me-capable phones on ebay as test devices for some development work that I'm doing for an NGO based in West Africa. I'm developing a literacy-education tool that is intended to be deployed on low-cost j2me devices because this is the technology platform with the greatest market penetration in the area. Most people in the region do not have laptops, desktops, OLPCs, tablets, android phones or iPhones. Many people do have low-cost j2me-capable phones manufactured by nokia, samsung, LG, etc.

To test my j2me code on real hardware (not just emulators), I purchased a couple second-hand mobile phones on eBay.

I do not use this type of device day to day. I have a much fancier android phone that operates on a CDMA network. My test devices are gsm.

One is a samsung C3050 and the other is a nokia 5130 XM. Both are relatively cheap devices. My C3050 isn't totally locked-down/crippled because it does not have a T-Mobile firmware. The guy who sold it to me on eBay shipped it from NY, and he left an old T-Mobile USA SIM card in the phone, but the firmware version code indicates that it was originally sold by "China Mobile Communications Corporation".

The nokia came with a T-Mobile USA firmware. On nokia s40 phones (and maybe other j2me phones), T-Mobile modifies the core manufacturer firmwares to disallow running apps that would otherwise run in the "trusted third party" j2me security domain. The phone contains Thawte and VeriSign root x509 certificates, but T-Mo does not allow you to run apps signed with code-signing certs. I learned this from developer.nokia.com.

In any case, (many? some?) nokia s40 phones do not allow you to install your own certificates for code signing. If you do a web search, you will find several other blogs lamenting the developer-unfriendly "security model" (in scare quotes because it has more to do with securing revenue than with device security) on j2me mobile devices.

It took way too much binary diffing, staring at hex dumps, and trial and error, but I reverse engineered enough of the file format of nokia's internal binary cert DB to figure out how to install a self-signed cert that runs in what I believe is the operator protection domain. The partial analysis of the nokia ext_info.sys file on this page by Thomas Zell helped a lot.

I created a github project called dustbowl to alter ext_info.sys files.

I also posted this analysis of the ext_info.sys file format.

Here are some really great tools that enabled my analysis:

gammu:
    a fantastic project that implements the nokia FBUS protocol to support
    accessing the filesystem on supported nokia phones
vbindiff:
    a nice, lightweight console-based binary diff program
hexdump:
    a good tool for a quick, traditional hex-editor formatted view of a
    binary file

update

Since I wrote this, I found an open source project called nokicert that also supports installing code-signing certificates on nokia s40 phones. I was able to build and run it, but I haven't tried to use it to install certs on my own phone. I confirmed that the feature to read the cert DB on my phone works fine.

Nokicert installs certs to your phone's auth certificate DB, not to the user certificate DB. On my firmware/device (T-Mobile USA/nokia 5130c2), it is sufficient to install certs to the user certificate DB, and doing so incurs less risk of bricking your phone.

I reached out to Francois Kooman, the developer of nokicert, and he graciously shared his notes on reverse engineering the security model of nokia phones. He managed to figure out several things about the security model that I had not figured out from my tinkering.

2011-09-19

First Look at Querydsl with JPA 2

Today I integrated Querydsl into a java webapp that I recently started coding.

The project was already using JPA 2 (Hibernate). I used JPQL to implement an initial set of finder methods for some simple use-cases, but I decided to explore alternatives to JPQL when I reached a use-case that required me to build a query dynamically based on a search form that contains about half a dozen search fields.

In the past, before JPA 2, I used to use Hibernate's proprietary criteria API in these cases, and my search methods would dynamically build a criteria with code like this:

public List<User> findAll(UserSearchForm userSearchForm) {
    Criteria criteria = getSession().createCriteria(User.class);
    if( userSearchForm.getLastName() != null ) {
        criteria.add( Restrictions.eq("lastName", userSearchForm.getLastName()) );
    }
    // ... more conditional checks + more restrictions
    return criteria.list();
}

JPA 2 has a criteria API, and I spent a little time today reading documentation to learn the new API. After reading it over for a while, I concluded that the JPA 2 criteria API is not productive for human consumption.

If I were to implement the above sample code with the JPA 2 criteria API, I believe it would read something like this:

// NOTE: I did not test this code. This is an example typed into my text editor
// freehand.
public List<User> findAll(UserSearchForm userSearchForm) {
    CriteriaBuilder cb = em().getCriteriaBuilder();         // CriteriaBuilder: class #1
    CriteriaQuery<User> cq = cb.createQuery(User.class);    // CriteriaQuery: class #2
    Root<User> user = cq.from(User.class);                  // Root: class #3

    List<Predicate> allCriteria = new ArrayList<Predicate>();   // Predicate: class #4
    if (userSearchForm.getLastName() != null) {
        // ParameterExpression: class #5
        ParameterExpression<String> px = cb.parameter(String.class, "lastName");
        allCriteria.add(cb.equal(user.get("lastName"), px));
    }
    // ... more

    // no error checking for 0 case
    if(allCriteria.size() == 1) {
        cq.where(allCriteria.get(0));
    } else {
        cq.where(cb.and( allCriteria.toArray(new Predicate[allCriteria.size()])));
    }

    TypedQuery<User> q = em().createQuery(cq);              // TypedQuery: class #6
    if( userSearchForm.getLastName() != null )
        q.setParameter("lastName", userSearchForm.getLastName());
    // ... more
    return q.getResultList();
}

The JPA 2 criteria API also supports using the canonical metamodel to create criteria queries with type-safe component expressions. I'm not going to provide an example here because the major classes used by the typesafe API are the same (CriteriaQuery, Root, CriteriaBuilder, etc), and because examples are easy enough to find with a simple web search.

While the API is very flexible, to me the component-classes feel like AST node classes designed to be written and read by a machine interpreter rather than a human. IMHO, it fails the StringBuilder litmus test. If I rewrote it to dynamically build a JPQL query using a StringBuilder, I believe it would be shorter and more readable. When APIs designed to build expressions require the use of many classes and verbose statements even for simple cases, the resultant code is hard to read. Code that is hard to read is hard to debug, and code that is hard to debug is more likely to contain bugs (it also takes longer to write, longer to test, etc.) Basically, this API is screaming for a human-friendly DSL facade.

Enter Querydsl. The mysema blog already has a nice comparison of JPA 2 Criteria and Querydsl queries, but I'll briefly show how the above code would look in Querydsl:

// again, this is not tested. let me know if you spot an error
public List<User> findAll(UserSearchForm userSearchForm) {
    QUser user = QUser.user;
    JPQLQuery query = queryFrom(user);  // queryFrom is just new JPAQuery(em()).from(user)
    if (userSearchForm.getLastName() != null) {
        query.where(user.lastName.eq(userSearchForm.getLastName()));
    }
    // ... apply more restrictions conditionally
    return query.list(user);
}

Note that, while Querydsl uses a method-chaining fluent interface, most (all?) methods that can be invoked on Querydsl's JPQLQuery mutate the query object; you do not need to write query = query.where(predicateA); query = query.where(predicateB); ... when building a query across multiple statements. So we can chain all the method calls together into one compound statement, or we can build up a query object over multiple statements. This gives us terse, simple code for simple cases while still supporting more complex cases that require building a query dynamically based on a series of conditions.

Well, this blog post has more to do with my reaction toward the JPA 2 Criteria API than it has to do with Querydsl. Tomorrow, I'll write a post about some gotchas that I ran into with Querydsl + JPA.

2011-05-23

gnu screen xwindows clipboard integration

Part of the reason that I'm writing this blog post is to make sure I properly configured sourcecode syntax highlighting for my blog.

I created a project with utilities for authoring blog posts in reStructuredText syntax so I can write my blog articles in reStructuredText syntax using vim, preview as html with a simple vim command, and translate to html for publication.

Not too long ago, I tinkered with a very simple program to integrate the gnu screen buffer and xwindows CLIPBOARD. I ended up rewriting it in lua, perl and c as an experiment to see the relative performance differences. I pasted the 3 versions below to verify that syntax highlighting looks good with all 3 languages.

There were other examples of screen buffer <-> xwindows clipboard integration on the net, but none of them worked for me. To get it working, I had to add a 10ms sleep after copying /tmp/screen_exchange to my xwindows clipboard with xsel -bi. I'm not sure why the sleep is necessary, but I could not get it to work consistently without at least a 10ms sleep. Eventually I will peek into the screen and xsel code to better understand the apparent race condition, but for now working around the problem with a 10ms sleep is fine.

So now I can enter Copy Mode in screen, mark the start of my selection as usual with the space bar, and press . to close the selection. The final . exits Copy Mode and copies the screen buffer to my xwindows clipboard with a single keypress.

For reference, here is my ~/.screenrc:

startup_message off
hardstatus alwayslastline

### Support for "tabs" in the screen status line:
hardstatus string '%{= kG}[ %H ][  %{= kw}%-w%{= bk}%n*%t%{= kw}%+w %= %{g}][%{B} %{W}%c %{g}]'
shelltitle " "

### In copy mode, map '.' to copy the selection to the Xwindows clipboard:
# explanation:
#
#   stuff ' '               -- this enters a space character in your terminal,
#                              effectively ending Copy Mode and putting your
#                              selection in the screen paste buffer
#
#   writebuf                -- Writes the screen paste buffer out to a file
#                              (default is /tmp/screen-exchange )
#
#   exec 'screen_buff_copy' -- executes screen_buff_copy, a program that
#                              reads /tmp/screen-exchange and writes it
#                              to the Xwindows CLIPBOARD with xsel -bi
#
bindkey -m . eval "stuff ' '" "writebuf" "exec '/home/greg/scripts/screen/screen_buff_copy'"

During my initial troubleshooting, I decided to put the screen_exchange -> xsel step into an external screen_buff_copy program so I could invoke the program outside of screen to minimize the moving parts. Once I discovered the sleep workaround, I decided to try to make my screen_buff_copy program as lightweight as possible because it executes every time I want to copy text to my clipboard. I settled on lua, perl and c for my implementation trials because they have less startup overhead for short programs than, e.g., java or python.

I first wrote the program in lua:

#!/usr/bin/lua

-- NOTE: the socket library (luasocket) does not ship as part of the lua
-- std libs. You have to install it separately.
require('io')
require('socket')

pipe = io.popen("/usr/bin/xsel -bi", "w")
fp = io.open("/tmp/screen-exchange")

-- write file contents to pipe
pipe:write(fp:read("*all"))

-- close pipe and file pointer
pipe:close()
fp:close()

-- There is some race condition in the entire screen slurping/exec/whatever
-- process. Without a sleep, it does not work consistently. There is also a lag
-- of about 1 second in the stuff, writebuf and exec steps before this script is
-- properly executed and the time that the clipboard is actually set

-- sleep for 0.01 seconds:
socket.select(nil, nil, 0.01)

I was not thrilled that I had to load a library that is not part of the std libs (socket), and I quickly rewrote it in perl, curious to see perl's relative performance.

#!/usr/bin/perl

#
#   This is very very marginally more lightweight than the lua version,
#   and one nice thing about it is that it uses all perl builtins without
#   requiring external libs (in lua's case, the external socket lib is required
#   for sleep)
#

open(F, "/tmp/screen-exchange");
open(P, "| xsel -bi");

print P <F>;

close F;
close P;

### Sleep (needed due to weird race condition)
my $sleep_seconds = 0.01;
select(undef, undef, undef, $sleep_seconds );

And then I decided to write it in C just to see if the overhead was perceptibly lower. Interestingly, it really was not. The C version is not faster than the perl version, but it requires about twice as many lines of code.

#include <stdio.h>
#include <unistd.h> /* for usleep */

/*************************************************************
*
*       screen_buff_copy.c
*
*       It is totally unnecessary to do this in C. The performance
*       is almost indistinguishable from the perl version.
*
*************************************************************/

/* Sleep for 10 milliseconds (seems like the min needed for reliability) */
#define MILLISECOND_IN_USEC     1000
#define USLEEP_TIME             (MILLISECOND_IN_USEC * 10)

#define BUFF_SIZE               80

int main(int argc, char **argv) {
    FILE    *pipe;
    FILE    *fp;
    size_t  num_read; /* number of items read */

    char    buff[BUFF_SIZE];

    pipe = popen("/usr/bin/xsel -bi", "w");
    fp = fopen("/tmp/screen-exchange", "r");

    while( (num_read = fread(buff, sizeof(char), sizeof(buff), fp)) > 0 ) {
        fwrite(buff, sizeof(char), num_read, pipe);
    }

    fflush(pipe);
    fclose(fp);
    pclose(pipe);

    /* Yes, we need to sleep */
    usleep(USLEEP_TIME);

    return(0);
}

A simple benchmark program (another opportunity for syntax highlighting another lang):

#!/bin/sh

ITER=100
sbc=screen_buff_copy

for p in ./$sbc ./${sbc}.pl ./${sbc}.lua; do
    echo "Timing $ITER iterations of ${p}: "
    time for n in `seq $ITER`; do $p; done
    echo
done

And the results -- perl and c are basically identical. Most of the time is spent in the usleep anyway. Perl 5's super-low startup latency for short scripts is amazing (ditto with lua, although lua takes a tiny hit here when it loads its external socket library).

Timing 100 iterations:

          c           perl        lua
real      0m2.304s    0m2.263s    0m2.596s
user      0m0.608s    0m0.512s    0m0.692s
sys       0m0.444s    0m0.332s    0m0.432s