## Friday, December 31, 2010

### Unselectable stuff in Inkscape

I'm putting this here just in case it happens to someone else and they are freaking out. If you are editing an SVG file in Inkscape and something won't select when clicked on, as if it were locked, uneditable, read-only, unmodifiable, or immutable (all search terms I tried), then it might be that part of your image is in separate, locked layers. Layers can be locked, making their elements unchangeable until you unlock the layer in the Layers dialog.

Hope this helps.

## Friday, December 10, 2010

### Problems with cooling the MacBook Pro (1,2)

I have a first generation MacBook Pro on which, after living in shame with OS X for several years, I finally bit the bullet and installed Ubuntu. And it was certainly biting a bullet; Apple computers are some of the least supported computers on the market when it comes to Ubuntu and the Linux kernel in general. I imagine this is strongly linked to the closed nature of their design. Despite this, the gains I have seen in the capability of the machine far outweigh the annoyances of the setup, the reduced battery life, and, when using 9.04, the many graphics errors that made compiz and anything using OpenGL unusable (the new ATI drivers are much better, at least for me).

One of the most annoying problems, however, is that (at least) the first generation MacBook Pros run very hot with Ubuntu. Largely this seems to be due to a bug in the fan speed control. It does increase the fan speed with temperature, but it seems that the temperature of the CPU needs to reach upwards of 100 degrees C before you will notice the fans kicking in. At 100 degrees C, the outside of the case (aluminum, and often in contact with your skin) may reach an uncomfortable temperature. I suspect this might be the result of the fan control module reading its temperature data from the wrong sensor, i.e. scaling the fan based on the case temperature instead of the CPU, but let's ignore the root of the problem and consider how to fix it.

Several people have come up with methods of controlling the temperature, like this. I wrote my own because I thought this was too complicated, but mine has grown in complexity with time. I just have this running in the background as root.

```shell
#!/bin/bash

let 'speed=1000'

while true
do
    temp=$(cat /sys/devices/platform/applesmc.768/temp3_input)
    let 'new_speed=(1000 + (5000*((temp-35000)/(100*(60-30))))/10)'
    if [ $new_speed -lt $speed ]
    then
        # Go down in speed quickly
        let 'speed=new_speed'
    fi
    # Average to provide some damping
    let 'speed=(speed+new_speed)/2'
    if [ $speed -gt 6000 ]; then speed=6000; fi
    if [ $speed -lt 1000 ]; then speed=1000; fi
    echo $speed > /sys/devices/platform/applesmc.768/fan1_min
    echo $speed > /sys/devices/platform/applesmc.768/fan2_min
    sleep 2
done
```
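For reference, the new_speed line maps temperature linearly from 1000 RPM at 35 degrees C up to 6000 RPM at 65 degrees C (applesmc reports temperatures in millidegrees). You can sanity check the formula without touching the sysfs files by running just the arithmetic in bash:

```shell
# Evaluate the fan curve at a few sample temperatures (millidegrees C).
# This is just the arithmetic from the script above, with no sysfs access.
for temp in 35000 50000 65000; do
    let 'new_speed=(1000 + (5000*((temp-35000)/(100*(60-30))))/10)'
    echo "$((temp / 1000)) C -> $new_speed RPM"
done
```

At 35 C it produces the 1000 RPM floor, at 50 C it lands at 3500 RPM, and it hits the 6000 RPM cap right at 65 C.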

But even this wasn't enough to keep the temperature under control. I have read in many places that the maximum fan speed for the MacBook Pro (1,2) is 6000 RPM (although it seems to go up to 7000 RPM). Even when the fan speed is pegged at 6000 RPM, one process at 100% CPU will raise the temperature to 100 degrees C (already too hot for laptop work), and anything running on the other core will often raise the temperature enough to cause the system to shut down from overheating. This is a big problem: it seems that the fans on the MacBook Pro (1,2) are not able to cool the CPU appropriately at maximum load.

Today I finally found a way to keep the temperature under control: changing the CPU scaling behavior to powersave mode. This basically locks each core to its minimal clock frequency, 1 GHz in my case. Interestingly, this doesn't seem to extend my battery life at all. What it does do is hold the CPU temperature below 60 degrees C while the above script is running, regardless of the system load. Of course, CPU bound programs run at half speed.

Never mind the matter of why Ubuntu's built-in thermal throttling isn't kicking in; setting the powersave governor will allow us to script our own thermal throttling. I plan, in the near future, to hack together a solution using cpufreq-selector that will switch the CPU scaling governor when the system gets too hot or when the system is running on battery power (as stated earlier, this isn't for extended battery life; it's just that when I am on battery power it is more likely that the computer is sitting on my lap, or that I'm in a meeting where a noisy fan would bother others). This sort of thing makes me wonder if Apple used thermal throttling when they configured OS X for the first gen MBPros. The computer certainly ran hot in OS X, but it never crashed due to overheating. The current CPU frequency was not readily apparent from within OS X (or at least I didn't see it). I wonder if all those times I was running at maximum CPU, I was actually running at 1 GHz or 1.33 GHz so that the system wouldn't shut down. It makes me pretty happy that my place of work went with my suggestion to buy the lowest clock speed model. I could probably test this by booting into OS X, running a benchmark, and comparing to the result under Ubuntu.
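A rough sketch of the policy I have in mind follows. The 90 degree threshold is a guess, and the cpufreq-selector invocation (left commented out) is how I would apply the chosen governor; only the decision logic itself runs here.

```shell
#!/bin/bash
# Sketch of the planned throttling policy: pick a cpufreq governor from the
# CPU temperature and the power source. The 90 C cutoff is an assumption.

choose_governor () {
    local temp=$1 on_battery=$2   # temp in millidegrees C, on_battery yes/no
    if [ "$on_battery" = yes ] || [ "$temp" -ge 90000 ]; then
        echo powersave            # lock the cores at their lowest frequency
    else
        echo ondemand             # normal demand-based scaling
    fi
}

# Fall back to a made-up reading when the applesmc sensor isn't present.
temp=$(cat /sys/devices/platform/applesmc.768/temp3_input 2>/dev/null || echo 50000)
on_battery=no                     # would come from /sys/class/power_supply
gov=$(choose_governor "$temp" "$on_battery")
# cpufreq-selector -c 0 -g "$gov"   # and likewise with -c 1 for the other core
echo "$gov"
```

Run from cron or a background loop, this would flip the machine into powersave whenever it overheats or leaves wall power, and back to ondemand otherwise.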

I did a little research and it seems that the Core 2 Duos dropped support for the ACPI Thermal Zone, which is responsible for thermal throttling. The first gen MBP has a Core Duo, so maybe it is similar. It turns out you can switch between thermal throttling CPU states by writing the desired state (say T1) to /proc/acpi/processor/CPU0/throttling. Even at the most limiting throttle states, the CPU still runs hotter than it does at T0 with the powersave governor. That doesn't make a lot of sense.

## Friday, October 29, 2010

### LGDC Announce

I have wanted to participate in the Lisp Game Dev Competitions for a while now, but it always seemed that work made it an impossibility. Things are no different this time around; in fact, I am probably busier. But I figure that if I want it to happen, I am just going to have to make the time.  Without further ado, I announce my entry into the October LGDC.

### Asteroid Jumper

Due to the proximity of Halloween, it seems like a good idea to make a Halloween inspired game, a real fright fest.  I, however, have not followed that route at all.  My idea for a game involves hopping from asteroid to asteroid while battling or racing opponents.  This is an action game where the arena of play is constantly in flux; asteroids shift, collide, and break apart due to the actions of the players.

This will be a 2D game where the basic gameplay will be running around the perimeter of asteroids and jumping from asteroid to asteroid.  I don't want this to be a shooter, per se.  I want something that mixes together the feel of Asteroids, Worms, an overhead view game, and a side scroller.

#### Weapons, etc.

The items or weapons that will be available are really unclear at this time.  In my imagination, there are a few things that might be fun to have: grappling hooks, jet packs, shields, explosives, guns, and bouncing projectiles.

#### The Asteroids

The asteroids will have real physics.  They will have momentum (linear and rotational).  When they collide they will transfer that momentum between each other.

I might want to include fault lines in the asteroids.  These cracks will define how the asteroid will prefer to fracture.

#### Technical Stuff

Okay, here's the plan: since I am very familiar with and fond of Common Lisp, I will be using it.  I always aim for cross-implementation code, but for the time being I will be developing for SBCL.  I am going to use CL-OpenGL for the graphics and Squirl, a Lisp implementation of Chipmunk, for physics and collision detection.  In my experience, CL-OpenGL works most places, while Squirl works with SBCL and CLisp but has issues elsewhere.  This is the route of least resistance, as it will involve modifying the demos contained in the Squirl package.

I am still unsure how to get sound effects and/or music into my Lisp programs other than spawning a shell process to play an audio file.  I hope to look into OpenAL for sound as it should be cross platform.

A goal is to have this be multi-player, particularly over TCP, but time is short and I have never done this, so I'm not sure I will be able to accomplish this.  If I do attempt this I would try to use ZeroMQ, which seems to make this easy to do.

If you are asking yourself why I'm not using Lispbuilder: I have always had trouble getting it up and running.

## Sunday, October 3, 2010

Many would be surprised to hear that several common GNU/Linux programs don't handle symlinks properly.  By that, of course I mean that they don't handle them the way I would want them to, but close enough.  For instance, if you want to copy a directory from one server to another, the command scp -r source-dir target-dir looks very attractive. Unfortunately scp follows symlinks, meaning instead of copying a link to some other part of the file system, it instead copies that other part of the file system.  For a heavily symlinked directory this can be disastrous.

The correct and foolproof way to grab a portion of a file system from a server is to use tar.  Don't worry, this doesn't mean you have to actually create a tar file; you can pipe tar's output over ssh and untar it on the other side.

```shell
# push: archive local files and unpack them on the server
tar -c some-files some-dirs \
    | ssh -C my-server "tar -C path/to/extract/root -x"

# pull: archive files on the server and unpack them locally
ssh -C my-server \
    "tar -C path/to/archive/root -c some-files some-dirs" | tar -x
```

The -C switch to tar tells it to change directories prior to performing the operation.  The -C switch to ssh tells it to compress the traffic with gzip-like compression.  You can even use better compression, if you have a slower connection to the server or a pay-by-the-bit plan, by including lzma or p7zip in the pipe, or just by passing a -j (bzip2) switch to both tar commands.  By the way, p7zip also treats symlinks badly, so you need to protect any hierarchy with a tar archive.
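For example, here is the lzma variant of the pipe, round tripped through local temporary directories so you can see it work; xz is the usual lzma front end, and over the network the decompress-and-extract half would simply run on the far side of an ssh connection.

```shell
# Round trip a tar archive through xz (lzma) compression. Over the wire,
# the second half of the pipe would run remotely, e.g.
#   tar -c some-dirs | xz | ssh my-server "xz -d | tar -C dest -x"
src=$(mktemp -d); dst=$(mktemp -d)
echo "some data" > "$src/file.txt"
tar -C "$src" -c . | xz | xz -d | tar -C "$dst" -x
cat "$dst/file.txt"   # -> some data
```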

In case you are wondering why scp defaults to bad behavior, well, all file systems aren't created equal.  Since you are copying to a server with who knows what file system (for instance, it could be FAT), you might not be able to create symlinks there.  So it is an alright decision to only copy files and not links to files.  If you only deal with, hmmm, how to put it, modern file systems, this sure seems like incorrect behavior.  Maybe someday this will change, but in the meantime, the tar method works great and has been the method of choice since tar, pipes, and networks existed.

But wait, there's more.  Even if you don't have symlinks, piping a tar archive over ssh might be a good idea.  Since scp operates on individual files, it incurs an overhead on each one.  If you have many small files you want to transfer, small enough that the actual transfer time is almost insignificant, this overhead can become quite costly.  In these cases the tar method will be faster.

```
smithzv@ciabatta:~$ ssh scandal "ls -R kappa-slices-3d | wc"
   3993    3958  117715
smithzv@ciabatta:~$ ssh scandal "du -sh kappa-slices-3d"
36M     kappa-slices-3d

smithzv@ciabatta:~$ time scp -qr scandal:./kappa-slices-3d dat/

real    0m8.004s
user    0m1.152s
sys     0m1.184s

smithzv@ciabatta:~$ time ssh scandal "tar -c kappa-slices-3d" \
    | tar -x -C ~/dat/

real    0m2.442s
user    0m0.824s
sys     0m0.728s
```

This directory on our scandal cluster has 4000 small files in it, totaling 36 MB.  The piped tar method takes about a third the time of the recursive scp copy.  Also, I should point out that the scp process will, as far as I know, at best be as fast as the tar procedure.  Note that we didn't use compression here, as this is a transfer of already compressed files over a fast connection, and compression just slows both commands down.  If you ever need to back up your computer over your home LAN so you can reinstall an OS or something, this is a lifesaver (or at least a time saver).

So, piping a tar archive over ssh is a great tool.  That being said, there is a program that does so much more and might be a better choice as long as it is installed on both systems; it's called rsync. rsync follows symlinks just like scp by default (for the same reasons), but it has a switch, -a for archive mode, that allows it to perform the symlink preserving behavior as seen above.  rsync has other benefits over just an ssh or scp copy (like incremental updates: i.e. only transmitting data that has changed) and really should be preferred in most cases if it is an option, but you have to read the man page first or it will bite you, especially if you have heavily internalized the way cp and scp work.

## Thursday, September 16, 2010

### Nasty C Behavior

This will probably be part of a continuing series of rants about how the C language has nipped me. This time it is due to not properly prototyping functions. If you want to play it safe with C, it is a bare minimum that all functions have proper prototypes before their first usage.

I have been moving some C code from my 32 bit development computer to our 64 bit cluster. The way C (or maybe just gcc) deals with functions that don't have prototypes is to assume that the return value is an int. The tricky part is that on some systems an int and a pointer are very likely the same size. If this is the case, then the C picture of data, just bits to interpret however you like, means you will probably get a warning but the program will run fine.

But the relationship between the width of a pointer and an int is in no way guaranteed. When I moved the code to the 64 bit cluster, the implicit prototype meant the compiler expected the function to return an int, but it instead returned a pointer, and on the cluster a pointer is 8 bytes wide while an integer is 4 bytes wide. This led to a problem where the code worked fine in development, but when moved to production it failed mysteriously.

All of this is further aggravated by the fact that I didn't write the buggy code. My boss wrapped some code from Numerical Recipes and neglected to include the header files. And it would have been caught years ago, but gcc is smart enough to fix this bug sometimes. It is not entirely clear when this will bite you and when it won't, so my new plan is to prototype everything I use and if something fails mysteriously, check the prototypes first.