Friday, December 31, 2010
Hope this helps.
Friday, December 10, 2010
I have a first-generation MacBook Pro on which, after living in shame with OS X for several years, I finally bit the bullet and installed Ubuntu. And it certainly was biting a bullet; Apple computers are some of the least supported computers on the market when it comes to Ubuntu and the Linux kernel in general. I imagine this is strongly linked to the closed nature of their design. Despite this, the gains I have seen in the capability of the machine far outweigh the annoyances of the setup, the reduced battery life, and, when using 9.04, the many graphics errors that made Compiz and anything using OpenGL unusable (the new ATI drivers are much better, at least for me).
One of the most annoying problems, however, is that (at least) the first-generation MacBook Pros run very hot under Ubuntu. This seems to be largely due to a bug in the fan speed control. It does increase the fan speed with temperature, but the CPU temperature needs to reach upwards of 100 degrees C before you will notice the fans kicking in. At 100 degrees C the outside case (aluminum, and often in contact with your skin) can reach an uncomfortable temperature. I suspect this might be the result of the fan control module reading its temperature data from the wrong sensor, i.e. scaling the fan based on the case temperature instead of the CPU's, but let's ignore the root of the problem and consider how to fix it.
Several people have come up with methods of controlling the temperature, like this one. I wrote my own because I thought that was too complicated, though mine has grown in complexity with time. I just have this running in the background as root.
#!/bin/bash
let 'speed=1000'
while true
do
    temp=`cat /sys/devices/platform/applesmc.768/temp3_input`
    let 'new_speed=(1000 + (5000*((temp-35000)/(100*(60-30))))/10)'
    if [ $new_speed -lt $speed ]
    then
        # Go down in speed quickly
        let 'speed=new_speed'
    fi
    # Average to provide some damping
    let 'speed=(speed+new_speed)/2'
    if [ $speed -gt 6000 ]; then speed=6000; fi
    if [ $speed -lt 1000 ]; then speed=1000; fi
    echo $speed > /sys/devices/platform/applesmc.768/fan1_min
    echo $speed > /sys/devices/platform/applesmc.768/fan2_min
    sleep 2
done
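For reference, the fan curve the script computes can be pulled out into a standalone function (a sketch; the applesmc sensor reports millidegrees C, so 50000 means 50 degrees):

```shell
# The fan curve from the script above, isolated as a function.
# Input: temperature in millidegrees C (as read from applesmc).
# Output: target fan speed in RPM, clamped to [1000, 6000].
fan_speed () {
    local temp=$1
    local speed=$(( 1000 + (5000 * ((temp - 35000) / (100 * (60 - 30)))) / 10 ))
    if [ "$speed" -gt 6000 ]; then speed=6000; fi
    if [ "$speed" -lt 1000 ]; then speed=1000; fi
    echo "$speed"
}

fan_speed 35000   # 1000 RPM at 35 C (minimum)
fan_speed 50000   # 3500 RPM at 50 C
fan_speed 65000   # 6000 RPM at 65 C and above (maximum)
```

Note that the arithmetic is all integer division, so the computed speed moves in steps as the temperature crosses 3-degree boundaries.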
But even this wasn't enough to keep the temperature under control. I have read in many places that the maximum fan speed for the MacBook Pro (1,2) is 6000 RPM (although it seems to go up to 7000 RPM). Even with the fan speed pegged at 6000 RPM, one process at 100% CPU will raise the temperature to 100 degrees C (already too hot for laptop work), and anything running on the other core will often raise the temperature enough to cause the system to shut down from overheating. This is a big problem: it seems that the fans on the MacBook Pro (1,2) are not able to cool the CPU adequately at maximum load.
Today I finally found a way to keep the temperature under control: by changing the CPU scaling behavior to powersave mode. This basically locks each core to its minimal clock frequency, or 1 GHz for me. Interestingly, this doesn't seem to extend my battery life at all. What it does do is hold the CPU temperature below 60 degrees C, with the above script running, regardless of the system load. Of course, CPU-bound programs run at half speed.
Never mind the matter of why Ubuntu's built-in thermal throttling isn't kicking in; setting the powersave governor will allow us to script our own thermal throttling. I plan, in the near future, to hack together a solution using cpufreq-selector that will switch the CPU scaling governor when the system gets too hot or when the system is running on battery power (as stated earlier, this isn't for extended battery life; it's just that when I am on battery power it is more likely that the computer is sitting on my lap, or that I'm in a meeting where a noisy fan would bother others). This sort of thing makes me wonder if Apple used thermal throttling when they configured OS X for the first-gen MBPs. The computer certainly ran hot in OS X, but it never crashed due to overheating. The current CPU frequency was not readily apparent from within OS X (or at least I didn't see it). I wonder if all those times I was running at maximum CPU, I was actually running at 1 GHz or 1.33 GHz so that the system wouldn't shut down. It makes me pretty happy that my place of work went with my suggestion to buy the lowest clock speed available. I could probably test this by booting into OS X, running a benchmark, and comparing to the result under Ubuntu.
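The decision logic I have in mind would look something like this (a sketch only; the 90-degree threshold and the "yes"/"no" battery flag are placeholder choices of mine, not measured values):

```shell
# Pick a scaling governor from the CPU temperature (millidegrees C)
# and whether we are running on battery power ("yes" or "no").
choose_governor () {
    local temp=$1 on_battery=$2
    if [ "$on_battery" = "yes" ] || [ "$temp" -ge 90000 ]; then
        echo powersave
    else
        echo ondemand
    fi
}

# In the real script, the result would then be applied with
# something like:
#   cpufreq-selector -g "$(choose_governor "$temp" "$on_battery")"
```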
I did a little research and it seems that the Core 2 Duos dropped support for the ACPI Thermal Zone, which is responsible for thermal throttling. The first-gen MBP has a Core Duo, so maybe it is similar. It turns out you can switch between thermal throttling CPU states by writing the desired state to /proc/acpi/processor/CPU0/throttling. Even at the most limiting throttle states, the CPU still runs hotter than it does with the powersave governor at T0. That doesn't make a lot of sense.
Friday, October 29, 2010
I have wanted to participate in the Lisp Game Dev Competitions for a while now, but it always seemed that work made it an impossibility. Things are no different this time around, in fact I am probably busier, but I figure that if I want it to happen I am just going to have to make the time. Without further ado, I announce my entry into the October LGDC.
1 Asteroid Jumper
This will be a 2D game where the basic gameplay will be running around the perimeter of asteroids and jumping from asteroid to asteroid. I don't want this to be a shooter, per se. I want something that mixes together the feel of Asteroids, Worms, an overhead view game, and a side scroller.
1.1 Weapons, etc
1.2 The Asteroids
I might want to include fault lines in the asteroids. These cracks will define how the asteroid will prefer to fracture.
1.3 Technical Stuff
I am still unsure how to get sound effects and/or music into my Lisp programs other than spawning a shell process to play an audio file. I hope to look into OpenAL for sound as it should be cross platform.
A goal is to have this be multi-player, particularly over TCP, but time is short and I have never done this, so I'm not sure I will be able to accomplish this. If I do attempt this I would try to use ZeroMQ, which seems to make this easy to do.
If you are asking yourself, why not use Lispbuilder: well, I have always had trouble getting it up and running.
Sunday, October 3, 2010
Many would be surprised to hear that several common GNU/Linux programs don't handle symlinks properly. By that, of course, I mean that they don't handle them the way I would want them to, but close enough. For instance, if you want to copy a directory from one server to another, the command scp -r source-dir target-dir looks very attractive. Unfortunately, scp follows symlinks, meaning that instead of copying a link to some other part of the file system, it copies that other part of the file system. For a heavily symlinked directory this can be disastrous.
The correct and foolproof way to grab a portion of a file system from a server is to use tar. Don't worry, this doesn't mean you have to actually create a tar file; you can use tar to pipe the output over ssh and untar it on the other side.
tar -c some-files some-dirs \
    | ssh -C my-server "tar -C path/to/extract/root -x"
If you want to download from a server…
ssh -C my-server \
    "tar -C path/to/archive/root -c some-files some-dirs" | tar -x
The -C switch given to tar tells it to change directories prior to performing the operation. The -C switch given to ssh tells it to compress the traffic with gzip-like compression. You can even use better compression, if you have a slower connection to the server or a pay-by-the-bit plan, by including p7zip in the pipe, or just passing a -j switch to both tar commands. By the way, p7zip also treats symlinks badly, so you need to protect any hierarchy with a tar archive.
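For the skeptical, here is the same -j pipe demonstrated locally, with the ssh link dropped out of the middle (the /tmp paths are made up for the demo): a directory round-trips through a bzip2-compressed tar stream with its symlink intact.

```shell
# Round-trip a directory, symlink included, through a tar -j pipe.
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dst
echo hello > /tmp/tar-demo/src/real.txt
ln -sf real.txt /tmp/tar-demo/src/link.txt

# Create on one side of the pipe, extract on the other
# (-f - makes the stdin/stdout archive explicit).
( cd /tmp/tar-demo && tar -cjf - src ) | tar -xjf - -C /tmp/tar-demo/dst

ls -l /tmp/tar-demo/dst/src/   # link.txt is still a symlink
```

Over a real connection, the pipe in the middle simply becomes the ssh command shown above.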
In case you are wondering why scp defaults to this bad behavior: well, all file systems aren't created equal. Since you are copying to a server, and who knows what file system it has (it could be FAT, for instance), you might not be able to create symlinks there. So it is a reasonable decision to copy only files and not links to files. If you only deal with, hmm, how to put it, modern file systems, this sure seems like incorrect behavior. Maybe someday this will change, but in the meantime, the tar method works great and has been the method of choice for as long as tar, pipes, and networks have existed.
But wait, there's more. Even if you don't have symlinks, piping a tar archive over ssh might be a good idea. Since scp operates on individual files, it incurs an overhead on each one. If you have many small files to transfer, small enough that the actual transfer time is almost insignificant, this overhead can become quite costly. In these cases the tar method will be faster.
smithzv@ciabatta:~$ ssh scandal "ls -R kappa-slices-3d | wc"
   3993    3958  117715
smithzv@ciabatta:~$ ssh scandal "du -sh kappa-slices-3d"
36M     kappa-slices-3d
smithzv@ciabatta:~$ time scp -qr scandal:./kappa-slices-3d dat/

real    0m8.004s
user    0m1.152s
sys     0m1.184s
smithzv@ciabatta:~$ time ssh scandal "tar -c kappa-slices-3d" \
    | tar -x -C ~/dat/

real    0m2.442s
user    0m0.824s
sys     0m0.728s
This directory on our scandal cluster has 4000 small files in it totaling 36 MB. The piped tar method takes about a third the time of the recursive scp copy. Also, I should point out that the scp process will, as far as I know, at best be as fast as the tar procedure. Of course, note that we didn't use compression here, as this is a transfer of already-compressed files over a fast connection, and compression just slows both commands down. If you ever need to back up your computer over your home LAN so you can reinstall an OS or something, this is a lifesaver (or at least a time saver).
So, piping a tar archive over ssh is a great tool. That being said, there is a program that does so much more and might be a better choice, as long as it is installed on both systems: it's called rsync. By default, rsync follows symlinks just like scp (for the same reasons), but it has a switch, -a for archive mode, that allows it to perform the symlink-preserving behavior seen above. rsync has other benefits over a plain scp copy (like incremental updates, i.e. only transmitting data that has changed) and really should be preferred in most cases when it is an option, but you have to read the man page first or it will bite you, especially if you have heavily internalized the way scp works.
Thursday, September 16, 2010
I have been moving some C code from my 32-bit development computer to our 64-bit cluster. The way C (or maybe just gcc) deals with functions that don't have prototypes is to assume that the return value is an int. The tricky part is that on some systems an int and a pointer are very likely the same size. When that is the case, C's picture of data as just bits, to be interpreted however you like, means you will probably get a warning but the program will run fine.
But the relationship between the width of a pointer and an int is in no way guaranteed. When I moved the code to the 64-bit cluster, the implicit prototype meant the compiler expected the function to return an int, but it actually returned a pointer, and on the cluster a pointer is 8 bytes wide while an int is 4 bytes wide. This led to code that worked fine in development but failed mysteriously when moved to production.
All of this was further aggravated by the fact that I didn't write the buggy code. My boss wrapped some code from Numerical Recipes and neglected to include the header files. It would have been caught years ago, but gcc is smart enough to fix this bug sometimes. It is not entirely clear when this will bite you and when it won't, so my new plan is to prototype everything I use, and if something fails mysteriously, check the prototypes first.