Sat, 23 Dec 2006
I wanted to mount a jffs2 filesystem on my linux box (Ubuntu). It's actually the filesystem from my Zaurus 5500. My first attempt was:
# mount -t jffs2 -o loop initrd.bin r
It turns out that that is the wrong thing to do, and it results in the message:
mount: wrong fs type, bad option, bad superblock on /dev/loop/0,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
and the following is in syslog:
Attempt to mount non-MTD device "/dev/loop/0" as JFFS2
It turns out that the correct incantation is something like:
Load the mtdram module to create a ramdisc of the correct size. The total_size and erase_size parameters are in KiB (1024 bytes), and you should try to be fairly accurate or make sure you have plenty of memory available. The filesystem I wanted to look at was 14680064 bytes, which is 14336 KiB.
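Incidentally, a quick way to get that number, assuming GNU stat as found on Ubuntu, is:

$ echo $(( $(stat -c %s initrd.bin) / 1024 ))
14336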
# modprobe mtdram total_size=14336 erase_size=128
Check that worked OK:
# cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00e00000 00020000 "mtdram test device"
Then load the mtdblock module:
# modprobe mtdblock
Copy across the filesystem to the ramdisc:
# dd if=initrd.bin of=/dev/mtdblock0
Then mount the ramdisc:
# mount -t jffs2 /dev/mtdblock0 r
When you're done:
# umount r
# modprobe -r mtdblock
# modprobe -r mtdram
Tue, 14 Nov 2006
A while back I wanted to know how to start apache without having to go through the pass-phrase dialogue required for the SSL on the https server. Turns out it is a faq, and the next time I need to know I'll look here.
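For my own future reference, the two usual answers are either to point SSLPassPhraseDialog at a small program that just prints the pass-phrase, or to strip the pass-phrase from the key altogether. A sketch of the latter (server.key being whatever your key file is called):

$ openssl rsa -in server.key -out server.key.nopass

Then SSLCertificateKeyFile points at the pass-phrase-free key, which of course should be readable by root only.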
[/software/apache] permanent link
Tue, 03 Oct 2006
I still use Gallery to display my photos and have recently upgraded to version 2. The upgrade was generally smooth and I'm fairly happy with Gallery2. But for a while I've been unhappy with the software I use to take the photos from my camera and organise them prior to uploading them to gallery.
My requirements are fairly simple. I want to be able to take a bunch of photographs from somewhere, organise them into directories named after the date on which they were taken, and the photographs themselves to be named after the date and time they were taken. I also want the photographs to be automatically rotated as required, losslessly. Finally, I don't want any duplicates. In this form, I can easily upload the directories to gallery.
I was sure I would be able to find something to do that, but I couldn't. So I finally got fed up and wrote something myself. It's only a hundred lines or so, so I probably should have done it myself a long time ago.
Anyway, here it is. I'll use it for a while, iron out the kinks, and then, if it seems worthwhile, I'll package it up somehow.
Note that the error checking is minimal, so don't delete your originals until you are happy with the results. Oh, and you'll need certain programs to be installed - exiftags, jpegexiforient and jpegtran - so it will probably only work on *nix. I have somewhat unimaginatively named the program autorotate.
#!/usr/bin/perl

use strict;
use warnings;

# Destination for the sorted, rotated photographs.
my $Rotated = <~/g/pics/sorted>;

mkdir $Rotated or die "Can't create $Rotated: $!" unless -d $Rotated;

# Map EXIF orientation values to the jpegtran transform that corrects them.
my %Rot = (
    2 => "-flip horizontal",
    3 => "-rotate 180",
    4 => "-flip vertical",
    5 => "-transpose",
    6 => "-rotate 90",
    7 => "-transverse",
    8 => "-rotate 270",
);

# Turn a date and time into a directory name and a file name.
sub date_name {
    my ($year, $month, $day, $hour, $min, $sec) = @_;
    my $datef = "%04d-%02d-%02d";
    my $namef = "%s %02d:%02d:%02d.jpg";
    my $date  = sprintf $datef, $year, $month, $day;
    my $name  = sprintf $namef, $date, $hour, $min, $sec;
    ($date, $name)
}

FILE: for my $pic (@ARGV) {
    unless ($pic =~ /\.jpe?g$/i) {
        print "ignoring $pic\n";
        next;
    }

    my $exif = `exiftags -a '$pic'`;
    my ($date, $name, $tmp);

    # Try the EXIF date fields in turn, falling back to the file's mtime,
    # and ignore the bogus dates some cameras write.
    for my $type (qw( Created Generated Digitized mtime )) {
        next if $date;
        if ($type eq "mtime") {
            my $mtime = (stat $pic)[9];
            my ($sec, $min, $hour, $day, $month, $year) = localtime $mtime;
            ($date, $name) =
                date_name $year + 1900, $month + 1, $day, $hour, $min, $sec;
        } else {
            next unless $exif =~
                /Image $type: (\d{4}):(\d\d):(\d\d) (\d\d):(\d\d):(\d\d)/;
            ($date, $name) = date_name $1, $2, $3, $4, $5, $6;
        }
        $date = "" if $name eq "2005-01-01 00:00:00.jpg" ||
                      $name eq "0000-00-00 00:00:00.jpg";
    }

    my $newdir = "$Rotated/$date";
    mkdir $newdir or die "Can't create $newdir: $!" unless -d $newdir;

    # Find the EXIF orientation and the jpegtran transform needed to correct it.
    my $rot   = `jpegexiforient -n '$pic'`;
    my $trans = $Rot{$rot} || "";

    print(($trans ? "rotating" : " copying"), " $pic to $name",
          ($trans ? " [$trans]" : ""), " ");

    $tmp  = "$newdir/tmp_$name";
    $name = "$newdir/$name";

    if ($trans) {
        my $command = "jpegtran -copy all $trans '$pic' > '$tmp'";
        system $command and die "Can't run: $command: $?";
        # Reset the orientation flag now that the image data is rotated.
        $command = "jpegexiforient -1 '$tmp' > /dev/null";
        system $command and die "Can't run: $command: $?";
    } else {
        my $command = "cp -a '$pic' '$tmp'";
        system $command and die "Can't run: $command: $?";
    }

    # Skip exact duplicates, otherwise find a free versioned name.
    while (-e $name) {
        system "cmp -s '$tmp' '$name'";
        if (!$?) {
            print "- exists!\n";
            unlink $tmp;
            next FILE;
        }
        no warnings "uninitialized";
        $name =~ s/(-\d+)?\.jpg$/$1 - 1 . ".jpg"/e;
        print "- trying version ", -($1 - 1), " ";
    }
    # print "- renaming => $name" if $name =~ /-\d+\.jpg$/;
    print "\n";

    rename $tmp => $name or die "Can't rename $tmp => $name: $!";
    chmod 0644, $name;
}
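To use it I just give it a bunch of jpegs, something like this (the camera mount point is only an example):

$ autorotate /media/camera/dcim/*/*.jpg

The sorted, rotated copies then end up under ~/g/pics/sorted, one directory per day.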
[/software/gallery] permanent link
Sat, 16 Sep 2006
I decided to retire my heavily tweaked debian installation on my laptop in favour of a nice friendly ubuntu installation with logins for all the family. Installing ubuntu was easy enough, rather impressively so in fact, but then it got to the stage of getting the wireless to work.
On my debian installation I had installed ndiswrapper, and I forgot about that until after I had overwritten it. So here's what I did, just in case I need to do it again for some reason.
I'm using a Linksys WPC54GS card. lspci tells me it is a Broadcom Corporation BCM4306 802.11b/g Wireless LAN Controller (rev 03) and lspci -n tells me that the PCI ID is 14e4:4320 (rev 03). The ndiswrapper list tells me to get the driver from ftp://ftp.linksys.com/pub/network/wpc54gs_driver_utility_v1.0.zip (it seems that kwiki turns ftp links into http) and from there I can extract the files lsbcmnds.inf and bcmwl5.sys.
Ubuntu 6.06.1 didn't come with ndiswrapper-utils installed, so that needs to be done, and then I can run ndiswrapper -i bcmwl5.sys. ndiswrapper -l confirms everything is OK. modprobe ndiswrapper loads the module, and dmesg confirms all has gone well.
But dmesg also showed problems: bcm43xx: Failed to switch to core 0 and bcm43xx: Error: Microcode "bcm43xx_microcode5.fw" not available or load failed. rmmod bcm43xx ndiswrapper and then modprobe ndiswrapper fixed things, and dmesg showed more interesting stuff.
Adding blacklist bcm43xx to /etc/modprobe.d/blacklist fixes that permanently. And ndiswrapper -m adds alias wlan0 ndiswrapper to /etc/modprobe.d/ndiswrapper. Since ubuntu wants to use eth1 instead of wlan0 I just edited that file to add alias wlan0 eth1.
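So the relevant configuration ends up looking like this, just collecting together what I described above:

# /etc/modprobe.d/blacklist
blacklist bcm43xx

# /etc/modprobe.d/ndiswrapper
alias wlan0 ndiswrapper
alias wlan0 eth1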
Then the essid and key can be set up in the networking tool, and it all works. Sometimes. For some reason I need to specify the key in hex - the ASCII string gives an incorrect value. Then I added /etc/init.d/networking restart to /etc/rc.local to make sure things come up correctly. Everything seems OK now.
Sat, 09 Sep 2006
I use debian or ubuntu on most of my systems and the main reason for that is package management that basically "just works", so I've never really felt the need to get proficient with it. But sometimes you need to do something just a little out of the ordinary.
One of those things is finding which package contains some random program you want to install, and there has to be a better way than searching the web for "debian package XYZ". It turns out there is, of course:
# aptitude install auto-apt
# auto-apt update
$ auto-apt search XYZ
This, and many more tricks can be found in the debian manual.
Mon, 07 Aug 2006
I use ion as my window manager and on a new Ubuntu installation I decided to use ion3 instead of ion2 as on all my other installations. I don't like the standard ion configuration which hijacks all my function keys, but that was easily fixed by changing /etc/default/ion3 to
META="Mod1+"
ALTMETA="Mod4+"
This uses the otherwise useless "Windows" key for options which previously had no modifier.
Then, to set the terminal emulator I want,
# update-alternatives --config x-terminal-emulator
and select uxterm, an xterm that knows about UTF8.
Then, I like to use Control-Left and Control-Right to move between objects within frames. This is done by adding the following as ~/.ion3/default-session--0/cfg_user.lua
defbindings("WFrame", {
    bdoc("Switch to next/previous object within the frame."),
    kpress("Control+Right", "WFrame.switch_next(_)"),
    kpress("Control+Left", "WFrame.switch_prev(_)"),
})
[/software/ion] permanent link
As I got back from holiday and restarted my computer, Firefox decided it needed to update itself, and in so doing seemed to lose all the extensions I had installed. Fortunately, I had previously installed FEBE and CLEO on a Windows box and had saved all the extensions I use. This meant that all I needed to do was install one xpi file and then let Firefox update all the extensions.
I also have a new Ubuntu installation, and loading that xpi file into Firefox also loaded all the extensions I use. Then I just had to go into about:config (type it into the location bar) and change middlemouse.ContentLoadURL to true so that pasting a URL into the window loads the URL. (That's actually a lower case c in "Content" above, but kwiki doesn't seem to want to let me write that.)
Then I changed editor.singleLine.pasteNewlines to 3 so that URLs split across lines load correctly. The other values here seem to be:
0: Paste content intact (include newlines)
1 (default): Paste the content only up to (but not including) the first newline
2: Replace each newline with a space
3: Remove all newlines from content
4: Substitute commas for newlines in text box
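Rather than clicking through about:config on every machine, the same settings can go into user.js in the profile directory. A minimal sketch (the profile directory name will differ):

// ~/.mozilla/firefox/xxxxxxxx.default/user.js
user_pref("middlemouse.contentLoadURL", true);
user_pref("editor.singleLine.pasteNewlines", 3);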
[/software/firefox] permanent link
Mon, 29 May 2006
Retraining SpamAssassin's Bayesian classifier
I use SpamAssassin to filter my mail, and in general I am very happy with it. SpamAssassin classifies mail according to various criteria and assigns each message a score. A score of between five and ten earns a message a place in my probablespam mailbox, and above ten sends the message straight into the caughtspam mailbox. Any mail getting this far that is not to a name that I recognise goes into the not_me mailbox. Anything left goes into my inbox.
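The sorting is the job of the mail filter rather than SpamAssassin itself; in procmail terms the spam part of the scheme looks roughly like this (a sketch rather than my exact rules, relying on the X-Spam-Level header SpamAssassin adds, one asterisk per point of score):

:0:
* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*
caughtspam

:0:
* ^X-Spam-Level: \*\*\*\*\*
probablespam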
This has worked very well for me. Very rarely do I find spam in my inbox, and real mail ends up in caughtspam so rarely that I never look in there except when someone insists they have sent me mail that I can't find. The probablespam mailbox is mostly spam, but occasionally I find some real mail in there. The not_me mailbox contains some spam along with messages I have been bcced on.
But recently I have been finding more real mail in my probablespam mailbox. Almost invariably these messages have been classified as BAYES_99, meaning that the SpamAssassin Bayesian classifier thinks the message is almost certainly spam. It's been a long time since I first trained SpamAssassin, so I wondered whether the database had been polluted. This is often known as Bayesian poisoning, and it is part of the purpose of those messages you sometimes see which contain a poem, part of a story, or just a long list of random words.
So I decided to retrain the Bayesian classifier to see if it could do any better. First I backed up the current database, then trained it on ham and spam.
$ sa-learn --backup > /var/tmp/sa.db
$ sa-learn --clear
$ sa-learn --ham --progress --mbox ~/Mail/new ~/Mail/tips
$ sa-learn --spam --progress --mbox ~/Mail/probablespam ~/Mail/spam
Early results are encouraging. A few mistakes of course, but that is to be expected until I train it a little better. But fixing the problems is easy in mutt:
$ grep sa-learn ~/.muttrc
macro index S "|sa-learn --spam\ns=spam\n"
macro pager S "|sa-learn --spam\ns=spam\n"
macro index H "|sa-learn --ham\ns=new\n"
macro pager H "|sa-learn --ham\ns=new\n"
As an aside, I wonder what SpamAssassin has to do with Apache?
[/software/spamassassin] permanent link
Sat, 27 May 2006
I noticed recently that my SVN::Web pages had stopped working. Today I found a little time to investigate. My apache error log said:
Can't locate object method "caught" via package "SVN::Web::X" at /usr/local/share/perl/5.8.7/SVN/Web.pm
Nice.
I remembered having a bit of hassle installing it first time around primarily because it wasn't ready for Apache2, so I punched "SVN::Web Apache2" into Google, and surprised myself when I noticed that my notes page was the second hit. It was top on MSN.
Aha! So that's the reason I write these notes!
My notes told me which modules I could let debian install and which I had to manage myself. They also told me the hacks I had made to make things work with Apache2.
So in these situations I normally make sure I'm running the latest versions of everything. The bug I'm chasing might already be fixed. The first thing I noticed was that there was a new version of SVN::Web itself. So I installed it.
Since I had first installed SVN::Web debian had upgraded from Perl 5.8.7 to 5.8.8, so the latest SVN::Web installed into a slightly different directory. During the installation it told me that Exception::Class was out of date and asked if it should be updated. I declined since currently Exception::Class was installed as a debian package and I was hoping it could stay that way. (In fact, I had also installed it from CPAN, but didn't read enough of my notes to notice that.)
After installing the latest SVN::Web, I tried running it, just to see what happened. I was expecting loads of errors since my hacky patches were now lost. In fact, I got exactly the same error as before. Good News! That seemed to show that SVN::Web now works with Apache2, and my hacks were no longer required.
But the original problem remained. So I installed the latest Exception::Class, which hadn't yet made it into debian, tried again and everything just worked.
Wonderful!
Now, if only there was a debian package of SVN::Web so that someone else could worry about all this.
So once again I battled SVN::Web and debian and I prevailed! And once again you can see the results at svnweb.
[/revision_control] permanent link
Sat, 29 Apr 2006
I wanted to build a local CPAN mirror using CPAN::Mini. Since it takes a little time to download all that data I wanted to choose a nice fast mirror. This little command was helpful:
$ netselect -vv `wget -O - http://www.cpan.org/SITES.html | \
    perl -lne 'print $1 while m!>((ht|f)tp://[^<]+)!g'` | \
    sort -k 4 -n
It grabs the CPAN mirrors file, extracts the URLs and feeds them to netselect, which tests the mirrors and outputs its information. This is then sorted numerically on the fourth field, which is the number of hops.
The sorting is necessary because although netselect does tell you which mirror it thinks is the best, it doesn't really select very well. In fact, some of its output seems downright dodgy, so I selected a mirror which seemed plausibly fast, close and reliable. For me that was http://cpan.wanadoo.nl/.
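With a mirror chosen, building the local mirror is then a single command using the minicpan script that ships with CPAN::Mini (the local directory is whatever you fancy):

$ minicpan -r http://cpan.wanadoo.nl/ -l ~/minicpan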
For useful information on using CPAN::Mini, take a look at Mark Fowler's 2004 Perl Advent Calendar. You might need to get that from the Wayback Machine at http://web.archive.org/web/20060214214713/http://perladvent.org/2004/5th/ (which is another URL kwiki has managed to mangle).
[/software/perl] permanent link
Tue, 04 Apr 2006
ORA 01081 "cannot start already-running ORACLE - shut it down first"
I was creating an Oracle database and got the error message:
ORA 01081 "cannot start already-running ORACLE - shut it down first"
however, I had already stopped everything, and ps showed there were no oracle processes running. It turns out that there were some IPC semaphores and shared memory identifiers which had to be killed before the database could be created.
In my case,
ipcs -a | grep dba
showed the resources that needed to be killed, and ipcrm with the appropriate options kills them. The ID of the resource to kill is in the second column.
ipcs -a | grep dba | perl -ane 'system "ipcrm -$F[0] $F[1]"'
[/software/oracle] permanent link
Mon, 03 Apr 2006
Making a Solaris UFS filesystem on a file
I wanted to set up a temporary database for testing purposes, and the scripts I needed to use to create an Oracle database wanted to use a couple of filesystems exclusively. Well, I didn't have a couple of spare filesystems lying around so I set about creating them on a couple of files. It turns out to be a fairly simple process. Here's how it worked for one of the filesystems.
# mkfile 2000m /myspace/fs1
# lofiadm -a /myspace/fs1 /dev/lofi/1
# newfs /dev/lofi/1
newfs: construct a new file system /dev/rlofi/1: (y/n)? y
/dev/rlofi/1: 4095600 sectors in 6826 cylinders of 1 tracks, 600 sectors
        1999.8MB in 214 cyl groups (32 c/g, 9.38MB/g, 2368 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 19232, 38432, 57632, 76832, 96032, 115232, 134432, 153632, 172832, 192032,
[ ... ]
 3993632, 4012832, 4032032, 4051232, 4070432, 4089632,
# mount /dev/lofi/1 /dbTST/fs1
# df -k /dbTST/fs1
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/lofi/1          1981012      11 1921571     1%    /dbTST/fs1
Wed, 08 Mar 2006
My laptop has a Swiss German keyboard, and it runs Debian. It also has a Caps Lock key right next to the A key where the Control key should really be. So I have the following in my .xsession file to set the correct keyboard layout and to turn the Caps Lock key into a Control key:
xkbsel 'xfree86(de_CH)'
setxkbmap -option ctrl:nocaps
This was working fine until a little while ago when my backspace key turned into a delete key. This was particularly annoying because I have configured my delete key to send ^Ap which is the screen code to move to the previous window. I also lost the ability to type []{}#~ and \ which makes programming in Perl difficult, @ which makes it hard to send mail, and | which makes all sorts of command line operations difficult.
It turns out that somehow xkbsel had been removed. Well, I know how. I did an aptitude dist-upgrade and didn't pay sufficient attention to everything it told me before I hit Y. Or more accurately, Y, then Yes, then Y and then Yes again, as seems to be necessary these days.
So reinstalling that package put my keyboard back into a usable state. But running setxkbmap then gives me a German layout instead of Swiss German:
$ setxkbmap -v 10 -option ctrl:nocaps
Setting verbose level to 10
locale is C
Applied rules from xfree86:
  model:      pc105
  layout:     de
  options:    ctrl:nocaps
Trying to build keymap using the following components:
  keycodes:   xfree86+aliases(qwertz)
  types:      complete
  compat:     complete
  symbols:    pc/pc(pc105)+pc/de+ctrl(nocaps)
  geometry:   pc(pc105)
and explicitly specifying a Swiss German layout results in the following error:
$ setxkbmap -v 10 de ch ctrl:nocaps
Setting verbose level to 10
locale is C
Warning! Multiple definitions of keyboard layout
         Using command line, ignoring X server
Applied rules from xfree86:
  model:      pc105
  layout:     de
  variant:    ch
  options:    ctrl:nocaps
Trying to build keymap using the following components:
  keycodes:   xfree86+aliases(qwertz)
  types:      complete
  compat:     complete
  symbols:    pc/pc(pc105)+pc/de(ch)+ctrl(nocaps)
  geometry:   pc(pc105)
Error loading new keyboard description
I really don't feel up to fixing this at the moment, so I've gone back to having an annoyingly placed Caps Lock key which I keep pressing at inopportune moments.
Sun, 05 Mar 2006
I use Gallery to display my photos. I am still running version 1. I generally work by using an old version of galleryadd.pl to upload image directories to gallery into a top level Pending folder that no one else can see. There I work on the albums before moving them to other places where they are more generally available.
A little while ago the "Move Album" page stopped working. The dropdown selection was not fully populated with all the albums, and the "Move Album" and "Cancel" buttons were missing.
I eventually got around to investigating what was happening. It turns out PHP was running out of memory. There were messages such as
Allowed memory size of 8388608 bytes exhausted (tried to allocate 177 bytes)
in the apache error.log file.
I fixed the problem by adding the following line to the .htaccess file for Gallery, which on Debian is found at /etc/gallery/htaccess
php_value memory_limit 4500000000
which bumps the memory up from about 8MB to about 4.5GB. Overkill? Maybe. But I have enough swap space, and that should hopefully allow me to create zip files for the albums (which gallery does for you) and they'll be of a suitable size to burn to dvd.
The other thing I always do after Debian upgrades Gallery for me is edit config.php (found at /usr/share/gallery/config.php in Debian) and change PhotoAlbumURL and AlbumDirURL to be relative URLs. Otherwise, for some reason I don't fully understand, Gallery runs extremely slowly for me this side of my firewall.
(Note that I have had to write PhotoAlbumURL and AlbumDirURL even though the actual variable names start with a lower case letter in each case. This seems to be some problem related to kwiki. Simply adding an exclamation mark in front of each variable does not help, even though it is needed for the way I have written the variable names. Does anyone know the solution to this problem? Note that the answer is probably not "don't use kwiki" unless it is accompanied by information on what to use instead.)
Why the Gallery developers don't get rid of the check that those URLs cannot be relative is beyond me. I presume they are saving users from themselves, but I dislike software that thinks it knows better than me.
[/software/gallery] permanent link
Mon, 13 Feb 2006
This is probably obvious to a lot of people, but it's fairly rare that I learn a new Perl programming trick these days, and I don't recall ever having seen this one before, so I thought I would make a note of it here.
Now, some people will tell you that a function should only have one exit point. If you subscribe to that opinion then this trick probably isn't for you. Personally I find multiple exit points can frequently simplify logic and avoid the need for temporary status variables or conditional blocks. They can also be abused of course, but I don't consider that a reason to ban them outright. Maybe this attitude is why I get on so well with Perl.
Anyway, sometimes you have a block from which you would like multiple exit points. If this happens to be a loop you can use last to exit from it. But if it is not a loop you might not want to create a function just for this purpose. Well, the solution turns out to be trivial. Just use last in the block. It's even documented in perldoc -f last.
Note that a block by itself is semantically identical to a loop that executes once. Thus "last" can be used to effect an early exit out of such a block.
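A contrived little example of the idiom:

#!/usr/bin/perl

use strict;
use warnings;

my $line = <STDIN>;
{
    last unless defined $line;   # early exit from the bare block
    chomp $line;
    last unless length $line;    # and another one
    print "you said: $line\n";
}
print "carrying on\n";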
I used this for the first time in my calendar and todo script. I'll probably use it again.
[/software/perl] permanent link
Blosxom uses the modification time of a file to determine the date that should be used for that file. That's all well and good, but that information can easily be lost with a stray cp command, for example, so I wondered about something a little more robust. I suspect it is very likely that someone has done something like this before, and it could probably be done better as a blosxom plugin, but I decided to add a line to each file telling what date should be used, and I wrote a tiny Perl script to set the modification time to that date. This script is called by cron every hour, so I shouldn't have to worry about editing a file to correct a typo, or to add an addendum, for example.
The system line in the script is a hack. I should probably use Perl's utime function, or at least check the return value or use system LIST, but sometimes you just want to get the job done.
So, for example, the start of this file is:
Blosxom Dates
meta-markup: date 200602131230
meta-markup: kwiki
The script I wrote is:
#!/usr/bin/perl

use warnings;
use strict;

use File::Find;

find(\&wanted, <~/g/blosxom>);

sub wanted {
    return unless /\.txt$/;
    open my $f, "<", $_ or die "Can't open $_: $!";
    while (<$f>) {
        # Set the file's mtime from the meta-markup date line, if there is one.
        if (/^meta-markup: date (.+)/) {
            system "touch -m -t $1 $File::Find::name";
            last;
        }
    }
}
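For reference, the utime alternative I mentioned would look something like this in place of the system line (a sketch - $1 holds the CCYYMMDDhhmm value from the meta-markup line, and Time::Local is a core module):

use Time::Local;

my ($year, $month, $day, $hour, $min) = $1 =~ /^(\d{4})(\d\d)(\d\d)(\d\d)(\d\d)$/
    or die "Bad date $1 in $File::Find::name";
my $time = timelocal(0, $min, $hour, $day, $month - 1, $year);
utime $time, $time, $File::Find::name
    or die "Can't set mtime on $File::Find::name: $!";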
Sun, 12 Feb 2006
I've thought for quite a while that I wanted to sort out my various TODO lists into something more general, and integrate it with a calendar of some sort.
My requirements didn't seem too extreme:
* access from any machine or some way to synchronise machines
* calendar and TODO list integrated
* let me know what's coming up and what I need to do soon
I looked at Chandler version 0.6.0, which is marked as "experimentally usable" but unfortunately I found it to be unusable. I see 0.6.1 is released now, so that might be better, but I've not tried it. But Chandler seemed to be overkill for what I wanted anyway.
So I started looking at command line applications. I experimented with ccal which seems quite nice. It's written in python and I even started hacking on it to make it do some more of what I wanted and some more of what it should have done, but in the end it just didn't do enough of what I wanted. Here's my patch, anyway:
--- /home/pjcj/utils/ccal06.py.org  2004-08-29 18:01:59.000000000 +0200
+++ /home/pjcj/utils/ccal           2006-02-12 17:38:08.230850964 +0100
@@ -182,11 +182,17 @@
             print "ccal entries:\n"
         if cal:
             print "Calendar:\n"
-            for item in self._cal.getItems():
-                if not str(item.__class__)=="__main__.ccalItem":
-                    item=ccalItem(item)
-
-                print item.entry
+            for i in range(999) :
+                viewtime = (datetime.datetime(self._cal.viewtime[0],self._cal.viewtime[1],self._cal.viewtime[2])+datetime.timedelta(days=i)).timetuple()
+                entries=self._cal.getItems(viewtime)
+                dateString = time.strftime(self._cal.dateformat,viewtime)
+                dateString += " ("+self.friendlyDateTimeDelta(datetime.datetime(viewtime[0], viewtime[1], viewtime[2]) - datetime.datetime(self._cal.localtime[0],self._cal.localtime[1],self._cal.localtime[2]))+")"
+                if entries!=None and len(entries) > 0:
+                    entries.sort(lambda x, y: cmp(ccalItem(x).entry, ccalItem(y).entry))
+                    for item in entries:
+                        if not str(item.__class__)=="__main__.ccalItem":
+                            item=ccalItem(item)
+                        print dateString+": "+item.entry
             print "\n"
         if todo:
             print "Todo list:\n"
@@ -523,6 +529,7 @@
 
                 ypos+=1
 
+                entries.sort(lambda x, y: cmp(ccalItem(x).entry, ccalItem(y).entry))
                 for entry in entries:
 
Then I started to take a look at some vim plugins, hoping for better luck. I found a couple that individually seemed to do part of what I wanted. First there was VimOutliner, which will do nicely for managing my todos. Then there was calendar.vim, which will deal with the calendar part. Now all I needed to do was join them up so that todos with a date went into the calendar. A little bit of Perl sorted that out. So now I have the infrastructure. All I have to do now is use it.
#!/usr/bin/perl

# Copyright 2006, Paul Johnson (http://www.pjcj.net)

use warnings;
use strict;

use File::Find;

my $dir = <~/g/calendar>;
chdir $dir or die "Can't chdir $dir: $!";

my %todos;
my $outline = "pjcj.otl";

# Read the outline file and note any entries that have associated dates.
open my $f, "<", $outline or die "Can't open $outline: $!";
while (<$f>) {
    # Dates have the format YYYY-MM-DD
    push @{$todos{$2}{$1 eq "X" ? "done" : "open"}}, $3
        if /^\s*(?:\[(.)\])?\s*(?:\d*%)?\s*(20\d\d-[01]\d-[0-3]\d)\s*(.+)/;
}

my ($y, $m, $d) = (localtime)[5, 4, 3];
my $today = sprintf "%04d-%02d-%02d", $y + 1900, $m + 1, $d;
print "Today is $today\n";

sub write_calendar {
    my ($cal) = @_;

    # Get the date from the filename.
    my ($y, $m, $d) = $cal =~ /\d+/g;
    my $date = sprintf "%04d-%02d-%02d", $y, $m, $d;

    my %entries;
    if (-e $cal) {
        # Read in the calendar file, ignoring any existing todos.
        open my $f, "<", $cal or die "Can't open $cal: $!";
        my @l = grep /\S/ && !/^(?:TODO|DONE) /, <$f>;
        chomp @l;
        @entries{@l} = ();
        # Delete the file - there may be nothing more to write to it.
        unlink $cal or die "Can't delete $cal: $!";
    }

    # Add the todos from the outline.
    @entries{map "TODO $_", @{$todos{$date}{open}}} = ();
    @entries{map "DONE $_", @{$todos{$date}{done}}} = ();

    {
        # If there's nothing to be done, just get out.
        last unless %entries;

        # Write the new calendar file.
        open my $f, ">", $cal or die "Can't open $cal: $!";
        print $f "$_\n" for sort keys %entries;

        # Print out what's coming up.
        # Don't print if it's in the past and there are no todos,
        # or if it's just todos and they are all done.
        last if $date lt $today && !@{$todos{$date}{open}};
        last if keys %entries eq @{$todos{$date}{done}};
        print "$date\n";
        print " $_\n" for sort keys %entries;
    }

    # We're finished with this date.
    delete $todos{$date};
}

sub wanted {
    # We're only interested in calendar files.
    return unless -f;
    unlink, return if -z;
    return unless /\.cal$/;
    write_calendar $_;
}

{
    no warnings "numeric";
    # Go through the calendar files in chronological order.
    find({ wanted     => \&wanted,
           preprocess => sub { sort { $a <=> $b } @_ },
           no_chdir   => 1 },
         <20??>);
}

# Now take the remaining todos and add them to the calendar files.
for my $date (sort keys %todos) {
    # Locate the calendar file from the date.
    (my $cal = $date) =~ s|-0?|/|g;
    $cal .= ".cal";
    write_calendar $cal;
}
Fri, 03 Feb 2006
I wanted to backup my MediaWiki each night and found that a little addition to my crontab would do the trick quite nicely:
05 05 * * * mysqldump --user=wikiuser --password=s3kr1t --single-transaction --all-databases | bzip2 > /path/wiki.`date +\%Y\%m\%d`.sql.bz2
Since this is on Solaris, the % signs need to be escaped, otherwise they represent newlines. But when I tried to use `date '+\%Y\%m\%d'` the backslashes got left in the output too. Smells like a bug to me.
I should probably backup the actual MediaWiki files too. Oh, and I'll have to clean out the backups every so often, I suppose.
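Something along these lines would deal with the files, assuming the wiki lives under /path/to/mediawiki (adjust to taste), and escaping the % signs as before:

15 05 * * * tar cf - /path/to/mediawiki | bzip2 > /path/wiki-files.`date +\%Y\%m\%d`.tar.bz2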
Wed, 01 Feb 2006
Recovering SVN bdb Repositories
I looked at a few of my older SVK repositories and found that debian had upgraded bdb under me and so the repositories couldn't be read. The error message was something like:
Berkeley DB error: Berkeley DB error for filesystem /home/pjcj/g/svk/testr/db while opening environment:
DB_VERSION_MISMATCH: Database environment version mismatch:
bdb: Program version 4.3 doesn't match environment version
The solution was to find a copy of svnadmin which was statically linked to the older library and use that to help upgrade the repository. While I was there, I moved the remainder of my repositories to fsfs.
I found such an svnadmin binary at uncc.org after which the sequence of commands is:
$ /path/to/static/svnadmin recover repository
$ /path/to/static/svnadmin dump repository > repository.dmp
At this point you could upgrade your bdb repository if you wanted to. I just blew it away and created a new fsfs repository. (Well, I was a little more careful.)
$ mv repository repository.bdb
$ svnadmin create repository
$ svnadmin load repository < repository.dmp
And then everything worked again.
[/revision_control] permanent link
Here's how I set up my base utilities on a new machine. It's not optimal by a long shot, but it's pretty easy.
% aptitude install svk
$ cd ~
$ svk depot --init
$ svk cp http://svk.server/svn/base
$ rm -r base
$ cd g
$ svk co //base/trunk base
$ cd ..
$ ln -s g/base/* g/base/.* .
[/revision_control] permanent link
It's really about time I made my SVK repositories public, and as a first step towards that I decided to install SVN::Web. I'm running debian, and since the Perl SVN bindings are such a pain to install I am using the system perl to run SVK. This means that I also need to use the system perl to run SVN::Web.
The first thing I did was to install debian's mod_perl. That wasn't too hard:
# aptitude install libapache2-mod-perl2
and everything still seemed to work. Now the problem is that SVN::Web isn't packaged for debian, so the installation is going to need to be an unholy mix of debian packages and CPAN modules. I decided to try to install as many of the debian packages as I could and install the remainder from CPAN. So, the debian packages I installed were:
# aptitude install libtemplate-perl libpod-coverage-perl \
    libtest-differences-perl libmodule-build-perl libtext-diff-perl \
    libxml-rss-perl libexception-class-perl libtemplate-perl-doc \
    libapache2-request-perl libnumber-format-perl \
    libtemplate-plugin-clickable-perl libemail-find-perl
They might not all be strictly necessary.
Then I needed to install some modules from CPAN:
# perl -MCPAN -e shell
cpan> install SVN::Web
...
cpan> install Exception::Class
...
cpan> install Devel::StackTrace
...
cpan> install Template::Plugin::Number::Format
...
cpan> install Text::Diff::HTML
...
cpan> install Template::Plugin::Clickable::Email
...
I had to install Exception::Class from CPAN since the debian version was too old. Without Text::Diff::HTML the process would just sit there eating all the CPU when you asked for HTML diffs. Without Template::Plugin::Clickable::Email the error log filled up saying it wasn't there.
Then I added to /etc/apache2/sites-available/default :
<Directory /var/www/svnweb>
    AllowOverride None
    Options None
    SetHandler perl-script
    PerlHandler SVN::Web
</Directory>

<Directory /var/www/svnweb/css>
    SetHandler default-handler
</Directory>
Then I had to hack on SVN::Web.pm itself to make it work with Apache2. The diff below seems to work for me, though it might well be either overkill or underkill.
--- /usr/local/share/perl/5.8.7/SVN/Web.pm.org  2006-01-30 20:37:46.000000000 +0100
+++ /usr/local/share/perl/5.8.7/SVN/Web.pm      2006-01-30 22:50:42.000000000 +0100
@@ -861,15 +861,17 @@
 
 sub handler {
     eval "
-        use Apache::RequestRec ();
-        use Apache::RequestUtil ();
-        use Apache::RequestIO ();
-        use Apache::Response ();
-        use Apache::Const;
-        use Apache::Constants;
-        use Apache::Request;
+        use Apache2::RequestRec ();
+        use Apache2::RequestUtil ();
+        use Apache2::RequestIO ();
+        use Apache2::Response ();
+        use Apache2::Const;
+        use Apache2::Const;
+        use Apache2::Request;
     ";
+    die $@ if $@;
+
     my $r = shift;
     eval "$r = Apache::Request->new($r)";
     my $base = $r->location;
 
@@ -921,7 +923,7 @@
     }
 
     mod_perl_output($cfg, $html);
-    return &Apache::OK;
+    return &Apache2::Const::OK;
 }
 
 =head1 SEE ALSO
So I battled SVN::Web and debian and I prevailed! You can see the results at svnweb.
[/revision_control] permanent link
Mon, 09 Jan 2006
For an awfully long time I have wanted to have a nice way to make notes and store little bits of information, mostly about things that I have done so that I know how to do it again. This repository of tips and tricks has mostly resided in my .zshhistory file, but whilst that has some advantages, it also has many disadvantages.
Then people started blogging. And I didn't. Mostly for the reasons discussed in why you should blog, which Mark Dominus mentioned was influential in his decision to start a blog. So having read that piece I decided not that I would blog, but that I could use a blogging tool to organise some of the notes and little bits of information I want to store. Do you see the difference?
So anyway, I installed blosxom on this system. It is fairly bland, out of the box. I'm no graphic designer, by any stretch of the imagination, but I did get it to use my standard CSS file, which brightened things up a little. I suspect I'll do some more customisation at some point in the future.
I also installed a couple of plugins. The first is kwiki, which saves me having to write HTML, and the second (because the first requires it) is meta, which allows me to specify that an entry is written in kwiki markup. I had to hack the kwiki plugin a bit to get the thing to work. It seems that the start sub is called to determine whether to use the plugin, but the start sub in kwiki was using information that was only available after running the story sub in meta, which happens some time later. So I changed kwiki to look like:
sub start { 1 }

sub story {
    my($pkg, $path, $filename, $story_ref, $title_ref, $body_ref) = @_;

    $$body_ref = CGI::Kwiki::Formatter->process($$body_ref)
        if $meta::markup eq 'kwiki';

    return 1;
}
which works a lot better. I'm not sure how much I like kwiki syntax yet. One thing I have noticed is that kwiki links need to be all on the same line, which makes some lines a bit long (I generally stick to 80 chars) and sometimes stops me formatting paragraphs with gqap in vim. But at least the URL is the first part of the link, so being generally fairly long it will often be placed on a new line anyway.
I recently installed MediaWiki at work and like it a lot. (Being solely for internal use, I don't have to worry about security problems with PHP.) I think I'd like to use MediaWiki syntax coupled with Template Toolkit, but we'll see how we go for now.
So how's that for a first entry?