Wednesday, December 16, 2009

Filling in PDF forms using Open Office - Mail Merge Style

We recently changed an HR-related supplier at work. As a result, every employee (100+) had to fill in a three-page form. The new supplier provided us with a non-editable PDF.

As anyone working in a mid-sized company knows, getting 100 people to do the same thing correctly is pretty much an exercise in futility :-)

Now, 98% of that form HR knew the answers to - name, address, date of first employment, etc. Open Office can do a 'mail merge' from the spreadsheet with that data in it to a document in Writer, but it can't import PDFs natively.

My first thought was to use the PDF import extension for Open Office. It works OK, but it imports the PDF as a Draw document rather than a Writer document, and Draw documents don't do mail merges as far as I can see.

My second thought was to create a Writer document and use the form as the page background. Turns out setting a page background with an image works OK, but the same image is repeated on every page. That won't work for our multi-page document unless we created three documents and somebody played manual collator for a while - and sure as heck something would get scrambled.

What I did in the end was time consuming but worked well.

  1. Convert each page of the pdf form into a tiff or similar high quality image file
  2. Register your spreadsheet or database as a data source.
  3. Start with a new Writer document. Hit Return enough times to create the number of pages you need.
  4. Choose Insert -> Frame.

    1. Make sure it's anchored to the page
    2. Position and size it to cover the entire page
    3. Name it on the Options Tab (optional, but handy)
    4. Set borders to none on the Borders tab
    5. On the Background tab change it to Graphic instead of Color, and browse for your first page image.
    6. Move the image around until it covers the entire page properly
    7. Right click and choose Alignment -> Back
    8. Hit OK
    9. Repeat for each page

  5. Choose View -> Data Sources and highlight your data source so the fields you want are displayed at the top of your window.
  6. Choose Insert -> Frame.

    1. Make sure it's anchored to the page
    2. Make sure auto size is off
    3. Name it on the Options Tab (optional, but handy)
    4. Set borders to none on the Borders tab
    5. Hit OK

  7. Now drag the Heading from the first column of data into that frame
  8. Move the frame to the right spot on the background to fill in the blank
  9. Lather, rinse, repeat for each piece of data on each page
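A note on step 1: poppler's pdftoppm can do the page-to-image conversion from the command line. This is just a sketch - the -tiff flag needs a reasonably recent poppler-utils build (older versions only emit PPM, which ImageMagick's convert can turn into TIFF), and the output naming is whatever pdftoppm generates:

```shell
# Render each page of a PDF as a 300dpi TIFF: page-1.tif, page-2.tif, ...
# (numbering may be zero-padded). Requires poppler-utils.
pdf_to_pages() {
    pdftoppm -r 300 -tiff "$1" "${2:-page}"
}
# e.g. pdf_to_pages form.pdf page
```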


When you choose 'Print' it will ask you if you want the blanks filled in, and away you go!

I haven't posted any screenshots - Let me know if you think you need them and I can create some.

Hope this helps somebody!

Tuesday, December 1, 2009

Installing a Samsung SCX-4521F in Linux - Ubuntu Hardy Heron actually

I've seen a lot of comments across the web about this particular printer and getting both printing and scanning working in Ubuntu.

It was easy for me - here's my simple checklist
Note I'm using a USB interface.


  1. Get the Samsung Unified Driver version 3
  2. Install it via tar xvzf SamsungXXX.tar.gz && cd cdroot/Linux && sudo ./install.sh. I did it without using X at all; I gather there's a little GUI installer, but it doesn't ask anything the text install doesn't.
  3. Curse Samsung for putting the freaking icon on your desktop and the root of your applications menu without asking
  4. Clean up the crap you just cursed about. UPDATE: I forgot to mention the specifics - /bin/Desktop, /bin/.gnome-desktop, /usr/sbin/Desktop, /usr/sbin/.gnome-desktop. Yes, that install script has some bugs in it!
  5. Try a test page. Samsung 'helpfully' decided to make the new printer the default, so lpr favourite.pdf should work fine.
  6. Try scanimage -L and see if you can see the scanner. If not (bet you can't), try sudo scanimage -L. If that works, it's a permissions problem.
  7. Add all the appropriate users to the lp group. e.g. sudo addgroup joeUser lp. The installer says it's doing something like this, but whatever it does doesn't work. Ubuntu has the handy scanner group as well, but the lp group ends up owning /dev/usb/0. Rather than muck about with updated udev rules, I just added the users to both the scanner and lp groups and it works fine.
  8. Log out of that terminal session and start a new one. A terminal doesn't pick up updated group membership until you log in again. Dunno whether that applies to the GUI users and groups tool or not. Let me know....
  9. Use scanimage -L to see if sane can see the scanner.
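To script the check in steps 6-8, here's a tiny helper of my own (not part of the Samsung installer) that just does a whole-word match on a group list, e.g. the output of id -nG:

```shell
# Return success if group $1 appears, as a whole word, in the
# space-separated group list $2 -- e.g. has_group lp "$(id -nG)".
has_group() {
    case " $2 " in
        *" $1 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Usage sketch: report which of the needed groups the current user lacks.
for g in lp scanner; do
    has_group "$g" "$(id -nG)" || echo "missing group: $g (sudo adduser \$USER $g)"
done
```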

Wednesday, November 25, 2009

Large Drives and USB Enclosures

OK, this is definitely a talk to the rubber ducky kind of post.

I have an old Mandrake box that various clients send their backups to overnight. I hook up an external USB drive and run a script that basically rsyncs the various backups to the external drive to be taken off-site. Been doing it this way for a while now.

I've upgraded the size of the drive in the external enclosure a couple of times - 160G to 320G to 1TB recently. When I first got the 1TB enclosure, I partitioned it with an iMac, then remembered I couldn't format ext3 with the Mac, so I connected it to a Linux box and formatted it there, but left the partitioning alone. Turns out the iMac had partitioned it GPT. I only realized this after I'd been using it for a couple of days, when this showed up in the logs:
kernel: /dev/scsi/host39/bus0/target0/lun0:<4> Warning: Disk has a valid GPT signature but invalid PMBR.
kernel: Assuming this disk is *not* a GPT disk anymore.

It was working, I didn't have a handy spot to move the large amount of data on the disk to, and fixing it seemed like it would probably destroy data, so I left it for a bit. I tested restoring data from the drive to be sure, and everything was working fine.

A couple of weeks later, during a really busy spell, I noticed the daily rsync jobs had turned deadly slow. Instead of 6MB/s I was getting 300k or less - almost like it was dropping to USB 1.1 instead of 2.0.

I twiddled with it for a bit, and since it was somewhat intermittent I tried replacing the USB controller (it's an add-on card) in the server. Over the next couple of days I tried another USB enclosure, another USB cable, and copying files from a different server. The problem kind of came and went, so no one change seemed to be the fix.

Since this is the off-site backup, I had started using another (smaller) drive to copy the recent information to - this setup was working fine. I bought another 1TB drive and enclosure, both different brands than the existing units, partitioned and formatted it correctly from a linux box, and started using it. It was running fine at full speed. When I had time I started copying the older archive info that was only on the first TB drive over as well.

Now this morning that new drive started running slow....

The only thing that makes any sense at all to me at this point is some problem with USB enclosures and large drives - e.g. once it gets past 650 meg or so they start having problems? But the two enclosures have completely different chipsets - one is IDE + SATA, the other is SATA only. It can't be that widespread an issue, can it?

OK, so now I've typed all this out, the ducky still hasn't said anything...

Anybody else want to comment?

Wednesday, November 18, 2009

Finding OSX Aliases in a SMB/CIFS Share

Just a quick follow up to my previous post.

In case it wasn't obvious - if you want to find all those OSX alias files it's just a simple

grep -r XSym /path/to/smbshare/on/server


Updated:

If I had more of them I'd hack up a script to pull the path info out of them and create the symlinks automatically....


This is really hacky, but you can make symlinks out of alias files with it.
If a dragon appears and demands a dumptruck for a four-wheeler, don't blame it on me!


read NAME && LINK=$(tail -n 2 "$NAME" | head -n 1) && rm -v "$NAME" && ln -sv "$LINK" "$NAME"
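Expanding that into the script I threatened above - entirely my own sketch. It assumes the alias files use the Minshall-French layout (a literal 'XSym' on line 1, the target path on line 4), and it deletes the alias file, so test it on copies first:

```shell
# Replace Minshall-French "XSym" alias files under a directory with real
# symlinks. Assumes line 1 is "XSym" and line 4 holds the target path.
fix_aliases() {
    find "${1:-.}" -type f | while IFS= read -r f; do
        [ "$(head -n 1 "$f" 2>/dev/null)" = "XSym" ] || continue
        target=$(sed -n '4p' "$f")
        [ -n "$target" ] || continue
        rm -f "$f" && ln -s "$target" "$f" && echo "linked: $f -> $target"
    done
}
```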

Samba Unix Extensions and Following Symlinks with OSX Leopard and Ubuntu Hardy

Sorry for that title - just trying to help the Google bot help the future.

So here's the problem. When we upgraded the bulk of the Macs at work to Leopard, symlinks on the Samba share quit working. They still worked in Windows, and on Hardy clients, but not Leopard.

If you created an 'alias' in Leopard, then the Macs could follow it, but the Linux boxes could not.

Lots of people have the same or similar issues.

Turns out Leopard is the first OSX client to support Unix extensions in CIFS.
So the client ends up trying to follow symlinks *locally* rather than on the server. 'Dumber' clients like Windows or Gnome Nautilus smb:// don't do Unix extensions, so the server resolves the symlinks for you.

The 'fix' is to set 'unix extensions = no' globally in the smb.conf file.
Now the OSX clients won't get the unix info either, and the server will go back to resolving symlinks for them.
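In smb.conf terms, that's just this (in the [global] section; reload or restart Samba afterwards):

```ini
# /etc/samba/smb.conf
[global]
    unix extensions = no
```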

The drawback to this fix, as I understand it, is what you give up: SMB with unix extensions on is pretty much a complete replacement for NFS, in that the full range of unix permissions can be provided to unix clients - it's not limited anymore. You would need some external system to keep UIDs/GIDs synchronized, but NFS has that issue too.

What I still don't understand is how to get symlinks to resolve to the correct spot on the server when unix extensions *are* on. E.g. if I was using a 'smart' Linux client (like mount.cifs and the /etc/fstab file), how do I get symlinks to work? Comments appreciated, as I've spent too much time on this issue already.

Brian

PS - I also found out that when you create an 'alias' in OSX on a SMB share, the file it creates is actually in Minshall-French format - see this page that talks about creating symlinks on Windows servers. That's what OSX does on a Linux server as well. Weird that the Linux client doesn't follow it!

Monday, September 14, 2009

Adding Java apps to the Motorola K1m For Free

I've got a Motorola K1m running on the President's Choice Financial pay as you go program. (It's resold Bell Mobility services) Cheap and reliable!

It has a built in browser that works ok for some things, but won't cooperate with the identi.ca microblog I post to.

I tripped over an interesting pair of applications on the Substance Of Code blog that looked like they might work: Mobidentica and Twim. Now how the heck can I install those on my phone? It's a somewhat locked-down system. :-)

Turns out the latest version of Bitpim (1.06) for Leopard will actually talk to my phone. I'd tried with earlier versions and had no luck, but this time I was able to get a working connection.

Since Bitpim won't automatically discover the phone, what I found to work was this - use a USB cable, plug it in, and under preferences you'll have to manually specify the K1m and pick a COM port. Go for the one marked 'modem' - in my case it was /dev/cu.usbmodem5d11

Now, under the 'View' menu turn on the 'View File System' option. Pick the new 'File System' entry on the left side of the main window pane and start twisting the little triangles next to the '/' entry. Confusingly, only files show in the next column; subdirectories only appear under the '/' entry AFTER you twist the triangle. Harder to explain than to do - just try it.

Navigate to /brew/mod/jbed/preinstall. Don't worry about the hacky-looking path, that's really what's already there. Right click in the last column and 'add files' to add the .jad and .jar files you downloaded from the Substance Of Code site. You'll see the progress on the bottom right as it uploads the files to your phone. Quit Bitpim and start Java on your phone. It will discover and compile the programs, quit Java and then restart on its own. Now your application is available along with the sample games etc.

See this post for most of the info I've just provided here.

I'm sure this will apply to other Java phone applications too. I'll be keeping an eye out for more possibilities! Hope that helps someone.

Brian

Friday, September 4, 2009

Opening up an Intel iMac 17"

I just replaced the hard drive on a 17" Intel iMac. (Last one before the grey and black models). It's amazing how unfriendly it is to disassemble compared to the iMac G5. If this is your first time disassembling stuff, don't start with this model iMac. :-) If doing stuff like this is normal for you then don't be scared of it, just be careful.

What isn't clear from the pictures and movies I saw on the internet is how to release the casing around the iSight camera. One video I saw said "shake it and it'll come loose". Ummm... NO.

Here's the picture you need to see.




The two top clips are actually spring loaded hooks. Lift the thick part of the metal and they unhook.

FYI
Brian

Friday, August 28, 2009

Apple PowerBook G4 400MHz xorg.conf Debian Lenny

More of a self note so I can find it again someday...

If you aren't getting X with a default Debian Lenny install on a 400MHz Apple PowerBook G4 Titanium ATI Rage Mobility 128 M6 (got all that Google?)

Try this xorg.conf - the 'Modes' section and the UseModes line are the additions, I think. I kept tweaking it until it stopped being broken, so other things might not quite be standard...

# xorg.conf (X.Org X Window System server configuration file)
Section "InputDevice"
Identifier "Generic Keyboard"
Driver "kbd"
Option "XkbRules" "xorg"
Option "XkbModel" "macintosh"
Option "XkbLayout" "us"
EndSection

Section "InputDevice"
Identifier "Configured Mouse"
Driver "mouse"
EndSection

Section "Device"
Identifier "Configured Video Device"
Driver "radeon"
BusID "PCI:0:16:0"
EndSection

Section "Modes"
Identifier "Modes0"
Modeline "1152x768" 64.994 1152 1178 1314 1472 768 771 777 806 +HSync +VSync
EndSection

Section "Monitor"
Identifier "Configured Monitor"
UseModes "Modes0"
EndSection

Section "Screen"
Identifier "Default Screen"
Device "Configured Video Device"
Monitor "Configured Monitor"
EndSection

Thursday, March 19, 2009

A (long) day in the life of a sysadmin....

Now sit right back and you'll hear a tale...

[digression]
Isn't it criminal that Baywatch is the first link in Google on that phrase?
Goes to demonstrate that they just aren't that great at search, just better
than the alternatives!
[end digression]


I was looking at my daily logs. You do that too, right? It's very helpful for staying ahead of the game and knowing what's going on rather than playing catch-up all the time. On the other hand it can be pretty darned tedious. I use logwatch, of course, to make it bearable.

Lately I've been rolling out Ubuntu Hardy desktop boxes. I remarked to myself that since I'm monitoring the drive space using Nagios, the disk space report in every log file was filler I really didn't need to be looking at. And hey, I like fortune as much as the next guy, but once you've seen a dozen or so you really don't need any more for the day. Why not turn that stuff off?

Now I knew that logwatch configuration seems to be a bit, well, baroque,

but hey, I can handle it right?
[that's called foreshadowing]

Heck - I'd even seen an article about managing your log files that mentioned logwatch on my RSS reader. Let's do this!

[The road to hell...]


==> man logwatch
==> view /usr/share/doc/logwatch/README
==> view /usr/share/doc/logwatch/HOWTO-Customize-LogWatch

OK, the default files are in /usr/share/logwatch, and the system local ones go in /etc/logwatch.
Let's go look.
==> cd /etc/logwatch
poke around
It's empty. Five folders, no conf files. Folders with .conf in their name though! Weird, but true. Let's go check out /usr/share/logwatch
[poke around]
Ok, the stuff is all here - and reading the docs at /usr/share/doc/logwatch tells me that these are the defaults, and stuff put in /etc overrides them. Since logwatch is used on more than just Linux, it seems a bit different, but obviously flexible. Let's peruse the logwatch.conf file - it's well commented.
# You can also disable certain services (when specifying all)
Service = "-zz-network" # Prevents execution of zz-network service, which
# prints useful network configuration info.
Service = "-zz-sys" # Prevents execution of zz-sys service, which
# prints useful system configuration info.

Look - they've disabled two services 'zz-network' and 'zz-sys' by default. I wonder what they do?
[Cue the music - the abyss opens]

Well, it says they are useful - let's use them!
A nifty feature of logwatch is you can just keep running it over and over again from the commandline with different options and see what the output will look like. Once you get it tuned up, you adjust the conf file to match and cron takes it from there.
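For reference, the kind of override that eventually goes in /etc/logwatch/conf/logwatch.conf looks something like this. Treat the service names as my assumptions - check /usr/share/logwatch/scripts/services for the exact names on your install:

```text
# /etc/logwatch/conf/logwatch.conf - copied from
# /usr/share/logwatch/default.conf/logwatch.conf, then edited.
Service = All
Service = "-zz-disk_space"   # Nagios already watches disk space
Service = "-zz-fortune"      # enough fortunes for one day
# ...and the shipped "-zz-network" / "-zz-sys" lines deleted,
# so those two reports run again.
```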
==> sudo logwatch --print
#### Logwatch 7.3.6 (05/19/07) ##
Processing Initiated: Thu Mar 19 14:52:42 2009

[snip 172 lines of detail I see waay too often as it is....]

==> sudo logwatch --print --service 'zz-network'

[snip interesting network info]

OK, so that one's interesting. What about zz-sys?
==> sudo logwatch --print --service 'zz-sys'
---- System Configuration Begin ---
No Sys::CPU module installed.
To install, execute the command:
perl -MCPAN -e 'install Sys::CPU'
No Sys::MemInfo module installed.
To install, execute the command:
perl -MCPAN -e 'install Sys::MemInfo'

Huh, missing modules. Perhaps that's why it's disabled. Let's just make sure
==> cd /usr/share/logwatch/scripts/services
==> ./zz-sys
No Sys::CPU module installed.
No Sys::MemInfo module installed.

OK. Let's install those missing modules. Now, I don't have anything against CPAN, but since these are a bunch of pretty much identical machines that need to be kept that way, keeping to the repositories, or at least .deb files, is definitely the right way to go.
==> sudo aptitude search meminfo
p python-meminfo-total

Close, but no cigar. Try again.
==> sudo aptitude search perl

[snip snip snip]

No! - don't do that! 1500 lines of stuff...
==> sudo aptitude search perl | grep mem

[snip 8 lines - nothing we want]

OK. So we need to make our own debs. This Debian Admin article is perfect.

Step one - Head over to CPAN and download the source. Their search box makes it easy to find the links.
==> wget http://search.cpan.org/CPAN/authors/id/B/BU/BURAK/Sys-Info-0.69_07.tar.gz
==> wget http://search.cpan.org/CPAN/authors/id/S/SC/SCRESTO/Sys-MemInfo-0.91.tar.gz

Step two - install dh-make-perl
==> sudo aptitude install dh-make-perl
Need to get 5813kB of archives. After unpacking 21.3MB will be used.
Do you want to continue? [Y/n/?] y

Wow - 32 packages of dependencies! Well, that's what apt is for right? Go!
[hundreds of lines scroll by while I go grab coffee. That's a mistake...]

Building tag database... Done

OK - next step
==> tar xvzf Sys-Info-0.69_07.tar.gz
==> tar xvzf Sys-MemInfo-0.91.tar.gz
==> dh-make-perl Sys-Info-0.69_07
Searching for Sys::Info::Driver::OSID package using apt-file.
E: The cache directory is empty. You need to run 'apt-file update' first.
Searching for Sys::Info::Base package using apt-file.
E: The cache directory is empty. You need to run 'apt-file update' first.
Needs the following modules for which there are no debian packages available
- Sys::Info::Driver::OSID
- Sys::Info::Base

Now I was just plain dumb. I missed the point of the apt-file update message altogether by reading it as apt-get update.

OK, so I need a couple of dependencies too. Man, it would be nice if these were in the repositories.
[Have you hugged a packager today? If not, did you buy one beer? or at least say thanks? Just wondering.....]

==> wget http://search.cpan.org/CPAN/authors/id/B/BU/BURAK/Sys-Info-Base-0.69_06.tar.gz
==> tar xvzf Sys-Info-Base-0.69_06.tar.gz
==> dh-make-perl Sys-Info-Base-0.69_06
Done

OK, now we're cooking.
==> wget http://search.cpan.org/CPAN/authors/id/B/BU/BURAK/Sys-Info-Driver-Linux-0.69_06.tar.gz
==> tar xvzf Sys-Info-Driver-Linux-0.69_06.tar.gz
==> dh-make-perl Sys-Info-Driver-Linux-0.69_06

[snip - still ignoring the message]

Needs the following modules for which there are no debian packages available
- Unix::Processors
- Sys::Info::Base
- Linux::Distribution

Grrr..... Sigh.
==> wget http://search.cpan.org/CPAN/authors/id/W/WS/WSNYDER/Unix-Processors-2.040.tgz
==> tar xvzf Unix-Processors-2.040.tgz
==> dh-make-perl Unix-Processors-2.040

[snip including copyright warning. Glad I'm not a packager that has to worry about these things for everybody else]

Done
==> wget http://search.cpan.org/CPAN/authors/id/K/KE/KERBERUS/Linux-Distribution-0.14.tar.gz
==> tar xvzf Linux-Distribution-0.14.tar.gz
==> dh-make-perl Linux-Distribution-0.14
Done

OK, now to make a deb!
==> cd Linux-Distribution-0.14
==> debuild
The program 'debuild' is currently not installed. You can install it by typing:
sudo apt-get install devscripts
-bash: debuild: command not found
==> sudo aptitude install debuild

[Yep - stupid again. Dunno what the heck I was thinking....]

Couldn't find package "debuild". However, the following packages contain "debuild" in their name:
pdebuild
==> sudo aptitude install pdebuild
Need to get 150MB of archives. After unpacking 312MB will be used.

Yikes - hundreds of dependencies and many megabytes of Java-looking stuff! Wha?
==> debuild
The program 'debuild' is currently not installed. You can install it by typing:
sudo apt-get install devscripts
-bash: debuild: command not found

[face-palm]

==> sudo aptitude install devscripts
Building tag database... Done
==> debuild
This package has a Debian revision number but there does not seem to be an appropriate original tar file or .orig directory in the parent directory;
(expected liblinux-distribution-perl_0.14.orig.tar.gz or Linux-Distribution-0.14.orig)
continue anyway? (y/n) y

[snippage]
gpg: [stdin]: clearsign failed: secret key not available
debsign: gpg error occurred! Aborting....
debuild: fatal error at line 1174:
running debsign failed

I don't need it signed. How do I disable that?
==> man debuild

Hey - right here in the examples it shows a binary-only build that skips signing. It only needs the secret key when it signs the code, right?
==> debuild -i -us -uc -b
dpkg-deb: building package `liblinux-distribution-perl' in `../liblinux-distribution-perl_0.14-1_all.deb'.

Yea! A deb! Finally. Now to do the rest of 'em!
==> cd Unix-Processors-2.040/
==> debuild -i -us -uc -b
dpkg-deb: building package `libunix-processors-perl' in `../libunix-processors-perl_2.040-1_i386.deb'.
==> cd Sys-Info-Driver-Linux-0.69_06/
==> debuild
- ERROR: Test::Sys::Info is not installed
- ERROR: Unix::Processors is not installed
- ERROR: Linux::Distribution is not installed
- ERROR: Sys::Info::Base is not installed

OK, gotta install these in the right order too!
==> sudo dpkg --install lib*deb
Setting up liblinux-distribution-perl (0.14-1) ...
Setting up libunix-processors-perl (2.040-1) ...
==> dh-make-perl Sys-Info-Driver-Linux-0.69_06
E: The cache directory is empty. You need to run 'apt-file update' first.
The directory Sys-Info-Driver-Linux-0.69_06/debian is already present and I won't overwrite it: remove it yourself.

Now I read the darned message....
==> rm -rf Sys-Info-Driver-Linux-0.69_06/debian/
==> sudo apt-file update
Can't get ftp://ftp.mondorescue.org/ubuntu/dists/8.04/Contents-i386.gz

Huh? Oh, ya, that's an extra repository, won't matter for this.
==> dh-make-perl Sys-Info-Driver-Linux-0.69_06
Needs the following modules for which there are no debian packages available
- Unix::Processors
- Sys::Info::Base
- Linux::Distribution
==> cd Sys-Info-Driver-Linux-0.69_06/
==> debuild -i -us -uc -b
- ERROR: Test::Sys::Info is not installed
==> wget http://search.cpan.org/CPAN/authors/id/B/BU/BURAK/Test-Sys-Info-0.13.tar.gz
==> tar xvzf Test-Sys-Info-0.13.tar.gz
==> dh-make-perl Test-Sys-Info-0.13
Done
==> cd Test-Sys-Info-0.13/
==> debuild
gpg: [stdin]: clearsign failed: secret key not available

Bah - idiot!
==> debuild -i -us -uc -b
dpkg-deb: building package `libtest-sys-info-perl' in `../libtest-sys-info-perl_0.13-1_all.deb'.
==> sudo dpkg --install libtest-sys-info-perl_0.13-1_all.deb
==> cd Sys-Info-Driver-Linux-0.69_06/
==> debuild -i -us -uc -b
dpkg-deb: building package `libsys-info-driver-linux-perl' in `../libsys-info-driver-linux-perl_0.69-06-1_all.deb'.
==> sudo dpkg --install libsys-info-driver-linux-perl_0.69-06-1_all.deb
==> dh-make-perl Sys-MemInfo
Cannot find a description for the package: use the --desc switch

Hey, new errors. What fun! At least it's a helpful error. Let's try...
==> dh-make-perl --desc Sys-MemInfo Sys-MemInfo
Done
==> cd Sys-MemInfo/
==> debuild -i -us -uc -b
dpkg-deb: building package `libsys-meminfo-perl' in `../libsys-meminfo-perl_0.91-1_i386.deb'.
==> sudo dpkg --install libsys-meminfo-perl_0.91-1_i386.deb
Setting up libsys-meminfo-perl (0.91-1) ...

OK - this is IT. The moment of truth! Drumroll please!
==> ./zz-sys
No Sys::CPU module installed. To install, execute the command:
perl -MCPAN -e 'install Sys::CPU'
Memory: 495 MB
Machine: i686
Release: Linux 2.6.24-22-generic

Long pause. Quiet whimpering....
==> sudo aptitude search perl | grep cpu
p libsys-cpu-perl - Sys::CPU Perl module for getting CPU infor

Yes Virginia, there is a Santa Claus.

==> sudo aptitude install libsys-cpu-perl
Building tag database... Done


==> ./zz-sys
CPU: 1 Intel(R) Pentium(R) 4 CPU 2.40GHz at 2392MHz
Memory: 495 MB
Machine: i686
Release: Linux 2.6.24-22-generic

Four lines of output.
[And I still haven't gotten it into the logwatch.conf]
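For future me, the whole dance condensed - plus one bit of pattern-spotting: dh-make-perl turns a CPAN tarball like Dist-Name-1.23 into a package called libdist-name-perl. The function below is my own sketch of that naming rule, handy for checking whether a module is already in the repos before building anything:

```shell
# The working sequence, condensed:
#   wget <tarball from CPAN>; tar xvzf <tarball>
#   sudo apt-file update                          # once, so dh-make-perl can search
#   dh-make-perl <unpacked-dir>
#   cd <unpacked-dir> && debuild -i -us -uc -b    # -us -uc skips GPG signing
#   sudo dpkg --install ../lib<name>-perl_*.deb   # dependencies first!

# Sketch: predict the Debian package name dh-make-perl will use for a
# CPAN tarball (assumes a Name-1.23.tar.gz style version suffix).
cpan_to_deb() {
    base=$(basename "$1" .tar.gz)
    dist=$(echo "$base" | sed 's/-[0-9][^-]*$//')   # strip the version
    echo "lib$(echo "$dist" | tr 'A-Z' 'a-z')-perl"
}
# e.g. aptitude search "$(cpan_to_deb Sys-MemInfo-0.91.tar.gz)"
```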

Some days you're the windshield............

Wednesday, February 25, 2009

Thanks to datainadequate's comment on an earlier post, I've got another book to take a look at - Dive Into Python. I did see mention of it before, but the tag line saying it was 'for experienced programmers' scared me off without looking at it further. What the heck, I'm game this time around.

Also of note, I'm hoping to have current versions of both O'Reilly books, Learning Python and Programming Python, in my possession soon, so once my eyes quit glazing over from the information overload I'll try and post some further reviews of how I think they all stack up.

Maybe I should have been a librarian :-) I wonder if librarians that like to read too much end up like cooks that like to eat too much....

Sunday, February 22, 2009

Stop Vim From Creating Backup Files of bzr_log entries during commit

Here's a handy tip from Paul Brannan. I pulled this from a Launchpad bug report.

I use Vim as my default $EDITOR. When I commit a change using bzr, it opens Vim up and lets me type my comments in.

Unfortunately, after I save and exit, it leaves behind a bzr_log backup file with a ~ in that directory too. I want Vim to make backup files normally, but not when doing bzr commits.

Paul to the rescue:

I solved this with:

~/.vimrc:
filetype on
filetype plugin on

~/.vim/ftplugin/bzr.vim:
set nobackup
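If you'd rather keep everything in one file, the same effect can likely be had with a single autocommand in ~/.vimrc. This is my own variant, and it assumes bzr names its commit-message files bzr_log.*:

```vim
" Hypothetical one-file alternative to the ftplugin approach:
" skip backup files only for bzr commit messages.
autocmd BufNewFile,BufRead bzr_log.* setlocal nobackup
```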

(See - it pays to read bug reports! Now go check out all the other ones and let me know the good parts ok?)

Thursday, February 19, 2009

Python - The Never Ending Journey Of A Thousand Miles

Inch by Inch, Row by.... wait a minute, this is Python not Inchworm. Well, whatever.

Note, if you aren't a Monty Python fan it's hard to do the community suggested Python jokes, so you're stuck with my lame stuff instead. Joke writers cost money you know (and I'm free, err.. you know what I mean).

As I mentioned previously, I began learning Python by starting with 'A Byte of Python'. I'm enjoying it, and learning. Eventually I hit some example code showing how objects inherit class-level variables, but don't share variables among themselves (downwards, not sideways).

The example made sense, and proved his point, but when I ran it on my machine it kicked out a Warning I didn't understand. So, I took my code and the warning message over to the Python IRC channel and they kicked it around a little bit - end result was they didn't like the example that much. They explained the problem to me, and I understood why the warning was there, but I didn't come up with a good way to refactor it so it didn't happen again.

During the conversation, someone suggested looking at Think Python. Turns out that one's available online under a free license as well. Off I went to take a gander.

It's quite good too. I like the way it talks about how to program, as well as how to program in Python. It assumes you have a decent knowledge of geometry and some trig which doesn't suit me all that well (I'd love to go back and enroll in those high school advanced math courses I never took). As I'm going along it has started using a 'turtle' program written in tk. It's kinda interesting to play with, but not my particular cup of tea. All in all it's enough of a contrast to the Byte of Python book I'm going to use both going forward.

I was also given a copy of the 1999 version of Learning Python from O'Reilly. Now I normally like and use O'Reilly books all the time, but this one seems a bit dangerous for a newbie to use. It covers Python up to version 1.5, and while I don't know how much has changed since then, even a quick flip through shows me the style is different from what I've been exposed to so far, and I don't want to start out learning deprecated ideas and syntax styles. Unless somebody tells me different, I'll just keep it around as reference material.

There was another interesting tidbit that came out of that IRC channel discussion. At one point I indicated I was starting to get a feel for the syntax, but not the terminology Python uses. A poster (sorry! didn't record who you were!) emphasised that getting the terminology right is important! His point was that you have to be able to think about your problems accurately and concisely, and to do that, you have to grok the terminology properly. Fuzzy logic leads to fuzzy programs. I wish I could quote him/her verbatim - it was much more compelling there than I can make it sound now.

Object orientated programming is starting to make more sense as a concept, but I still don't know how to design programs that use the concept properly. Breaking a problem down as a series of steps, then writing code that completes the steps, is a logical way to program that comes naturally to me (and most others, I bet). OO doesn't seem to fit that style properly, but I'm just not sure what to replace it with yet. Pointers welcome.

Completing the exercises (not slavishly, but doing them a little bit my way) has been helpful - picking up common syntax errors etc. and getting my head back in programming space. Committing to bzr and pushing to Launchpad is working very well - I just need to improve the quality of my comments and get more consistent timing on my commits.


I have to say that the quantity and quality of information, combined with the newbie-friendly atmosphere I'm finding all over the 'Net, is really encouraging me that Python truly is a great language to learn and use all the time! Let's keep it rollin'

Wednesday, February 4, 2009

A Space In The Right Place Saves Nine

Experienced programmers please set down any hot liquids before proceeding. The following comments are from a beginning programmer, and as such, will likely promote laughter among those who know better. That's why I'm writing this - you guys just can't regress all the way back to this level of ignorance easily :-)

The one thing everybody knows about Python is that whitespace is significant, and it's really a pain. OK, so now that I can do 'Hello World' in three - count 'em three! - different scripting languages, what do I think about Python syntax?

I just don't see why everybody thinks it's hard. To me, it's a breath of fresh air! So far, the only time I've seen that white space matters is leading whitespace to set off a block of statements.
There's no 'end' to that block except the ending of the indentation. OK, so it's something to keep in mind. On the other hand, bash's 'use the $ when you read the value but not when you assign it' is a lot more confusing. And don't get me started on Perl. When to use a ( vs a { vs a [ is a lot harder to figure out, and $ or % or @ for variables ain't exactly simple either. Yes, Perl logically hangs together, and I get the fact that you should know which symbol to use when, based on what you are trying to do and what kind of data you are storing. I even appreciate the fact that it's a form of error checking - because if you use the wrong symbol because of a logical error it'll catch it for you.

On the other hand, in Python a variable is a variable regardless of the contents. The programmer has to keep it straight in his head rather than the syntax enforcing it. It might make things more error prone, but it sure makes it easier to read. The lack of end-of-line characters in particular is refreshing, unlike bash's 'you might want to use a semi-colon here - or not...' approach.
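A tiny made-up sketch of both points - the name carries no sigil or type, and the newline (not a semicolon) ends each statement:

```python
# The same name can hold any kind of value - no $, %, or @ to pick,
# and nothing to update when the contents change type.
thing = 42            # an integer...
thing = "forty-two"   # ...now a string
thing = [4, 2]        # ...now a list

# No statement terminator needed - the end of the line is the end
# of the statement.
total = sum(thing)
print(total)  # 6
```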

So, if you were thinking that counting spaces was a necessary evil in Python, forget it, and just try it out and see what you think!

Sunday, February 1, 2009

How's It Going?

When we last saw our brave hero, he'd just emerged victorious from his struggle over the dog pack of FLOSS project hosting choices, but the looming battle against the three headed dog of Python, Bazaar and Object Oriented Programming was yet to begin.... Will he survive to see another day?

Sorry - Rocket Robin Hood flashback. Thanks Teletoon Retro....

Just a quick update on where I'm at. I've set up a project on Launchpad. That was a straightforward click n' drool kinda thing - just fill in the blanks where required. I spent some very constructive time reading the documentation. It's got lots of information on all the various Launchpad features (and there are quite a few), but I was most struck by the amount of 'why' as well as 'how' in the information. Since I've not been exposed to project management per se before, it was good to see how the website was designed around a project flow, rather than the other way around, and to be exposed to the common steps most projects go through.

I had already registered my GPG key when I signed the code of conduct earlier, so I just had to create and upload an ssh key for use with bazaar. It took me a bit of twiddling to get that working, mainly because I wanted a separate launchpad key from my regular ssh key. My regular key is already used in too darned many places. The twiddling was purely self inflicted - launchpad was working perfectly all along :-)

Next up, I went looking for some docs to get me started on learning Python. A quick inquiry in the (newly created) Python group on identi.ca pointed me at 'A Byte of Python'. Turns out it's licensed CC and available for download.

As I work my way through the examples, I've been creating them in GVim. Another bit of Googling, followed by an identi.ca query, found me some pointers for using Python in Vim. As I've created and tested each example, I've committed it to a +junk repository and pushed it up to Launchpad. This post by Paul Hummer made getting that all set up painless and simple.

Are you seeing a trend here? I'm using Linux, SSH, GPG, GVim, Identi.ca, Python, A Byte of Python, and a blog posting by a Launchpad dev. All of it FLOSS. All of it no charge. All of it quality information and tools - downloaded off the internet by nothing more than the sweat of my little pinkies. It's just amazing when you sit back and think about it. It's downright flippin' awesome in fact.

Thank you one and all that made this happen, and continue to make it happen. Hopefully I can do my little part too, and keep the steamroller rolling.

Wednesday, January 21, 2009

Choosing a FLOSS Project Host

Tune in next week to hear Mrs Piggy say - oh wait a minute, this is next week, kinda.

So, the next step in building the GREATEST APPLICATION EVER (or at least another misc software thingy) is picking a place to host the code. (Those of you not paying attention can find out why we need one here.)

What are my choices?
  • Self hosting
  • Google Code
  • Sourceforge
  • Savannah
  • Github
  • Trac
  • Tux Family
  • Launchpad
Let's take those in order. (Good thing they are in the right order already, huh?)

Self hosting is out. As I said before, I want to work on the project, not the website, as much as possible. Mediawiki and bugtracker would likely be all I need, but I'd rather someone else do the plumbing on this.

Google Code is easy to say no to as well. Two big knocks here are unreliable downloads (I mirror a Google Code project on a cheap webhosting plan, because the Google Code repository barfs on people regularly...) and typical Google lack of support. I know I'll need hand-holding, and with Google I'm not going to get it.

Sourceforge is the biggest one of the bunch, obviously. It's fairly full featured on the public exposure side, and it obviously works for many people. From using Sourceforge casually (i.e. as a user, not a programmer), the experience has been ok but annoying. I don't like their default web page setup; it seems quite unintuitive to me, and that's likely why everyone puts their own web pages up in front of it. Their bug reporting and built-in FAQ stuff is clumsy looking too - it often seems to be poorly handled by smaller projects. And again as an end user, I've had issues with their registration process for many things in the past. I've also heard of reliability issues, although not so much lately.

To sum up, lots of little niggly negatives that I might be exaggerating. It works for so many it must be a real alternative. If there wasn't a better alternative I'm sure I could make it work.

Savannah is the GNU project host. My comments here can be summed up pretty easily: GNU projects on average don't seem to be very attractively presented, and Savannah is no exception. I dunno how well it works, but it looks pretty GNUish :-) Knowing the GNU guys, it's likely reliable and has some cool features once you grok it. Savannah seems to have some sort of approval process where they make sure you are the kind of project they want to host. Since both the projects I've been thinking about may have some dependencies that aren't 100% GNU (I'm thinking out loud here, I could be wrong), that might be an issue, and I'm not sure my plans are firmed up enough to waste somebody's time reviewing them at this point. Without worrying too much about what they provide, I don't think this is a good fit.

Github is a cool looking newer alternative. Git seems to be one of the current stylish new tools, and Github jumps right in there with both feet. Free software projects host for free, and proprietary ones pay, which is a logical enough way of approaching things. They've got some stuff you don't see elsewhere, leveraging (I assume) the power of git, like graphing code changes and other geeky good stuff. They've got training videos, which shows you how new-wavish they are, and there's a good vibe about the place. When I threw out a twitter query last week about potential project hosts, Github got two thumbs up from the responders. On the other hand, it's all very developer centric, rather than end user centric. As a regular ole 'how do I use this thing?' user, it's not very friendly. There are related hubs, Campfire and Lighthouse, that round out their offerings, but they seem a little more profit minded than Github itself. I doubt my project would ever hit the point of requiring one of their for-pay services, but hey, I can dream can't I? The last thing I noticed, and probably the biggest negative, is how Ruby centric the whole thing appears, at least on the surface. Ruby on Rails is the biggest project they have, and I bet its presence there is why so many other related projects ended up there. I've nothing against Ruby, it's just another language I don't know (yet?), but it shows the emphasis of the place.

Again, like Sourceforge I think I could make this work, but I don't think it's the best choice.

Trac is pretty easy for me to rule out too. First up, it's built around Subversion, which would be yet another version control system to learn. Second up is the web interface. I don't think I've ever used a Trac based site (again, as an end user) that made a lot of sense to me. These sites seem a confused mix of wiki, FAQ and bug tracker, with a real blurred line between end user and developer, and no clear indication of what you can change and what you can't. A good admin could likely herd it into line, but I don't want to get into that.

Tux Family actually looked good for a couple of minutes. There are some neat projects there, and the atmosphere seems great too. It's a moderated sign up process like Savannah. The multilingual FAQ was a nice touch, I thought, and the European heritage seems to shine through in places, in a good way. What took them out of contention were these little tidbits: "Tux Family is not a test platform", and, due to lack of resources, they will not accept "student projects that will die in a month or two". Fair enough, but that might just describe these efforts pretty well, so let's go our own ways before we even start :-)

Which brings us to Launchpad. Those of you paying attention already knew my choice since they were on the bottom of the list, so there you have it.... Oh, ok I'll explain why too.

Launchpad looks too complicated, frankly, a la Trac. On the other hand, the emphasis seems to be in all the right places. They want to promote cross project communication and support, and my projects are going to lean heavily on the efforts of other, already established software. Their philosophy seems to fit my objectives pretty well. I'm especially intrigued by the mention of translation services. Multilanguage support was high on my list of things I wanted, and I'm hopelessly unilingual, so maybe I can get some real wins there. Just wandering through the tour made me feel like it's got the right approach. I'm already an Ubuntero, and Launchpad has always performed well for me. As I've started to work my way through the Launchpad documentation, I'm finding it helpful, because it talks a lot about 'why' you do things rather than just 'how'. That's valuable to me, since I had zero exposure to software project management prior to this. Launchpad itself seems to have a lot of Python bits to it, so a Python project isn't going to be out of place. Sign up was simple and straightforward, with no moderation steps.

It's not perfect - they use bazaar rather than git, and don't provide any wiki services, or even a home page past the generic one, but I think I can deal with that when I get there.

Posts like this help make Launchpad seem to be the best choice, so let's try 'er out!

(Note to self - enough with the brackets already!)

Monday, January 19, 2009

The New Blog Title

For those of you that are curious - it revolves around my New Year's Resolutions for 2009:

Be
  • constructive
  • instructive
  • inclusive
  • communicative
  • responsive
  • empathetic
  • conciliatory
  • receptive
Entropy is the enemy !!!

Of course, after I thought it up and implemented it, I found out it was hardly original... Ah well, not much on the internet is. Doesn't look like we're doing the same material :-) Hopefully this blog is more to your taste than that one.

Of course talk is cheap, and it's already a couple of weeks into the new year - have I actually done anything? Well, I started keeping up this blog again, and I signed the Ubuntu Code of Conduct. Let's see if I can keep this going!

Saturday, January 17, 2009

Choosing a Project Repository

From that mythical reader out there who is actually following all my posts up to now I can hear the question - 'Are you nuts? You claim you are going to write a program in Python, and before you learn any you're worrying about where to host the project? Delusions of grandeur or what!'

Before you completely go over to my wife's side (who is quite sure I am nuts) let me explain.

The last time I tried anything like this at all, it had two problems. Since it was a sideline hobby, it was done in fits and starts, when I had the time. As a result I was always trying to figure out where I left off, what to do next, what was already done, etc etc etc. It wasted a lot of time I didn't have just trying to keep it all straight. Once it was finally completed (well, once I quit adding features anyway) it ran fine for years, but when I wanted out from under maintaining it, there was no one around who could pick up the pieces from my scattered pile of stuff and keep it running. The new maintainer ended up re-writing it all his way and throwing out all my work. I don't begrudge that - I wasn't doing it anymore and he can't support something he doesn't grok, but it hurt none the less to have all that effort pitched.

Since those who can't understand the past are condemned to repeat it, what can I learn from this?

One - you don't write software, you build solutions. Yeah, I know, sounds like an Apple ad or something, but here's the theory. Start with a clear vision, create a solid list of what the thing is supposed to do, then write the code to do it. Document, then execute, not the other way around. I can think of lots of projects I wish did things this way.... I had a friend in high school who was fond of saying 'Plan the work, then work the plan'. Maybe I've finally started to listen :-)

Two - This project has zero resources (see rule one) and like the last one is going to be done piecemeal at whatever computer I happen to be sitting in front of when I get 10 minutes to think about it. An internet based, distributed setup is the easiest way to keep this on track and organized. I don't have time to worry about infrastructure any more than required, I want to work on the product. There's no privacy concerns here - quite the opposite, so keeping it local doesn't make any sense.

All right, now that I've blathered on this much, it's getting to be too long a post to discuss who I actually picked. I'll do that as a separate entry.

Thursday, January 15, 2009

Running In All Directions At Once

So, since I claimed I was interested in other things lately, what the heck am I going to put on this blog? Other stuff of course :-)

Here's one topic I'll be returning to again and again. I might finally stop saying "I'm not a programmer". I can't go back to school (unless somebody's willing to foot the bill) but I can try and figure this out on my own. Lots of people do it every day. I don't expect to get really good at it, but hey - who knows?

I'm starting for that old open-source traditional reason - I've got an itch to scratch.

I do know a smidgen of bash scripting and Perl. I even did a complete website from scratch using Perl and MySQL with a bit of help from a friend. The problem I find with Perl is the same thing pointed out in one of the Perl books I read: if you don't do Perl every day and really work at it, it doesn't seem to 'stick' with you that well. When I go back and look at old work, I have to really stop and study it to figure out what's going on. To boot, the whole 'object oriented programming' paradigm didn't make all that much sense to me in Perl either.

My itch is actually to contribute some code to an existing project. It does a lot of what I want, but not all of it. I've even posted my ideas to their bug tracker, but it doesn't seem like anybody else is going to do the work for me, so maybe it's time I tried to do it myself. That project is written in Python, and Python should be a good thing to learn for lots of reasons.

Python has the reputation of being easy to learn, and very versatile. Lots of modules and add-ons are already out there, and it runs on Linux, Mac and Windows. I've seen lots of interesting end user programs written in Python, and I might be able to get something that looks good as well as works well.

So, I need to learn Python. And revision control. And project management. And lots of other things too :-)

I'll post tidbits as I go along here, and I'll start tagging these blog entries better so my mythical audience can skip the parts they don't care about.

Tuesday, January 13, 2009

Creative Commons and the General Public

I was struck at our users group meeting the other day just how 'foreign' the concept of creative commons is to many many people.

I was demonstrating some social networking stuff - Facebook and this 'flickr feed'.

Some of the audience members just couldn't get over the fact that people let their pictures show up in public like that. What if someone wants to steal them, or sell them for money or....

To me it's kind of obvious - it's a picture of my kids, or a piece of landscape. I don't own the rights to that piece of landscape, and if somebody else can profit on it, why not? I couldn't...

I was discussing real estate with an older cousin of mine once, and something he said has stuck with me for a long time now: "You can't own a piece of property, you just look after it for a while. It'll be here one way or another long after you and I are gone."

That's what CC is all about now, isn't it....

Monday, January 12, 2009

Installing Twirl on Linux

I'd read somewhere installing Adobe Air and Twirl on linux was a real pain. I looked into it about a month ago, but the instructions at Adobe were not trivial. It was still marked as 'Beta' at that point, so it seemed reasonable, but I didn't want to spend the time at it then.

I've tried gtwitter and gwibber since then, but gtwitter is only twitter, and gwibber has an (upstream) webkit bug on Hardy that currently makes it unusable. gwibber does seem like the way to go on linux going forward, but I'll wait for an update there before I mess with it again.

Which brings us back to Twirl. Installing it now is pretty painless. Unfortunately it's not via a .deb, so it sits outside the apt update system, but it does have its own updating system, so at least it's not completely orphaned. Personally, I think any internet facing application has to have an active update system before I'm interested - the biggest security threats all seem to come from that quarter currently.

Just visit the Twirl site. The big 'install now' button doesn't work, but just below it is the 'manual install' link. Download the Air installer, chmod +x it, and run it. It prompts for admin rights properly and installs painlessly. Then visit the Twirl site again, and at the top of the page click on 'linux users use this installer'. It installs like a regular Adobe Air program, except it prompts to place the program in /opt, which I thought was a nice touch.

Incidentally, if you install the regular Twirl app instead of the linux version, it still works, but it doesn't dock in the panel correctly. (Ask me how I know :-)

So, all in all, it's pretty cool. It's not the click-n-drool operation it is in XP or OSX, but it's not kernel hacking either, and the program itself runs and looks exactly the same as it does elsewhere. I bet the final little niggles will get worked out eventually too. A big thumbs up to Adobe for supporting linux users as first class citizens these days - It's their support that makes it easy for me to feel comfortable recommending Air programs and avoiding things like Silverlight like the plague.

Saturday, January 10, 2009

Home Server - the recap to date

So where am I at right now?

It turns out the Xbox Media Centre is a hit. I like it, it works simply, and has been pretty reliable.

All I need for it is a file share. That same file share works well as a backup drop spot. Right now I've got an old Celeron box running Ubuntu desktop with a couple of bigger drives and LVM to join them together. Simple and effective.

I've never been happy with using a file share on the same box as a firewall, and the sucky firewall built into my modem makes me want to keep a linux box doing that job along with DHCP and DNS for the local network. I'm using SME Server for that, although I might revert to IPCop or something else. I'm missing a WEP network for the kids' Nintendo DSes, and someone gave me a PCI wireless card that might just do the job without me adding more equipment to the house.

Central authentication is still on the wishlist I suppose, but doesn't seem to be too important right now.

Running my own mailserver just doesn't seem worth the effort. As long as Gmail's privacy issues aren't pushing my buttons too badly, it seems like the right solution.

So, all in all, it's working ok right now.

What's caught my attention lately is Python. I'd like to try and learn at least a little bit... So, expect a couple posts on that topic and others (like revision control) in future.

Thursday, January 8, 2009

Unending

Yep - another gratuitous Stargate reference.

And a change of direction.

If you haven't noticed - it's been pretty quiet around here lately. Truth is, I've been very busy, and when I'm not busy, I've been interested in other things.

Oddly enough, I've started using identi.ca and twitter, and that's what got me interested in writing blog entries again. And while I've been kicking out some entries for a user group blog I write on, I've been wanting to express things that just don't belong there, or on my family website, and that are too long for twitter.

It's amusing, but my nickname 'furicle' actually is a pretty unique Google search, with identi.ca and twitter entries one and two, and this blog somewhere down the list a bit. The rest of the hits are all me too. So I'm not going to abandon this blog, just change it around a bit.

I know all the advice says if I want a successful blog to pick a narrow topic and beat it to death, but audience share isn't my definition of success anyway. This is more therapeutic than serving some desire for fame - an audience isn't really necessary. If I provide useful info from time to time that a google search picks up for future readers, that's great too.

The four rules still apply, just expect a lot more wandering around various FLOSS topics, not just home server stuff. And you still can't expect a schedule :-)

Up in a day or two is a recap of where I ended up with my home server to date.