Monday, April 30, 2007

Report: Linuxfest Northwest, Saturday

After hearing about it from a friend (specifically, that I could get there for free courtesy of Pogo Linux), I decided to go to an actual Linux...fest... and take in the nerd atmosphere without getting some sort of otaku or gamer disease. Unfortunately, I could only go on Saturday. I would have loved to hear Brad Fitzpatrick talk about how LiveJournal scales their databases, among other things.

The bus left promptly at 8:06AM. The movie they were showing on those little screens scattered throughout the bus was the X-Files movie, which I have no intention of watching, so I caught up on some reading for one of my classes. One amusing thing that I noticed on the way up was that there was a "Lychee Buffet" restaurant at the freeway exit for the college. If you think about it, it sounds rather disturbing.

I had roughly 15 minutes between the time that the bus arrived and the first presentations. In that time, I got a bunch of CDs from the Ubuntu and Oracle tables (it was like the Oracle table was having a fire sale - I got an Oracle DVD and an Oracle Linux DVD), and some stickers from the FSF table, including the "Bad Vista" one.

The first talk that I heard was on copyright and open source, by Karl Fogel (of CVS/SVN fame). It was really interesting, given my affinity for history (especially regarding science and technology). He talked about the parallels between the era of the printing press and the present day. I didn't realize that copyright (or proto-copyright) was created as a censorship/printing restriction tool by the official guild of printers.

The second talk I attended was on strong authentication, in particular multi-factor authentication. It was very informative, especially in regards to how those one-time password keyfobs work.
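Those keyfobs are typically HOTP devices in the sense of RFC 4226: an HMAC-SHA1 over an incrementing counter, dynamically truncated down to a few digits. A minimal Python sketch of that algorithm (my own illustration, not anything from the talk):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte select a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The fob and the server both track the counter and run the same computation, which is why the displayed code changes every press but the server can still verify it.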

Presentation number three was about practical honeypots. I wasn't really impressed with it overall. It was rather high-level, and the presenter admitted that he had only started working on it that morning. A lot of it seemed like common sense, like being preemptive, only concentrating on exploits that are relevant to your particular systems, etc.

The last talk I observed was on scaling web services, by a lead developer from Real Networks. He reminded me of Penn from Penn & Teller. It was a very engaging talk, and it gave me a new perspective on scalability, that is, it's essentially an organizational problem, as opposed to a technological problem.

A closing thought: I would have loved to have gotten one of those stuffed SuSE lizards...it would have fit in well with the Tux I got in Canada several years ago.

Tuesday, April 17, 2007

Re: Genshi Filters for Venus; Genshi + Trac-AtomPP

This news is excellent. One of my side projects (although, it was pretty low on my list) was to figure out how to use Genshi templates in Venus. I started out by copying the Django template code/unit tests and adapted them for Genshi. However, I got stuck getting some of the unit tests to pass (_item_title and _config_context). Perhaps sometime this weekend I can see how this particular implementation works.

Speaking of Genshi, I just noticed that they had released version 0.4. Hopefully, this will help me resolve the last APE error in my Trac-AtomPP plugin — adding app:edited elements to relevant entries and sorting by that property.

While I'm thinking about it (this really seems to be turning into a stream of consciousness post), I'm not exactly sure how to page the collection efficiently, considering that Trac creates the wiki page list via a generator. Right now I'm just putting everything into one feed, but obviously that doesn't scale very well.
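The naive way to page a generator is to slice it lazily, which also shows why it doesn't scale: everything before the requested page still gets produced and thrown away. A sketch (the `page` helper here is hypothetical, not Trac's API):

```python
from itertools import islice

def page(items, page_num, per_page=20):
    # islice skips the earlier items lazily, but the generator still has to
    # produce them, so the cost of rendering page N grows linearly with N.
    start = (page_num - 1) * per_page
    return list(islice(items, start, start + per_page))
```

Getting real efficiency would mean pushing the offset/limit down into the query that feeds the generator instead of slicing its output.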

Sunday, April 15, 2007

HOWTO run the APE (or any jruby script) via Apache CGI

In my previous post, I was running the APE via the command line because I couldn't figure out how I could run it as a CGI in Apache. I don't really want to run Tomcat just for this, and I've had bad experiences with Tomcat administration both for school and for work (which I guess is basically the same thing at this point). So after a bout of searching the Internets, I found a post on JRuby on Rails which helped me greatly in configuring it. So, without further ado, here's the relevant Apache configuration snippet:


SetEnv JRUBY_HOME /usr/share/jruby[1]
SetEnv JAVA_HOME /usr/lib/jvm/sun-jdk-1.5[1]
# Jing dependencies
SetEnv CLASSPATH ...[2]
AddHandler cgi-script .rb
Options +ExecCGI

Notes:

  1. These values are Gentoo-specific. For JAVA_HOME, I used Java 5 as a precaution, because I wouldn't be surprised if it didn't work in version 1.4.x.
  2. On Gentoo, they put all of the third-party jars in separate directories so that their java-config utility can manage them all separately for the system and the users. So, the value I had here (which I didn't want to reproduce here because it's way too long) was the result of java-config -d -p jing. You probably don't have to put this line in if jruby can find jing by itself.

For the APE, I had to add #!/bin/bash /usr/bin/jruby to the top of it. For some reason, Apache's CGI handler complains if you leave out the /bin/bash part.

Saturday, April 14, 2007

trac-atompp progress; APE questions

I'm working on (among other things) finishing up wiki support in my trac-atompp plugin. I'm nearly done, I think. In order to make sure it's "valid", I'm using Tim Bray's APE (albeit from CVS). However, I've got a few questions about some of the errors:

  1. ! 53 of 53 entries in Page 1 of Entry collection lack app:date elements.

From the source, it looks like it should actually say app:edited. But, why is it giving an error? According to draft 14, section 10.2, Atom Entry elements in Collection documents SHOULD contain one "app:edited" element, and MUST NOT contain more than one. Perhaps the messages should conform to RFC 2119 instead of lumping in all of the SHOULDs with the MUSTs, or something.

  2. ? Can't update new entry with PUT: No Content [Dialog]
  3. ! Couldn't delete the entry that was posted: No Content [Dialog]

I don't really understand why HTTP status code 204 (No Content) isn't allowed for either PUT or DELETE, seeing as RFC 2616 says that it is a perfectly valid response for both actions.

Thursday, April 12, 2007

HOWTO restrict ssh access by IP and user

There's a way to restrict access to a user account or set of user accounts via PAM (and by extension, SSH)—the obviously named pam_access module. It's available on Gentoo Linux in sys-libs/pam, and on Debian Linux (and I assume the derivatives) in libpam-modules.

In order to enable this module for SSH, you have to edit the SSH's PAM file (Gentoo: /etc/pam.d/sshd; Debian: /etc/pam.d/ssh) to enable the access module: account required pam_access.so

There's some pretty good documentation in /etc/security/access.conf (at least, in the default distribution of it) on how to configure the file, but one thing that it doesn't say explicitly is that you can use IP address blocks in CIDR notation to denote access privileges. For instance, if I wanted to limit bob to the local network (192.168.0.*) and the VPN (172.16.*), the configuration line would be:

-:bob:ALL EXCEPT 192.168.0.0/24 172.16.0.0/16

Wednesday, April 11, 2007

Re: Protecting a JavaScript Service

In How to Protect a JSON or Javascript Service, Joe Walker looks at a few solutions such as:

  1. Use a Secret in the Request
  2. Force pre-eval() Processing
  3. Force POST requests

The last time that I worked on a JSON-based web application, I did number 1, sort of. I basically implemented a simplified version of HTTP digest authentication in order to send a username and password to the server. In order to accomplish this, I used a nonce plus a JavaScript implementation of the SHA-1 hash function.
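The scheme amounted to something like the following sketch (Python here for brevity, and the names are illustrative rather than the original code): the server hands out a nonce, and the client proves knowledge of the password by hashing a stored digest together with that nonce, so the password itself never crosses the wire.

```python
import hashlib
import secrets

def make_nonce() -> str:
    # Server side: a fresh random challenge per login attempt
    return secrets.token_hex(16)

def client_response(username: str, password: str, nonce: str) -> str:
    # Client side (originally done with a JavaScript SHA-1 implementation):
    # hash the credentials, then hash that digest with the nonce
    ha1 = hashlib.sha1(f"{username}:{password}".encode()).hexdigest()
    return hashlib.sha1(f"{ha1}:{nonce}".encode()).hexdigest()

def server_check(stored_ha1: str, nonce: str, response: str) -> bool:
    # Server side: it stores only ha1, never the plaintext password
    expected = hashlib.sha1(f"{stored_ha1}:{nonce}".encode()).hexdigest()
    return secrets.compare_digest(expected, response)
```

Because the nonce changes every time, a captured response can't simply be replayed, which is the main thing this buys you over sending the password in the clear.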

If I were to reimplement the user authentication portion today, I would probably use this "clipperz" library that I also found on Ajaxian. I'm amazed that someone has implemented AES in JavaScript. I would think that it would be difficult, although I haven't read the specification for it. Maybe one of these days I'll implement the Diffie-Hellman key exchange, if I get bored enough or I need it for something.
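Diffie-Hellman itself is mostly just modular exponentiation, so the core of it is short. A toy Python sketch with a deliberately demo-sized prime (a real deployment would use a 2048-bit MODP group from RFC 3526, or elliptic curves):

```python
import secrets

# Demo parameters only: 2**127 - 1 is a Mersenne prime, far too small for
# real security but fine for illustrating the math.
p = 2 ** 127 - 1
g = 3

def dh_keypair():
    # Private exponent stays secret; only g^priv mod p is published
    priv = secrets.randbelow(p - 2) + 2
    return priv, pow(g, priv, p)

# Alice and Bob each generate a keypair and exchange only public values...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# ...yet both arrive at the same shared secret.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
```

An eavesdropper sees p, g, and the two public values, but recovering either private exponent is the discrete logarithm problem, which is what makes the exchange work.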