Tuesday, July 24, 2007

xmingw overlay and "competing" software

Via the Freshmeat feed, I just noticed this thing called the "MinGW cross compiling environment" that looks like it does more or less the same thing as my xmingw overlay, except that it's not distro-specific. (It's not my fault that the portage concept is awesome.) I took one look at the Mercurial repository and found that it consists of exactly one shell script. Wow. It's a big one, too: it currently supports 26 packages and is roughly 2000 lines long. My overlay, on the other hand, supports over 200 packages (with at least one shell script per package), and I probably don't want to know how many lines of code that is. I'm surprised that the package count is over 200. Well, sort of: its Bazaar repository is currently on revision 466, and apparently I've had this branch for a little over a year. That latter part's news to me.

In related news, as I type, I'm building Firefox via xmingw. The Windows XP machine that I use is going loco, and many programs (including Firefox) crash when I try to use them. Strangely enough, my self-compiled version of Pidgin runs just fine. So, in that spirit, I'm trying to see whether a self-compiled version of Firefox will do the trick as well.

Edit: I was going to mention, with regard to the Mozilla build process (including NSPR and Firefox), that it has an upside-down notion of what "host" and "target" mean, in comparison to 99% of the other autoconf-based projects out there. Usually, "build" is the platform on which the compilation takes place, "host" is the platform that one is compiling for, and "target" only matters when the thing being built is itself a compiler. Mozilla instead uses "host" for the build machine and "target" for the platform the binaries will run on. Yes, it does make sense the way that Mozilla is doing it, but it's the opposite of everyone else, which makes it annoying to cross-compile. Well, that reason, and the fact that it is a horribly monolithic build process (see OpenOffice.org, or the imake-powered X build process).
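For concreteness, here's roughly how the configure invocations differ when cross-compiling (the triplets are illustrative, not the exact ones my overlay uses):

```shell
# Typical autoconf project: --build is the machine you compile on,
# --host is the machine the resulting binaries will run on.
./configure --build=i686-pc-linux-gnu --host=i586-mingw32msvc

# Mozilla-style: --target names the platform the binaries will run on,
# and "host" refers to the build machine.
./configure --target=i586-mingw32msvc
```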

Wednesday, July 18, 2007

Re: GNOME Online Desktop

I just finished looking at the slides from the GUADEC presentation on the GNOME Online Desktop and the associated screencasts. The concept of installing software from a browser like that (given that I have some idea of how it works) is ridiculously awesome. More importantly, I would like to see how they design the following:

an HTTP library that shares cache and cookies with the browser, and supports asynchronous operation with the GLib main loop out of the box

OK, so Gtk+ and friends currently have libsoup, which fulfills the latter requirement. The former requirement seems much more complex to me. First, do you require compatibility with multiple browsers (complexity growing as O(N*M), where M is the number of browsers to support), and if so, what do you do with browsers that seemingly don't provide an API? (I bring this up because I have no idea whether Opera provides one.) Now, imagine that they chose only the Gecko/XULRunner libraries to be compatible with. That API is always changing, so does this mean that the resulting library will also be unstable?
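To make the "asynchronous operation with the main loop" requirement concrete, here's a minimal Python sketch of the callback pattern involved. All the names here are mine, and a plain queue stands in for the GLib main loop's event dispatch — this is not the libsoup API, just the shape of it:

```python
import queue
import threading

# Stand-in for the main loop's event source: worker threads push
# (callback, result) pairs here, and the "loop" dispatches them.
events = queue.Queue()

def queue_request(url, callback):
    """Start a request without blocking; deliver the result to the
    main loop later, the way libsoup's queued messages work."""
    def worker():
        body = "<html>response for %s</html>" % url  # fake fetch
        events.put((callback, body))                 # hand off to the loop
    threading.Thread(target=worker).start()

results = []
queue_request("http://example.org/", results.append)

# One main-loop iteration: pull a dispatched event, run its callback.
callback, body = events.get(timeout=5)
callback(body)
print(results[0])
```

The point is that the requesting code never blocks: it registers a callback and returns to the loop, which is exactly what a browser-integrated HTTP library would need to guarantee.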

As an aside, I wonder whether this will also be available for Windows. Of course, this assumes that D-Bus will be available for Windows at some point in the future.

Thursday, July 12, 2007

Website finally updated, hooray!

After years of malnourishment and two weeks of development, my little old static website (now using a smaller domain name!) is live. The old website, like the new one, was created via a templating system. However, the old site's templating system was homegrown using PHP4 classes (disgusting, I know...but that's all I could use at the time). Even more disgusting, the system was just HTML comment directives plus a regular-expression parser. I was so young and naïve, and I hadn't taken a compilers class yet. So this time around, I said "screw it" and went with a) my favorite language, Python, and b) the templating software that I had been using for my Trac-AtomPub plugin (yes, not -atompp anymore, per the lengthy discussion on the atom-protocol mailing list).
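For the curious, the old approach looked something like this toy Python reconstruction — the directive syntax and names are hypothetical (the real thing was PHP4), but this is the "regex as parser" idea in a nutshell:

```python
import re

# Hypothetical directive syntax: <!-- var:name --> gets substituted
# from a context dict. No real parsing involved, just a regex.
DIRECTIVE = re.compile(r"<!--\s*var:(\w+)\s*-->")

def render(template, context):
    """Replace each directive with its value; unknown names vanish."""
    return DIRECTIVE.sub(lambda m: context.get(m.group(1), ""), template)

print(render("<title><!-- var:title --></title>", {"title": "Home"}))
# <title>Home</title>
```

It works until the first nested structure, conditional, or malformed comment — which is why a real templating engine was the right call.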

The Journey

As the new website was a chance to experiment with new things, I decided to take the plunge and use HTML5 to mark up my website. And as with any sort of experimental technology, there were many problems.

First, I tried to use the genshihtml5 plugin, but strangely enough the code was a bit buggy (e.g., it was missing an import), and I could never figure out how to get it to output proper HTML5: removing end tags from elements that don't need them, e.g. <link/>, while retaining them for elements that require one, e.g. <script/>.
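The serialization rule I was after is simple enough to sketch in Python. The void-element list below is abbreviated from the HTML5 spec, and the function is illustrative — it's not genshihtml5's API:

```python
# HTML5 "void" elements take no end tag at all; everything else needs
# an explicit one, even when empty (e.g. <script src=...></script>).
VOID_ELEMENTS = {"area", "base", "br", "col", "embed", "hr", "img",
                 "input", "link", "meta", "param", "source"}

def serialize(tag, attrs="", content=""):
    """Emit a start tag, and an end tag only for non-void elements."""
    if tag in VOID_ELEMENTS:
        return "<%s%s>" % (tag, attrs)
    return "<%s%s>%s</%s>" % (tag, attrs, content, tag)

print(serialize("link", ' rel="stylesheet" href="s.css"'))
print(serialize("script", ' src="a.js"'))
```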

Next, I tried to use html5lib's Genshi-Stream-based tree walker. For some reason, it simply would not output any data. I don't remember all of the details, but I do remember inserting a lot of print statements in html5lib to see if I could find the bad piece.

Finally, I gave up and made Genshi just output XHTML plus the extra HTML5 tags. I figured that all of the debugging trouble simply wasn't worth it for the timeframe I had envisioned.

(As an aside, I do plan on submitting the patches that I've made as a result of this...exercise (for lack of a better word) so that they can be integrated in future releases of the respective software.)

Actual usage of new HTML5 tags was...interesting to debug. If you're writing HTML5 and not XHTML5, and you're viewing the page in Firefox, this is what the DOM tree looks like (according to Firebug):

<figure _moz-userdefined="" />
<img src="..." alt="..." />

For comparison, this is what it looks like when rendered as XHTML5:

<img src="..." alt="..." />

That completely broke my CSS files, as I was using child/descendant rules utilizing the new tags. This sort of thing is why I love using Firebug.
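A rough Python model of why that happens: a text/html parser that predates HTML5 doesn't know <figure>, so it treats it as an empty element and never nests children under it. The tree builder below is a caricature of that behavior, not Firefox's actual parser:

```python
# Elements this hypothetical old parser knows can contain children.
KNOWN_CONTAINERS = {"html", "body", "div", "p"}

def build_tree(tag_names):
    """Build a crude DOM from a flat list of start tags: unknown
    elements are closed immediately, so nothing nests under them."""
    root, stack = [], []
    for name in tag_names:
        node = {"name": name, "children": []}
        (stack[-1]["children"] if stack else root).append(node)
        if name in KNOWN_CONTAINERS:
            stack.append(node)  # only known containers stay open
    return root

# <figure><img></figure> in the source...
tree = build_tree(["figure", "img"])
print([n["name"] for n in tree])  # ...ends up as two siblings
```

With `figure` and `img` as siblings, a child selector like `figure > img` matches nothing, which is exactly how the CSS broke.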


I've really only thoroughly tested this website on Firefox 2.x (Windows & Linux). I just checked it on Opera 9.20 (Linux) and a relatively old development version of Gtk-Webcore (AKA WebKit), and the only bug that I see (in both of them, strangely) is some sort of CSS error in calculating the spacing for the <dd/> box for "Special Skills" in my CV.


Future plans include packing both the CSS and the JavaScript, via csstidy and packer, respectively. Right now there are several bugs with regard to integrating the two applications with my build system. csstidy interprets white-space values incorrectly, particularly the vendor-specific values. I'm currently trying to integrate packer via this nifty little Python module that uses ctypes to create an interface with Mozilla's SpiderMonkey JavaScript engine. Unfortunately, there's a recursive reference somewhere in base2, and the module is choking on it, so I have to figure out how to resolve that (if possible). Another future plan involves making the site fully dynamic, in that the page layout stays the same while background XMLHttpRequests retrieve the page contents when internal links are clicked. Obviously, the current behavior would be retained as a fallback.
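In the meantime, a naive but safe CSS-packing step is easy to sketch in Python. The point is that declaration values like white-space: pre-wrap must come through intact — this is a stand-in for the build step, not csstidy itself:

```python
import re

def pack_css(css):
    """Minify CSS conservatively: drop comments, collapse whitespace,
    and trim around punctuation, leaving property values untouched."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # strip comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    return re.sub(r"\s*([{}:;])\s*", r"\1", css).strip()

print(pack_css("dd {\n  white-space: pre-wrap; /* keep wrapping */\n}"))
# dd{white-space:pre-wrap;}
```

A real packer would also merge rules and shorten colors, but even this much shaves a noticeable amount off a hand-written stylesheet without risking the value-mangling bug described above.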

Anyhow, there are more details about how I made my website on the colophon. Bug reports, suggestions and feature requests are welcome!