Archive for the ‘Computers’ Category

WebGL

June 14, 2011

I’ve been learning about WebGL recently, not that I have any particular reason to use it. My first experiments, selected mostly because they look neat, are here:

  • A Hopf fibration viewer. The Hopf fibration is interesting and important mathematically—it’s part of the reason homotopy theory turns out to be so much more complicated than homology, for example—but its real importance to me is that it makes for great pretty pictures.
  • A Mandelbrot set “explorer.” That’s the first thing everyone does when they learn about shaders, right?
  • A Julia set explorer. It turned out far trippier than I had hoped for.

I’m comfortably certain these are not models of good WebGL practice. (Or good html/javascript practice, for that matter.)

So what have I learned? For starters, half the people I’ve tried to show these things to can’t run them, for whatever old-browser and old-graphics-card/driver reasons. How long will it be before you can reasonably assume any random user is likely to be able to use this stuff? I suppose it does make sense to learn it now so as to be ready in five years when it’s generally supported. Or maybe the primary audience is gamers, happy to have an excuse to buy a new graphics monstrocard every few months.

As for the thing itself, it’s interesting comparing it to what little I remember from the days when I knew OpenGL. From a high level it’s pretty much what you’d expect from a translation of OpenGL, or a stripped-down version of it, into javascript, the most obvious difference being that you have to provide your own shaders. I’m rather glad to have a reason to learn about shaders, really; they’re new since my day.

I haven’t figured out all the nuances of GLSL. As an example of the sort of thing I’ve run into, the Mandelbrot fragment shader has a big loop to count iterations. for-loops in GLSL must be of a form like

    for(int i=0; i<CONST; i++) {

where CONST is some actual constant—I assume, possibly wrongly, that that’s so loops can be implemented by unrolling. That I learned quickly enough. Where I ran into problems was figuring out what the CONST could be. Some machines, at least older ones, seem to have a cutoff of 255, and behave oddly (it doesn’t look like a simple mod, but I haven’t tried to figure it out) if the bound exceeds that. The GLSL spec (which appears to be somewhat out of sync with what WebGL as implemented uses; am I looking in the wrong place, or otherwise missing something?) wasn’t much help there.

Back to WebGL vs. OpenGL in general, the other immediate difference is that there’s no more glBegin/glEnd: you have to do everything with buffers. That seems to add to the boilerplate. And of course a lot of the familiar OpenGL and glu methods for things like matrix handling are missing, so you have to provide them yourself. Or find a library that does them all. I don’t think I particularly like either of the ones I’ve seen, but haven’t really thought about them much yet. I can see performance being an issue with getting libraries right.

And finally it’s a bit annoying that WebGL only knows from floats, not doubles. That rather surprises me, but I don’t know enough about this stuff to rant without making a fool of myself.

As long as I can stick to my machine, a relatively beefy Macbook Pro, I’m impressed by how well this stuff works, despite the whingeing above. I haven’t done any real stress tests, but what I have done seems to work well and quickly. As expected, both Chrome and Firefox support it. Interestingly, the Firefox implementation seems to be noticeably more performant than Chrome’s. During animations Chrome seems to seize up (garbage collecting?) every couple of seconds. Firefox is nice and smooth.


Chrome, continued

September 6, 2008

I just tried to edit a Google Groups page using Google Chrome, and I see:

Page editing not supported in your web browser. Download a new copy of Firefox or Internet Explorer to edit pages.

LOLZ!

Apart from that, I continue to be pretty impressed with Chrome. A few more observations:

  • I’ve seen a few crashes, and Google’s claims about sandboxing the tabs and apps seem mostly true: a crash in one tab or app doesn’t usually bring down the whole browser.
  • I like the way that there’s no status bar taking up space all the time; addresses appear over the lower left corner of the window when you mouse over links, rather than in a dedicated space.
  • In-page search is incremental (as with Firefox), but has a few nice features: the browser tells you how many matches it found, and highlights both a “current” match and all the others, in different colors. It also shows you in the scrollbar area where the matches are, although it doesn’t let you click the indicators to go there immediately. I also like the location (upper-right) of the search window, I suppose because both my attention and the cursor are more often near the top than the bottom of the page.
  • Chrome DOES work with Java, but requires JRE 6 update 10, now in beta. Perhaps they needed the experimental “serialize user’s soul” feature…
  • I wish the javascript debugger were friendlier—one thing that has annoyed me repeatedly is not being able to click on an error to go to the source; I’m told I should open the file in the inspector, but I seem not to be able to do that. The DOM/css explorer, though, is nice. Apparently webkit has a fuller version, but AFAIK it doesn’t work with Chrome (only Safari).
  • For some reason Chrome installs itself in your local app settings directory, not in \program files. I suppose there must be some reason for that…
  • The Application Shortcuts are surprisingly nifty for such a simple feature: all they really do is remove the browser ui. I have to assume that this is a harbinger of things to come. Chrome will become Google’s general application framework, a central part of their plan for world domination. Googlezon approacheth!

Chrome

September 2, 2008

As the entire nerd world knows, today Google released a beta of its new browser, Chrome. I’m not really sure why they’re bothering; like everyone else, I’m vaguely assuming it’s something to do with competing with Microsoft IE8 and its porn (read: Google) blocker. Like others, I suspect it’s more likely to take market share from the more virtuous Firefox. The rest of the world can speculate more productively than I about what’s going on between Google and Mozilla.

I’ve been using Chrome for entire minutes now, and so far it seems pretty nice. It’s not enormously different from the rest of the world’s browsers, but then, other than generally behaving better, what is there to do? And behave better it claims to do: read the comic book(!) to find out how (summary: multi-processing! one process per tab, basically).

The browser’s look and feel is Google-like, simple and streamlined—despite the name, it’s really not all that shiny, and I mean that in a good way. There’s no distracting flash cluttering up the screen; pretty much everything you see is functional.

Like IE7, Chrome has no menu bar; unlike IE7, it seems to have no way to get one. Which is probably just as well—if the browser is well-designed for menulessness you shouldn’t need them, and they can only clutter things up. The tabs are outside the rest of the app, giving a feel of “application inside tab” rather than “tab inside application,” which frankly doesn’t matter much to You the User, but which does presumably reflect the architecture of the browser. Having tabs on top does give the app a slightly different feel from most apps, especially with Windows Vista Aero.

The address bar does more or less what Firefox’s does, as far as I can tell. One nice feature is that the hostname is bolded, so it stands out from the rest of the URL, which for long crufty URLs is nice. There’s no search box, I suppose because the address bar is supposed to do all the searching you need. I’m not quite sure that’s true, as the dynamic autocompletion hints aren’t what they are in a real searchbox; they’re mixed with address history, I suppose.

To its credit it does seem very fast, both opening from scratch and opening new tabs. That’s one of my very few major beefs with Firefox; it takes so darned long to start the first time. IE starts relatively quickly—I assume that’s because most of its entrails are wrapped around the OS, and hence already loaded—but opens tabs infuriatingly slowly. Firefox can also hang waiting for applications, which Chrome claims not to do; but I can’t really judge that yet.

The controls and options and settings are all simple, which is excellent. But of course the downside is that there are settings that just aren’t there. They’re mostly minor things—new tabs open next to the tab you’re in, while I’d like them to open at the far right; I’d like to make the controls a little smaller to save screen space (both settable in FF)—but they’re on the “need more features” side of the too-simple/too-complex spectrum. IE, being a Microsoft product, pegs the too-complex needle, and not for any discernible reason; MS’s Internet settings dialog is an Abomination Before the Lord. Firefox does an excellent job of presenting options, and indeed Google’s options box seems to be copying Firefox’s philosophy, just without so much there yet. I do see some tiny niggly UI flaws—there’s no ellipsis after “Options” in the tools menu, for example. I’m a stickler for other people getting fiddly details right.

I think what I miss most from Firefox is Adblock Plus (you can argue with me about the propriety of adblockers some other time; the quick version of my position is that I don’t mind ads per se—in fact I rather like good ones—but I can’t abide all the animation and seizure-inducing flashing, and most repugnant of all the noise, of so many ads). Chrome will of course accumulate plugins, but that might be one Google doesn’t really want to accumulate.

Some other problems. Chrome does not seem to run Java(!) I assume that will be fixed during the beta (but when you ASSUME you make an ASS of U and ME…). There’s no Print Preview (haven’t tried printing yet; that’s something browsers generally don’t do well). It doesn’t render unicode characters outside the Basic Multilingual Plane properly (see this for a simple example). (I filed a bug about that one, using the handy built-in bug reporter.) I’m sure there are more that I’ll run into. Or that I’ve already forgotten about.

Hm, what else? There’s a gdb-like javascript debugger, not as nice as Firebug or Dragonfly (unless I’m missing something), but for what it does my initial impression is that it looks pretty solid. The individual-tab task manager and memory stats are neat, but I don’t know if I really have much use for them.

Well, that’s more than enough for now. My plan is to use both Chrome and my beloved Firefox for a while, and see if I get sick of Chrome or decide I can’t live without it.

A Javascript bug in NBC’s Olympics Website

August 17, 2008

Hey, I found a bug! The schedules in NBC’s Olympics website are supposed to be displayable in either Beijing time or your local time. This works only in IE7—so Firefox-using me ran into it the night I was trying to find out when Michael Phelps would win his last medal.

Here’s the problem:


clientTime:function()
{
    $('tzcClient').show();
    $('tzcLocal').hide();
    var mts = document.getElementsByClassName ( 'timeConvertible' );
    mts.each(function(mt) {
        if(mt.readAttribute( 'title' ) != null && mt.readAttribute( 'title' ).length > 0)
        {
            // etc

The error is in line 6: “mts.each is not a function.”

What’s happening here? The website uses the prototype js library, which provides many nice features (although I’ve decided I prefer jQuery; I’ve been meaning to write about that for a while now). The js developer here didn’t read the prototype documentation cautioning against getElementsByClassName. In Firefox (and Opera), that is a native function, but in IE7 it’s not, so prototype defines it. And prototype defines a more useful version, returning a prototype Array object rather than a native unmunged array. That prototype Array has the “each” function; the native one doesn’t. Firefox and Opera’s superior js implementation leads to a worse result.

Oh noes! I’m a girl!

August 8, 2008

According to this, anyway:

Likelihood of you being FEMALE is 85%
Likelihood of you being MALE is 15%

I have no idea where the male-female site stats ultimately come from, but they’re kind of interesting. Here are a few:

Site                 Male-Female Ratio
youtube.com          1
amazon.com           0.9
imdb.com             1.06
wordpress.com        0.98
att.com              0.83
netflix.com          0.79
expedia.com          0.82
real.com             0.85
southwest.com        0.77
slate.com            1.11
nwa.com              0.82
theonion.com         1.2
thesuperficial.com   1.22
theatlantic.com      1.2
charter.com          0.8
dailykos.com         1.56

Apparently we men let the women do the shopping, household administrivia, and travel planning while we read political commentary, celebrity gossip, and the Onion.

The results vary wildly from machine to machine (or rather browser history to browser history)—I’m a boy on at least one of my other machines.

Not, by the way, that there’s anything wrong with being a girl! I just never thought of myself as one. Sensitive New Age Guy, maybe.

Minimalism

June 22, 2008

I’ve been trying to digest the minimalist redesign of James Bennett’s and Ryan Tomayko’s blogs. On the one hand, wow, cool, I’m all for ruthless simplification and streamlining. On the other, I sort of feel like I’m looking at a web version of a Dogme 95 movie, adhering to the rigorous asceticism of a borderline-sadistic Lars von Trier, if Lars von Trier were a web designer.

First, what I like. God knows most web pages (including my own; I at least have the excuse of being a talentless amateur) are far too crowded and busy and horrid. Even discounting the worst of the web—epilepsy-inducing flashing ads (because of which I use an ad-blocker) and the horrors of MySpace (is it really possible to actually read one of those sites?)—there is just too much crap on the typical page. Streamlining is good, both aesthetically and functionally.

But.

There is a point to a certain amount of “administrative debris.” Like everything else, the web does have certain standards, even if they’re vague and flaky and stubbornly anarchic. When I land somewhere I expect to be able to figure out where I am—I expect there to be some sort of overall site title up at the top of the page somewhere. Breadcrumbs are nice; they give me a nice feeling of being able to figure out where I am. On a blog, I expect a certain amount of familiar navigational administrivia to allow easy poking about. Landing on James and Ryan’s blogs I didn’t see those things; I was confused for a while. It wasn’t even completely obvious I was looking at blogs.  After having looked at them for a while I’m still a bit uneasy, even as I’m very impressed. I feel a little like I’m visiting an exquisite Mies van der Rohe building, but don’t know how to ask where the bathroom is.

Ryan and James’ basic philosophy, borrowed from the great Edward Tufte, is

The idea is that the content is the interface, the information is the interface – not computer administrative debris.

Anything that isn’t conveying some sort of actual information is ruthlessly excised. There is nothing left to take valuable screen real estate and attention away from the content.

Now that is a noble goal. One of the great things about the web is that hypertext really does allow a pretty close approximation of that, if only web designers can be persuaded to use it (read what RT has to say on the subject). But I’m not persuaded that taking the philosophy to its extreme is a good idea. Not everything you might want in an interface has an obvious analog in the content. Shoehorning links into not-quite-obvious places can lead, if not to outright confusion, to a certain loss of clarity.

Take for example, the approach both Ryan and James take to a “home” link. Rather than using the relatively contentless word “Home,” or even a title, they link through their names. But that’s not what I expect. I expect the name to take me to some “about” page, maybe with some amount of information about the author, but probably not to the familiar place that tells me what the point of this site is and what all might be there—which is what it actually does.

Another problem with this strict minimalism: in an environment like the web, in which there just aren’t a lot of resources (e.g. there are only half a dozen workable fonts you can rely on all your users having), it’s difficult to differentiate your site. I’m not talking about the aesthetic outrages marketers commit in the name of “branding,” I’m talking about simple identification. When I’m web-surfing I want to be able to figure out where I am without thinking about it.

Minimalist web design also has other pitfalls. Given nothing but typography to work with, a minimalist designer really has to get the typography right—and that is difficult. James Bennett’s site, for example, has overlong lines. See also some intelligent discussion of typography in his afore-cited post and its comments.

I seem to have written one positive paragraph and six negative. That is not at all an accurate reflection of my actual reaction! I just had a lot more to say about what I didn’t like than about what I did. If by some miracle the web design world would move in their direction life would be better (and if some of their practices became more standard some of my objections about being confused would go away). Really I’m impressed, and am trying to absorb some good practices from these. But I’ll certainly never be as rigorous as they are. Were I a real designer I’d go for a more softcore approach. Some examples of things I like:

  • The atheist website I’ve mentioned before—although really it could use some paring, and its lines are too long.
  • Anticlown Media’s About page. They’re the geniuses behind The Superficial, I Watch Stuff, and Geekologie (decidedly non-minimal sites, but I like them anyway).
  • Some, but by no means all, of the “examples of great web typography” in I Love Typography, here and here.

James Bennett quotes Antoine de Saint Exupéry rather disapprovingly:

In any sort of discussion of minimal or minimalistic design a certain quote, attributed to Antoine de Saint Exupéry, is inevitably bandied about:

A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.

At first this seems like a brilliant insight into the heart of the design process, but it really turns out to be bullshit, because the point at which there is nothing left to take away is the point at which there is nothing left, period.

A commenter helpfully provides the quote in context:

And now, having spoken of the men born of the pilot’s craft, I shall say something about the tool with which they work, the airplane. Have you ever looked at a modern airplane? Have you followed from year to year the evolution of its lines? Have you ever thought, not only about the airplane, but about whatever man builds, that all of man’s industrial efforts, all his computations and calculations, all the nights spent over working draughts and blueprints, invariably culminate in the production of a thing whose sole and guiding principle is the ultimate principle of simplicity?

It is as if there were a natural law which ordained that to achieve this end, to refine the curve of a piece of furniture, or a ship’s keel, or the fuselage of an airplane, until gradually it partakes of the elementary purity of the curve of a human breast or shoulder, there must be the experimentation of several generations of craftsmen. In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away, when a body has been stripped down to its nakedness.

In context, there’s something to this, albeit not necessarily immediately applicable to web design. “There must be the experimentation of several generations of craftsmen,” he says—“stripping a body down to its nakedness” is a laborious process. The simplicity he’s talking about is not merely a matter of stripping things away, but of finding out what is truly necessary, of getting as close as we can to the Platonic Form of whatever we’re doing. Attaining simplicity is not simple.

Debugging JavaScript, now in Opera

June 5, 2008

JavaScript is a neat little language, but (like countless others, apparently) I find it a real pain to debug. Part of my problem is me. I’m relatively new to JavaScript, which isn’t yet as embedded in my brain as, say, C++ is. I still suck as a JavaScript developer. But I refuse to take all the blame: part of the problem is the tools. Firebug and Firefox’s JavaScript Debugger (a.k.a. Venkman) are both useful, but both are buggy and annoying—for some reason I have a terrible time with both getting breakpoints to work reliably. And the less said about the Microsoft Script Debugger the better.

So last week I downloaded a beta of Opera 9.5 which includes an alpha of Dragonfly, Opera’s new suite of developer tools (I have no idea why they called it an “alpha”; possibly they wanted to sow confusion to frighten away the rabble). And so far I’m pleased. It does have a few bugs—resizing the source window doesn’t immediately redisplay correctly, expanding/collapsing/expanding objects in the frame inspection window doesn’t work—but nothing major. It’s also missing some fairly basic features, or has hidden them fairly effectively—there’s no watch window, and no way to display all breakpoints(!). But overall I’ve found it very nice.

It’s also got me using Opera more generally. I still prefer Firefox, for reasons that I’ll try to enumerate at some point, but Opera certainly has its merits.

A little Django flatpage trick

May 31, 2008

For each general area of my humble little website, I have a base template that takes care of a links bar and breadcrumbs and things. All the individual pages in that area extend that base.html, which in turn extends parent base templates. Nothing unusual there, and all well and good.

Except for flatpages. Flatpages are great, but out of the box they’re a tiny bit rigid. I want a flatpage in a particular area to extend the right base.html, but that’s not quite what the flatpage template setting provides—that’s a whole template, and I just want a base to extend. One could of course have a different flatpage template for each base.html, but that’s gross, and not very DRY.

Maybe there is some simple obvious way to do this built in to Django, but I couldn’t find it. My solution is to extract the appropriate base template from the flatpage itself. I put it in a django comment, which will get stripped out of the content before rendering. So the line

{# Base utilities/base.html #}

goes in the body of the flatpage, and gets extracted with a filter. [It might be better to infer it from the page’s url, if one’s directory structure and url structure always match.] The only problem is that I need the filter loaded before the extends tag, and the extends tag needs to come before anything else. It used to be possible to load before extending, but that was an evil (if useful) loophole, now closed.

Like all problems in computer science, this can be solved with another level of indirection. flatpages/default.html loads the filter and extracts the base template name, and then includes another template to do the actual rendering.

Here’s the code, simple and completely non-robust though it is. In a templatetags/flatpage_utils.py or whatever you want to call it:

import re

from django import template
from django.template.defaultfilters import stringfilter

register = template.Library()

@register.filter
@stringfilter
def stripdjangocomments(text):
    """
    Strip django comments from the text.
    """
    s = re.sub(r'{#.*?#}', '', text)
    return s

@register.filter
@stringfilter
def getbase(text, default = "base.html"):
    """
    Look for a string of the form {# Base foo #} and return foo
    """
    m = re.search(r'{#\s*Base\s*(\S*?)\s*#}', text)
    if m and m.groups()[0]:
        return m.groups()[0]
    else:
        return default

In templates/flatpages/default.html

{% load flatpage_utils %}

{% with flatpage.content|getbase as pagebase %}
{% include "flatpages/flatpagebody.html" %}
{% endwith %}

And in templates/flatpages/flatpagebody.html

{% extends pagebase %}
{% load whatever_else %}

{% block title %}
{{ flatpage.title }}
{% endblock %}

{# maybe other stuff #}

{% block content %}
{# add more filters if you like #}
{{ flatpage.content|stripdjangocomments }}
{% endblock %}

And that’s it.

Unicode, Browsers, Python, and Kvetching

May 28, 2008

My HTML/unicode character utility is now in a reasonably usable state. I ended up devoting rather more effort to it than I had originally planned, especially given that there are other perfectly useful such things out there. But once you start tweaking, it’s hard to stop. There are now many wonderful subtleties there that no one but me will ever notice.

What gave me the most grief was handling characters outside the Basic Multilingual Plane, i.e. those with codes above 0xFFFF. That’s hardly surprising. And I suppose it shouldn’t be surprising that browsers handle them so inconsistently. All four major browsers try to display BMP characters using whatever fonts are installed, but not so for the higher ones. In detail:

  • Firefox makes a valiant effort to display them, using whatever installed fonts it can find. It’s fairly inconsistent about which ones it uses, though.
  • IE7 and Opera make no effort to find fonts with the appropriate characters. They do work if you specify an appropriate font.
  • Safari (on Windows) doesn’t display them even if you specify a font. This does not further endear Safari to me.

Oh, and on a couple of XP machines I had to reinstall Cambria Math (really useful for, you know, math) to get the browsers to find it. There must be something odd about how the Office 2007 compatibility pack installed its fonts the first time (I assume that’s how they got there).

On the server side, I knew I would have to do some surrogate-pair processing myself, and that didn’t bother me. Finding character names and the like was more annoying. I was delighted with python’s unicodedata library until I started trying to get the supplementary planes to work. The library restricts itself to the BMP, presumably because python unicode strings have 16-bit characters. The reason for the restriction is somewhat obscure to me—the library’s functions could presumably work either with single characters or surrogate pairs; and I’m pretty sure all the data is actually there (the \N{} for string literals works for supplementary-plane characters, for example).
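For concreteness, here's a minimal sketch of the surrogate-pair arithmetic involved (on a wide build—or any modern Python 3, where strings are code-point based—unicodedata then handles the resulting supplementary-plane character directly):

```python
import unicodedata

def surrogate_pair_to_code_point(hi, lo):
    """Combine a UTF-16 high/low surrogate pair into a single code point."""
    return 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00)

# U+1D49C (MATHEMATICAL SCRIPT CAPITAL A) is encoded as the pair D835 DC9C
cp = surrogate_pair_to_code_point(0xD835, 0xDC9C)
print(hex(cp))                    # 0x1d49c
print(unicodedata.name(chr(cp)))  # MATHEMATICAL SCRIPT CAPITAL A
```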

The whole unicode range ought to work in wide builds of python, but I have no idea if that would work with Django and apache/mod_python and Webfaction, and I’m far too lazy to try. So I processed the raw unicode data into my own half-assed extended unicode library, basically just a ginormous dict with a couple of functions to extract what I want (so far just names, categories and things to come if I ever get around to it).
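The dict-building itself is mostly a matter of splitting UnicodeData.txt on semicolons. A rough sketch of the idea (the fields are code;name;general category;…; the sample line below is illustrative, with trailing fields approximate):

```python
def parse_unicode_data(lines):
    """Build {code point: {'name': ..., 'category': ...}} from
    lines in UnicodeData.txt format (code;name;general category;...)."""
    table = {}
    for line in lines:
        fields = line.strip().split(';')
        table[int(fields[0], 16)] = {'name': fields[1], 'category': fields[2]}
    return table

# Illustrative line in UnicodeData.txt's format:
table = parse_unicode_data(['0041;LATIN CAPITAL LETTER A;Lu;0;L;;;;;N;;;;0061;'])
```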

Some AJAX in Django

May 11, 2008

Months ago I started looking into doing AJAXy things within Django, and (typically for me) never actually did any of them. Finally I’ve started looking at that again. My needs are simple and dull: I just wanted quick and seamless responses to changes in form data in the little utilities I just added to my website.

Now I know very little about Ruby on Rails, but some of what I’ve seen of it does look kinda cool. In particular I liked the respond_to gadget, which switches on requested mimetypes to figure out what response to send from a view (or action, or whatever they’re called in Rails). That seems to allow nice code factoring with minimal syntax, in a way that’s concise and clever (typical for Ruby) and clear (not so typical, IMO…).

I’m not convinced this is a truly great idea, for reasons I’ll detail below, but what the hey, when did that ever stop anyone? So I hacked up a python/Django analogue (see the end of the post). I may of course have misunderstood completely what’s up with the Ruby thing, in which case, oh well.

Here’s an example of how you use this thing—a Responder object—in a view:

def index(request):
    data = { 'foo' : 'bar', 'this' : 'that' }
    responder = Responder(request, 'template', data)

    responder.html

    responder.js

    return responder.response()

This says, more or less: “If the request wants HTML, render the data with the template template.html. If it wants javascript, render with the template template.js and the javascript mimetype.” That is, it’s something like

def index(request):
    data = { 'foo' : 'bar', 'this' : 'that' }

    if <wants html>:
        return render_to_response('template.html', data)

    if <wants js>:
        return render_to_response('template.js', data,
            mimetype='text/javascript' )

    # otherwise fall back to some default
    return render_to_response('template.html', data)

[where that <wants html/javascript> conceals some complexity…]
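Most of that concealed complexity is Accept-header parsing. As a rough sketch of the sort of thing involved (this is my own deliberately naive version, not the actual parseAccept):

```python
import re

def parse_accept(header):
    """Parse an Accept header like 'text/javascript;q=1, */*;q=0.1'
    into (mimetype, quality) pairs, sorted by descending quality.
    Naive: ignores all parameters except q."""
    types = []
    for part in header.split(','):
        pieces = [p.strip() for p in part.split(';')]
        q = 1.0
        for param in pieces[1:]:
            m = re.match(r'q=([0-9.]+)$', param)
            if m:
                q = float(m.group(1))
        types.append((pieces[0], q))
    types.sort(key=lambda t: -t[1])
    return types
```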

The render-a-template behavior can be overridden: those hacky ‘html’ and ‘js’ attributes are callable. If one of them is passed a function, it calls it: if the function returns something, that something is used as the response. It can also modify data and return None to proceed with default handling. Here’s an example I used when testing this stuff on different browsers. It prints the contents of the HTTP_ACCEPT header, and provides a button to fire an ajax request to replace that. In this case I built the javascript messily by hand.

def index(request):
    raw = request.META['HTTP_ACCEPT']
    types = parseAccept(request)

    responder = Responder(request, 'index.html',
            { 'raw' : raw, 'types' : types })

    responder.html

    @responder.js
    def jsresp(*args, **kwargs):
        text = raw + '<br><br>' + \
            '<br>'.join('%s %g' %(s, q) for s,q in types)
        js = "$('content').update('%s');" % text
        return HttpResponse(js, mimetype='text/javascript')

    return responder.response()

Here’s the corresponding template (which uses prototype):

<script src="/static/js/scriptaculous-js-1.8.1/lib/prototype.js" type="text/javascript"></script>
<script type="text/javascript">
    function ajaxUpdate () {
        headers = { Accept : 'text/javascript;q=1, */*;q=0.1' };
        if(Prototype.Browser.Opera)
        {
            headers.Accept += ',opera/hack'
        }

        new Ajax.Request('/',
            { method:'get', parameters: {}, requestHeaders : headers } );
    };
</script>

<div id="content">
    {{ raw }}<br><br>
    {% for s in types %}
    {{ s.0 }} {{ s.1 }}<br>
    {% endfor %}
</div>
<div>
    <br>
    <input id="b1" onclick="ajaxUpdate();" type="button" value="Click Me!">
</div>

So What Have I Learned From This? Well, it all seems to work, so I’ll keep using it. But I’m not totally sold that this—switching on HTTP_ACCEPT, and my own particular implementation—is the Right Way to do things.

Philosophically, the general idea seems awfully prone to abuse. As I understand RESTful web services (i.e. not very well), different requests correspond to different representations of the same underlying data. But are the original html and the javascript that updates it really different representations of the same thing, or different animals altogether? I think that’s a murky point, at best. And I should think that in real life situations it could get messy. What happens, for example, if there is more than one sort of javascript request (e.g. if there are different forms on a page that do fundamentally different things)?

Rails and REST fans, please set me straight here!

Practically, the HTTP_ACCEPT thing seems delicate. I had to futz around a bit to get it to work in a way I felt at all confident of. Browsers seem to have different opinions about what they should ask for. Oddly, the browser that caused me the most problems was Opera—despite what I told prototype’s AJAX request, Opera insisted on concatenating the ACCEPTed mimetypes with the original request’s mimetypes. I hacked around that by throwing in a fake mimetype to separate the requests I wanted from those Opera wants; see the template above and the code below.
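To make the hack concrete, here is a standalone version of the parsing (the header value below is illustrative, not captured from a real Opera session), showing the sentinel cutting off everything Opera appends:

```python
import re

def parse_accept_string(accept):
    """Parse an Accept header value into (mimetype, q) pairs,
    stopping at the opera/hack sentinel."""
    r = re.compile(r'(.*?)(?:\s*;\s*q\s*=\s*(.*))?$')
    types = []
    for s in accept.split(','):
        m = r.match(s)
        t = m.group(1).strip()
        if t == 'opera/hack':
            break
        q = float(m.group(2)) if m.group(2) else 1.0
        types.append((t, q))
    return types

# Opera tacks its own preferences on after what we asked for;
# the sentinel keeps them from ever being considered.
opera = 'text/javascript;q=1, */*;q=0.1, opera/hack, text/html, image/png'
print(parse_accept_string(opera))  # [('text/javascript', 1.0), ('*/*', 0.1)]
```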

So anyway, maybe it would be better, or at least more Django, to be explicit about these AJAX requests, and either give them different URLs (and factor common code out of the various views) or add a piece of get/post data, as here. For now I’ll keep doing what I’m doing, and see if I run into problems.

Here’s the Responder code. It has numerous shortcomings, so use at your own risk. It is completely non-bulletproof (and non-debugged), and won’t work unless you use it just the way I intended (e.g. you’d better give it a template name). It obviously needs more mimetype knowledge—it falls back on python’s mimetypes module, which seems seriously inadequate here. And I’m very lame about how I parse the HTTP_ACCEPT strings.

import re
import os.path, mimetypes
from django.http import HttpResponse
from django.shortcuts import render_to_response
from django.template import RequestContext

class _ResponseHelper(object):
    def __init__(self, ext, mimetypes, responder):
        self.responder = responder
        self.ext = ext
        self.mimetypes = mimetypes
        self.fn = None

    def __call__(self, fn=None):
        self.fn = fn
        return fn

class Responder(object):
    """
    Utility for 'RESTful' responses based on the mimetypes requested
    in the request's HTTP_ACCEPT field, a la Rails' respond_to.

    To use, create a responder object.  Pass it the request object
    and the same arguments you would pass to render_to_response.
    Omit the file extension from the template name---it will be added
    automatically.
    For each type to be responded to, reference an attribute of the
    appropriate name (html, js, etc).
    Call the respond function to create a response.
    The response will be created by appending the extension to the filename
    and rendering to response, with the appropriate mimetype.

    To override the default behavior for a given type, treat its
    attribute as a function, and pass a function to it.
    It will be called with the same arguments as the Responder's constructor.
    The function can modify the passed data and either return None
    (in which case the template handling proceeds) or return a response.
    Function decorator syntax is a convenient way to do this.

    Example:

        responder = Responder(request, 'mytemplate', { 'foo': 'bar' })

        responder.html

        @responder.json
        def jsonresp(request, templ, data):
            data['foo'] = 'baz'

        @responder.js
        def jsresp(request, templ, data):
            return HttpResponse(someJavascript,
                mimetype='application/javascript')

        return responder.response()

    Here an html request is processed as usual.
    A JSON request is processed with changed data.
    A JS request has its own response.

    """
    types = { 'html' : ('text/html',),
              'js' : ('text/javascript',
                      'application/javascript',
                      'application/x-javascript'),
              'json' : ('application/json',),
            }

    def __init__(self, request, *args, **kwargs):
        self.request = request
        self.resp = None
        self.args = list(args)
        self.kwargs = kwargs
        self.priorities = {}
        for t, q in parseAccept(request):
            self.priorities.setdefault(t, q)
        self.defq = self.priorities.get('*/*', 0.0)
        self.bestq = 0.0

    def maybeadd(self, resp):
        try:
            thisq = self.bestq
            for mt in resp.mimetypes:
                q = self.priorities.get(mt, self.defq)
                if q > thisq:
                    resp.mimetype = mt
                    self.resp = resp
                    self.bestq = q
        except Exception:
            # not a usable response candidate; ignore it
            pass

    def response(self):
        if self.resp:
            if self.resp.fn:
                result = self.resp.fn(self.request, *self.args, **self.kwargs)
                if result:
                    return result

            # the template name ought to be the first argument
            templ = self.args[0]
            base, ext = os.path.splitext(templ)
            if not ext:
                templ = "%s.%s" % (base, self.resp.ext)
            self.args[0] = templ
            self.kwargs['mimetype'] = self.resp.mimetype
        # if there wasn't a response, default to here
        response = render_to_response(
                context_instance=RequestContext(self.request),
                *self.args, **self.kwargs)
        return response

    def __getattr__(self, attr):
        mtypes = None
        if attr not in self.types:
            # guess from a dummy filename so guess_type sees a real extension
            mtypes = [mt for mt, enc in [mimetypes.guess_type('x.' + attr)]
                        if mt]
        else:
            mtypes = self.types[attr]
        if mtypes:
            resp = _ResponseHelper(attr, mtypes, self)
            self.maybeadd(resp)
            return resp
        else:
            return None

def parseAccept(request):
    """
    Turn a request's HTTP_ACCEPT string into a list
    of mimetype/priority pairs.
    Includes a hack to work around an Opera weirdness.
    """
    strings = request.META.get('HTTP_ACCEPT', '').split(',')
    r = re.compile(r'(.*?)(?:\s*;\s*q\s*\=\s*(.*))?$')
    types = []
    for s in strings:
        m = r.match(s)
        q = float(m.groups()[1]) if m.groups()[1] else 1.0
        t = m.groups()[0].strip()
        if t == 'opera/hack':
            break
        types.append((t, q))
    return types
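
For what it's worth, the decision maybeadd is making reduces to a few lines. Here is the same q-value negotiation distilled into a standalone function, as a paraphrase of the code above rather than a drop-in replacement:

```python
def negotiate(priorities, candidates):
    """Pick the (ext, mimetype) pair with the highest q-value, using
    the */* entry, if any, as the default for unlisted types."""
    default_q = priorities.get('*/*', 0.0)
    best, best_q = None, 0.0
    for ext, mimetypes_for_ext in candidates:
        for mt in mimetypes_for_ext:
            q = priorities.get(mt, default_q)
            if q > best_q:
                best, best_q = (ext, mt), q
    return best

# A javascript request: text/javascript beats the */* fallback
# that text/html is left with.
priorities = {'text/javascript': 1.0, '*/*': 0.1}
candidates = [('html', ('text/html',)),
              ('js', ('text/javascript', 'application/javascript'))]
print(negotiate(priorities, candidates))  # ('js', 'text/javascript')
```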