## Yes, 1+2+3+… = -1/12. Sort of.

January 26, 2014

[Update: fixed an error that, while embarrassing, has no real bearing on anything, except perhaps to reinforce the admonition to be careful…]

The internet has been doing a whole lot of hating on this Numberphile video demonstrating (or, if you’re one of the haters, pretending to demonstrate through rank chicanery) that $1 + 2 + 3 + 4 + ... = - \frac{1}{12}$. I first saw it via Phil Plait of Slate’s Bad Astronomy blog, who followed his initial mind-blown and gushing post with a rather abject retraction. Some bloggers seem to be taking it quite personally, perhaps outraged that the charlatan Numberphilics are sullying the august and ethereal beauty of mathematics, ruining it for the rest of us and deluding the ignorant masses. Or something. I don’t get it, maybe because the original video is, if awfully (and admittedly) loose and sloppy and glib, pretty much correct.

Much of the vitriol is in the form of complaints that the manipulations in the video are not valid, that most manipulations with infinite series are not in fact valid, that one has to be very careful with infinite series, and that it’s easy to prove anything you want with innocuous-looking sleight of hand. That’s true: it is very easy to get these things wrong. I agree that Numberphile could have been more emphatic in pointing this out. But everything they do is legit. It’s a near thing — what they do in one context does not work in another — but it’s not by accident or legerdemain that their demonstration gives the right answer.

The first thing to get out of the way is that no, of course the sum doesn’t add up to -1/12 in the conventional way that you learned in elementary school. No infinite series has a sum in quite the way that 1+2=3. When we talk about sums of infinite series what we really mean is that we have a summation method, a procedure that takes a series and produces a number, the “sum,” and that the procedure has some set of nice properties. Cauchy summation, taking limits of partial sums, is a particularly nice summation method with particularly nice properties, so nice that we have a special name — “convergent” — for the series for which it works. But it’s worth remembering that taking limits is already a non-trivial and potentially subtle operation.

In practice, what it means to say that $1 + 2 + 3 + 4 + ... = - \frac{1}{12}$ is that if you find yourself adding up all the positive integers and expecting to get an answer, that answer is very likely to be $-\frac{1}{12}$. (Or that you’ve made a mistake somewhere, always a possibility.) That’s certainly the case in string theory, where one common treatment is to hit the infinite sum, point out that it’s hard, and then show from other constraints on the theory that the only workable answer is $-\frac{1}{12}$. Yes, we could and arguably should make up some other notation, but as far as I know there’s nothing standard, and we (especially physicists) abuse notation all the time. It is not nonsense, and it really is deep. And weird.

The usual way to justify the result, and the one that most of the internet thinks Numberphile should have used, is through the Riemann zeta function. In fact, Numberphile did use that in their follow-on video and in Tony Padilla’s response to the whole kerfuffle. That’s fine and correct and all, but (IMO) not a very intuitive way of looking at it. What do zeta functions and analytic continuations have to do with this? Lots, actually, but I’m dense and have always found it to be rather opaque. But there are other ways of looking at the problem, to my taste much more satisfying. One fortuitous consequence of this brouhaha has been a lot of links to a post of Terry Tao’s of a few years ago, which I have found enormously helpful. It’s pretty dense, especially if you’re not a mathematician, but worth reading even if you find yourself glazing over at Bernoulli numbers. It is my chief inspiration in what follows (with the usual caveat that everything I have gotten wrong herein is my fault alone).

Beginning to hint at formality, let’s look at summation methods and their properties. Then we can prove things of the form “if this summation method has these properties, and it works for these series, then it must assign this series this value.” That’s one way to interpret the video sensibly. The problem is that different summation methods allow different manipulations, and figuring out which you can use is subtle. The arguments in the video (if you interpret it as I do) require three different methods, which get progressively more restrictive and finicky about what techniques you can legitimately use. What each of the video’s steps relies on does not work at the next step!

So let’s talk about summation methods and their properties.

Properties first, mostly stolen from wikipedia:

• Regularity: the method gives the ordinary sum for convergent series.
• Linearity: the method behaves as you would expect when you add series together or multiply them by constants.
• Stability: you can offset a series and get the same answer: $a_1 + a_2 + a_3 + ... = 0 + a_1 + a_2 + ...$. Not all methods are stable in this sense, including ones we’ll use!
• [No name that I know of!] We can “expand” the series by sticking a zero before every term and get the same answer:
$a_1 + a_2 + a_3 + ... = 0 + a_1 + 0 + a_2 + 0 + a_3 + ...$
This is not the same as inserting zeros in arbitrary places, including after each term — we can (and will) have series for which $a_1 + a_2 + a_3 + ... \neq a_1 + 0 + a_2 + 0 + a_3 + ...$!

And some summation methods:

• Cesàro summation: even if the partial sums don’t converge, their averages might, and if they do call that the sum.
• Abel summation: if $a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...$ converges for $|x| < 1$, and if $\lim_{x \to 1^-} \sum a_i x^i$ exists, call that the sum.
• “Polynomial renormalization”: I don’t know what to call it, but it’s what Terry Tao does in his deservedly much-quoted blog post mentioned above. Basically, we first “regularize” the sum by multiplying the terms by a smooth cutoff function that is 1 at 0 and 0 for $x \ge N$. If the result is a polynomial in $N$ plus some residual term that goes to 0 as $N$ goes to infinity, we use just the constant part of the polynomial as the sum. Not coincidentally, this is very reminiscent of what physicists often do in quantum field theory.
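
The first two are easy to play with numerically. Here’s a quick sketch (my own illustration, nothing from the video) that approximates the Cesàro sum of Grandi’s series by averaging partial sums, and the Abel sum of $1 - 2 + 3 - 4 + ...$ by evaluating the power series just below $x = 1$:

```javascript
// Cesàro: average the first n partial sums of 1 - 1 + 1 - 1 + ...
function cesaroGrandi(n) {
  var partial = 0, total = 0;
  for (var i = 1; i <= n; i++) {
    partial += (i % 2 === 1) ? 1 : -1; // terms are +1, -1, +1, ...
    total += partial;                  // partial sums are 1, 0, 1, 0, ...
  }
  return total / n; // tends to 1/2
}

// Abel: evaluate 1 - 2x + 3x^2 - 4x^3 + ... for x just below 1.
function abelAlternating(x, terms) {
  var sum = 0;
  for (var n = 1; n <= terms; n++) {
    sum += n * Math.pow(-x, n - 1);
  }
  return sum; // tends to 1/4 as x -> 1 (the closed form is 1/(1+x)^2)
}

console.log(cesaroGrandi(100000));           // about 0.5
console.log(abelAlternating(0.999, 200000)); // about 0.25
```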

Cesàro summation and Abel summation have all the properties I listed; polynomial renormalization, however, is not stable. Any series that is Cesàro summable is also Abel summable (and both give the same answer). I think that Abel summability implies PR-summability with the same value (there is probably a proof lurking in the appropriate section of Tao’s article, but I haven’t figured out whether it works here). The series here, however, are all PR-summable, consistent with Abel summation for the ones that are Abel summable.

We start with Grandi’s series,

$S_1 = 1 - 1 + 1 - 1 + ...$

This is obviously divergent — the partial sums oscillate between 1 and 0 — but it is Cesàro summable, as the partial sums average out to $\frac{1}{2}$.
Next we look at

$S_2 = 1 - 2 + 3 - 4 + ...$.

This is Abel summable, and we could easily use the closed form $1 - 2x + 3x^2 - 4x^3 + ... = \frac{1}{(1+x)^2}$ to calculate its value, but instead we’ll just assume summability and calculate indirectly, using linearity and stability and the value we already have for Grandi’s series:

$\begin{array}{ll} & 1 - 2 + 3 - 4 + ... \\ + & 0 + 1 - 2 + 3 - ... \\ = & 1 - 1 + 1 - 1 + ... \\ = & \frac{1}{2} \end{array}$

so $S_2 + S_2 = \frac{1}{2}$ and $S_2 = \frac{1}{4}$.

Now things get subtle. Let’s assume that this carries through to PR summability (it does), and continue. We can no longer use stability — no more offsetting a series — but we can use the expansion-by-zeros trick if we do it right. Writing $S = 1 + 2 + 3 + 4 + ...$, we compute

$\begin{array}{llc} & &1 + 2 + 3 + 4 + ... \\ - & 4 \times & ( 0 + 1 + 0 + 2 + ... ) \\ = & & 1 - 2 + 3 - 4 + ... \end{array}$

from which we find $S - 4S = \frac{1}{4}$, or $S = -\frac{1}{12}$.
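
In fact you can watch the $-\frac{1}{12}$ emerge numerically. In the sketch below (my own; a Gaussian stands in for the compactly supported cutoff, since it decays fast enough to behave the same way), we regularize the sum as $\sum_n n\, \eta(n/N)$ with $\eta(x) = e^{-x^2}$, subtract the divergent $N^2/2$ piece (the coefficient $\int_0^\infty x\, e^{-x^2}\, dx = \frac{1}{2}$ is specific to this cutoff), and look at what’s left:

```javascript
// Regularize 1 + 2 + 3 + ... with a smooth Gaussian cutoff:
// S(N) = sum over n of n * exp(-(n/N)^2).
// Euler-Maclaurin gives S(N) = N^2/2 - 1/12 + O(1/N^2), so
// subtracting the divergent N^2/2 piece exposes the constant -1/12.
function regularizedSum(N) {
  var sum = 0;
  for (var n = 1; n <= 20 * N; n++) { // terms beyond ~10N are negligible
    var cutoff = Math.exp(-(n / N) * (n / N));
    sum += n * cutoff;
  }
  return sum - N * N / 2;
}

console.log(regularizedSum(100)); // about -0.083333 = -1/12
```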

To recap, we

• used Cesàro summation to compute a “sum” for Grandi’s series
• assumed that $S_2$ is Abel summable, and computed its value using Grandi’s series and the properties of Abel summation, and finally
• used that value and the properties of PR summation to compute the original sum.

As promised, steps two and three were of the form “if this method works, the result has to be that.” It turns out that they do!

It’s worth looking at some manipulations that superficially appear to be every bit as valid as what we’ve just done but don’t work. First, I’ll state without proof that PR summation leads to the result that

$1 + 1 + 1 + ... = -\frac{1}{2}$

If we subtract this from $1 + 2 + 3 + ...$, we get

$0 + 1 + 2 + 3 + ... = \frac{5}{12} \neq -\frac{1}{12}$

— I warned you that offsetting wouldn’t work here. I also warned you that you couldn’t “expand” a series with zeros in the wrong place, and in fact we can calculate that

$1 + 0 + 2 + 0 + 3 + ... = \frac{1}{24}$

Go figure! Some others:

$0 + 1 + 0 + 1 + ... = -\frac{1}{2} \\ 1 + 0 + 1 + 0 + ... = 0 \\ 0 + 1 + 1 + 1 + ... = -\frac{3}{2}$
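
For what it’s worth, the same Gaussian-cutoff sketch reproduces these strange values numerically. The polynomial pieces subtracted below come from the Euler–Maclaurin expansion and are specific to my choice of $\eta(x) = e^{-x^2}$:

```javascript
// Gaussian-cutoff regularization of some of the "counterexample" series.
// Each regularized sum is a polynomial in N plus a constant; the
// polynomial coefficients below are particular to eta(x) = exp(-x^2).
var SQRT_PI = Math.sqrt(Math.PI);
var N = 200;

function cutoffSum(term, N) { // sum of term(n) * eta(n/N)
  var sum = 0;
  for (var n = 1; n <= 20 * N; n++) {
    sum += term(n) * Math.exp(-(n / N) * (n / N));
  }
  return sum;
}

// 1 + 1 + 1 + ... : subtract sqrt(pi)*N/2; the constant is -1/2.
var ones = cutoffSum(function (n) { return 1; }, N) - SQRT_PI * N / 2;

// 0 + 1 + 2 + 3 + ... : subtract N^2/2 - sqrt(pi)*N/2; constant 5/12.
var shifted = cutoffSum(function (n) { return n - 1; }, N)
  - (N * N / 2 - SQRT_PI * N / 2);

// 1 + 0 + 2 + 0 + 3 + ... : term k sits at position 2k - 1, so the
// cutoff sees eta((2k-1)/N); subtract N^2/8 + sqrt(pi)*N/8; constant 1/24.
var expanded = 0;
for (var k = 1; k <= 10 * N; k++) {
  var u = (2 * k - 1) / N;
  expanded += k * Math.exp(-u * u);
}
expanded -= N * N / 8 + SQRT_PI * N / 8;

console.log(ones, shifted, expanded); // about -0.5, 0.41667, 0.041667
```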

So it behooves you to be careful.

## Cat Sonnet

November 9, 2013

Hugo waits, his every muscle taut,
Except for twitching tail uncanny still.
Soon, suddenly, he’ll strike, as swift as thought:
Soon hapless mouse will be but bloody kill.
Then later, after napping, he may play —
Or is it fight? there’s no good way to tell.
Perhaps his human servants’ flesh he’ll flay,
Or make his cat companion’s life a hell.

And now he craves attention, tame and meek,
Imploring only to be stroked and scratched.
He purrs and stretches, lang’rous, long, and sleek.
Gentleness and innocence are matched
With casual violence in the feline race,
Dual natures joined in feline grace.

## The Thin Place

October 31, 2013

The borders of the world are pale now,
See how they grow very thin.
Near to us loom other places
Home to things not quite our kin.
What doors to otherwhere may open,
And what may come in?

An impish goblin, bent on mischief?
A Faerie prince from Tír na nóg?
A witch with her familiar daemon,
Ghostly cat or spectral dog?
Baleful and mysterious figures
Creeping through the fog?

Souls that should be long departed;
Wights and wraiths with sorrow crowned;
Boggarts, brownies, keening banshees
Sounding their unearthly sound;
Kobolds, dwarves, and little men
From caverns underground.

Do not depart, nor take your leave!
Pull the blinds and bar the door ‘gainst
Things of which you daren’t conceive!
Unless you want to see what visits
On All Hallows’ Eve.

## Kindle Fire

November 17, 2011

For the first time in my life, I seem to be the first person I’ve talked to to have the latest electronic toy, one of these. It doesn’t have the enormous coolness factor of something shiny and made by Apple, but still.

Overall I’m loving it. I’m using it more as an extra-portable and convenient web-browser/time-waster than as a reader and Amazon-content-delivery system (its obvious purpose), more cheap-and-small iPad than expensive Kindle. Which was pretty much my plan. It works great for that. It even works for some things I honestly didn’t expect it to, like reading powerpoint presentations (I wasn’t surprised so much that it tried as that it succeeded).

Of course there are some annoying things about it. Many would (I assume) apply to any mobile touch-screen toy—typing is painful, mobile websites are missing bits I take for granted (e.g. formatting in gmail). Some (many? most?) android apps don’t seem to be available in the amazon appstore—that is probably changing even as I write.

But the single most annoying thing is something I had expected to be trivial. I had figured it would be trivial to download a free ebook—from Project Gutenberg, say—and start reading it immediately. Not so! You can download a .mobi file, and then it just sits there refusing to be read. The quickest way I’ve found to get the demmed things to import is to

• Open QuickOffice
• Hold your finger down on the file, and pick “Send” when the menu comes up
• Email it to your kindle account
• Wait a while. Wait some more.

Isn’t there a better way to do this? If not, for the love of heaven why?*

On the subject of free ebooks, I’d like to give a shoutout to someone calling herself Cthulhuchick for digitizing the complete works of H. P. Lovecraft and making them available in a nice free edition. Also for picking a great nom d’internet.

* Maybe I’ll answer my own question: Amazon has no interest in my reading stuff I didn’t buy from them.

## Out of Bembo

November 15, 2011

I just got a copy of Out of Oz from the library, and the first thing I notice is that the typeface is back to the Truesdell of the original Wicked, rather than the always lovely but comparatively bourgeois and anodyne Bembo of Son of a Witch and A Lion Among Men. I hope this means the book is as weird as the first one was.

## In Other News, Sun to Rise in East

November 15, 2011

In June 2010, Daniel Klein published a gratuitously silly piece in the Wall Street Journal’s gratuitously silly op-ed section, purporting to show that liberals are stupider than conservatives and libertarians, at least in terms of basic economics. This occasioned some amount of delight among conservatives and libertarians—vide Veronique de Rugy in the reliably obnoxious National Review Online (to be fair, not all conservative pundits fell for it). The “methodology” Klein relied on centered around a series of agree-or-disagree statements, things like “Third World workers working for American companies overseas are being exploited” and “Minimum wage laws raise unemployment,” most of which I would call at best “arguable,” all of which are pretty clearly politically charged in that they challenged various liberal positions, and all of whose “correct” answers were on the conservative/libertarian side of various issues. So, surprise! liberals did worse on the “do I agree with the conservative/libertarian author” test than conservatives and libertarians.1

To his (relative) credit, Klein eventually realized the problem, and he did a similar exercise with questions slanted the other way (“gun-control laws fail to reduce people’s access to guns”). To what should be no one’s surprise, this time he found that conservatives were the stupid ones. Read about it in the current Atlantic. Klein now thinks what both surveys demonstrate is “myside bias.” Really!

I feel a little sorry for Zeljka Buturovic, Klein’s (possibly more academic) collaborator. According to Klein’s Atlantic article, her goal wasn’t to prove anything about economic literacy, but rather to “[explore] the possibility that ideological differences stem more from differences in people’s beliefs about how the world works than from differences in their basic values.” There might be something to that, although I don’t know how to sort out the causality there (and I haven’t read the journal articles). Since she wasn’t the one who wrote either the WSJ or Atlantic articles, I don’t want to tar her with the same brush. Sorry, Zeljka!

Oh, another annoying thing. The questions and “correct” answers are reported in the most confusing way imaginable. From the Atlantic, citing the questions with the canonically incorrect answers in parentheses:

Here’s what we came up with, again with the incorrect response in parentheses: a dollar means more to a poor person than it does to a rich person (disagree); making abortion illegal would increase the number of black-market abortions (disagree); legalizing drugs would give more wealth and power to street gangs and organized crime (agree); drug prohibition fails to reduce people’s access to drugs (agree); gun-control laws fail to reduce people’s access to guns (agree); by participating in the marketplace in the United States, immigrants reduce the economic well-being of American citizens (agree); …

So how many negatives do I have to wade through to figure out what an enlightened person thinks of gun control laws? I’m pretty sure Klein thinks they reduce access to guns (I think that too), but that’s more because that’s what you would ask if your goal is to prove conservatives are wrong than from trying to cancel out the “fail to” and the “incorrect response in parentheses.”

1. For what it’s worth, on the original set of questions I would probably have mostly given the “conservative” answers, but only if “it depends” wasn’t an option, and only if I had inferred the context that would be generally implied from the fact that someone was asking about that particular set of things and using that particular phrasing.

## node.js

November 13, 2011

I’ve been looking at node.js, the server-side (more general than that, really) javascript execution environment. My gut reaction is that I like it–I like it a lot. But I think it will take some serious investigation to determine whether it’s ready for an industrial-strength application.

Now some details, aimed at node-n00bs such as myself. Anyone who would like to point out how badly I’ve gotten things wrong, feel free!

First, it’s important to know that node.js is architected to solve a specific problem, to wit, scalability in the presence of lots of concurrent access. The usual way something like apache/php handles concurrency is to spawn a thread for each server request. But threads have overhead, and there’s only so far you can push that before you have to buy more servers.

node.js has pretty much the opposite philosophy. The buzzwords you see are “asynchronous” and “event-driven” or just “evented”—its central element is a single-threaded event loop. But that doesn’t tell you much about why it’s a good idea. I found a much more revealing tagline here: “everything runs in parallel, except your code.”

The idea is that in a typical (non-trivial) server request most of the processing time is taken up in things like database or filesystem access, henceforth referred to generically (and not always completely correctly) as “IO.” Those are things that either don’t take a lot of CPU cycles, or at least are already in their own threads or processes. If the thread running those IO operations waits for them to complete, it will be sitting idle; in a single-threaded event loop that means it will block anything waiting for it. So in a single-threaded event loop, you don’t wait! IO operations in node.js don’t return their data directly; instead, they accept callbacks to process the results when they’re ready. Those callbacks are themselves processed in the event loop.
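
To make “single-threaded event loop” a little more concrete, here is a toy sketch of my own (emphatically not node’s actual implementation): a queue of callbacks drained one at a time, with a fake IO operation that, instead of blocking, pushes its callback onto the queue for a later trip through the loop:

```javascript
// A toy single-threaded event loop: NOT how node.js is really
// implemented, just an illustration of the control flow.
var queue = [];
var log = [];

function enqueue(callback) {
  queue.push(callback);
}

// A fake "IO operation": instead of blocking, it returns immediately
// and arranges for its callback to run on a later trip through the
// event loop, with the "result" in hand.
function fakeReadFile(name, callback) {
  enqueue(function () {
    callback("contents of " + name);
  });
}

// Two "requests" arrive; neither blocks the other.
enqueue(function () {
  log.push("request 1 start");
  fakeReadFile("a.txt", function (data) {
    log.push("request 1 done: " + data);
  });
});
enqueue(function () {
  log.push("request 2 start");
  fakeReadFile("b.txt", function (data) {
    log.push("request 2 done: " + data);
  });
});

// The event loop itself: run callbacks until none are left.
while (queue.length > 0) {
  var next = queue.shift();
  next();
}

console.log(log.join("\n"));
```

Note how request 2 gets started before request 1 finishes: nothing ever waits, so one slow “IO” operation never holds up the queue.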

The callback functions themselves should all be lightweight, so that tens of thousands of them can be executed per second. They palm off all the hard work to “IO functions,” black boxes that may in turn add more callbacks. The node.js API, and good node.js plugin modules, are structured to make it positively difficult to do anything that blocks.

Here’s a “hello world” webserver, straight from the node.js front page, that responds to any request with, er, “hello world”:

var http = require('http');
http.createServer(function (req, res) {
  res.end('Hello World\n');
}).listen(1337, "127.0.0.1");


The guts of that there–the argument to http.createServer–is the callback that gets called from the event loop whenever the server fires the “request” event (you don’t see the event loop yourself; you just add callbacks for events, and they are called from the event loop). It does the actual responding by calling methods on res, a ServerResponse object.

Of course a real response will be more complicated (starting with url-parsing, which I’ll ignore completely). Traditionally that might look something like

http.createServer(function (req, res) {
  var data = do_some_io_operation(req);
  var output = do_some_more_processing(data);
  res.end(output);
}).listen(1337, "127.0.0.1");


But those two function calls, if they do anything IO-ish or otherwise nontrivial, are not kosher node.js. Instead it should be something like this:

http.createServer(function (req, res) {
  do_some_io_operation(req, function(data) {
    do_some_more_processing(data, function(output) {
      res.end(output);
    });
  });
}).listen(1337, "127.0.0.1");


What happens on a request is:

• The handler calls do_some_io_operation, which registers a callback and fires off whatever the operation is. It–and the handler–return immediately, and processing can move on to anything that’s waiting in the event queue.
• When the io operation completes, the first inner callback gets executed. That calls do_some_more_processing, which registers yet another callback, starts whatever it starts, and returns.
• When THAT finishes, the innermost callback finally takes all the data it now has available and finishes responding to the request.

Database access might look something like this (not real code, but it might be close, modulo error handling)

http.createServer(function (req, res) {
  dbase.connect('tcp://whateverdbase@localhost/whatever', function(connection) {
    var query = connection.query('SELECT * FROM some_table');
    query.on('row', function(row) {
      res.write('<p>' + row + '</p>');
    });
    query.on('end', function() {
      res.end();
    });
  });
}).listen(1337, "127.0.0.1");


Connecting to the dbase is an IO operation, and since everything depends on that almost the whole response function is inside a callback. Setting up the query as I have it here doesn’t necessarily do anything immediately, so it does NOT need to take a callback, but that would be an alternative API. However, you do have to wait for the results of the query, so it gets callbacks for its “row” and “end” events.

I rather like this functional quasi-continuation-passing style of programming, mostly because I like finding out that abstruse theoretical concepts turn out to be useful for real. It might get a bit messy in real examples, though. At the very least I wonder if it would call for a different indentation style.

[And I have a “solution” for that. Or at least something that’s kept me happily occupied for the last couple of days. More on it later, if I ever get around to it.]

The fact that this is all in javascript has a few real advantages. Javascript is halfway to being a functional language, and is thus well suited to this style of programming. But it’s not Lisp or Haskell, so existing programmers don’t have to rewire their brains to use it. (I love Haskell, but I can’t imagine trying to find and manage a team to write a Real Product with it.) Indeed, any web programmer will already be fluent in javascript, and used to working with callbacks, if not quite to the pervasive level that node.js requires.

Using the same language on the client and server is a nice benefit, too. It makes it easy to share code between the two sides, something that can be useful (caveat: writing javascript that will work properly in a browser and inside node.js is NOT completely trivial, but it’s usually not that difficult either). And it’s nice for us programmers to avoid the annoying context switches between languages. Going from javascript to python, for example, I am forever forgetting to put quotes around dictionary keys.

And compared to other dynamic languages, javascript on google’s V8 engine, which node uses, is really fast. For “pure” stuff, just function calls and for loops and the like, it appears to be more in the C/C++ range than the Python/Ruby/php range.

Now for the downsides. Actually, for all I know there aren’t any prohibitive ones! But node.js is still relatively new, and although I think the core is relatively stable, the general ecosystem isn’t. There are lots and lots of modules for doing various things, but in this sort of open-source world it’s really difficult to know what you can trust. In the Perl world, for example, I’ve seen CPAN hailed as the greatest thing since sliced bread, but I’ve seen a lot of crap there, and sorting through it can be a real cost.

node.js itself is quite low-level, lower-level even than php. Someone needs to encapsulate even simple things like (e.g.) gathering get and especially post data into dictionaries of query values and data (not that that’s hard, but it does need to happen). There are some embryonic higher-level frameworks—Express looks very promising, and at least does that post-data processing—but I am fairly certain there’s nothing anywhere near as mature and trustworthy as Ruby on Rails or Django (or asp.net, if you like that sort of thing).

And conceptually not everyone agrees that eventing is the way to handle concurrency. There are lots of partisans of Erlang and the aforementioned Haskell and even of traditional threading who beg to differ. I’m a bit out of my depth here, so can’t comment usefully.

## Blame Ben Nelson!

August 3, 2011

In any sort of catastrophe it’s important to decide who to blame. A good scapegoat almost never helps to avoid future catastrophes, but who cares? Blame is fun!

So who do we blame for the Sugar-coated Satan Sandwich? Which on its own merits isn’t as bad as it could have been, really… at least civilization didn’t collapse, as was a real possibility; and the creation of the terrifyingly-named Super Congress has the great virtue of postponing the really awful decisions for a few months. No, the real problems with it are the precedents it set, and the fact that our governing class (including the punditocracy) has decided to respond to a genuine economic crisis by focusing on the single least relevant factor and as a result implementing the stupidest possible policies. Who can we blame for that?

The obvious answer for liberalish coastal elites like me is the Tea Party. But that’s too easy. Sure, the Tea Partiers are stupid and crazy, and it’s absolutely terrifying how much power they wield. But that is their nature, and you can’t expect them to go against their nature. They’ve said all along that they don’t like government, so by golly they’re going to try to get rid of it any way they can. They’re the mad bomber from Source Code (name whited out in case you don’t want to know what I’m very mildly spoiling) (paraphrasing, because I forget the real quote): “We can build a new and better society out of the rubble. We just need to make the rubble first.” No wonder they’re cross that they got everything they asked for: they didn’t hold the government and economy hostage to have their demands met; they made their demands so they could shoot the hostage. This is crazy, but, well, they’re crazy, and have a pretty credible insanity defense. “Forgive them, father, for they know not what they do.”

The sane Republicans, insofar as that’s not an oxymoron now, are considerably more culpable. They at least knew what they were doing, and I can only hope they’re full of self-loathing now. This goes double, of course, for Boehner and Cantor, who remind me of Count Baltar from the old Battlestar Galactica (Baltar from the NEW Battlestar Galactica is much more interesting), allying with the sworn enemies of their very existence. But I can almost understand them; like so many politicians through history, they’re acting out of fear, terrified for their jobs. Well, Cantor is trying to get a better job, but close. Again, you can’t expect someone to act against their nature.

How about the Republicans’ corporate masters? We’re getting closer here. At the risk of having Godwin’s Law justifiably invoked, they are treading perilously close to the Weimar-era German business interests who backed the Nazis, thinking they would be easily-controlled tools. I do hope the Chamber of Commerce is reconsidering its priorities. But business leaders in our society, even the allegedly brilliant ones, are always short-sighted, greedy, and stupid, so again you sort of have to expect this.

Pundits? Journalists? Bloggers? Well, yes, they’re mostly pretty bad. But there’s only so much pundits can do to influence the national debate, as much as they would like to think otherwise.

It’s fashionable among my progressive brethren and sisteren to blame Obama for all this. Now I won’t pretend that I’m not disappointed in Obama, but honestly, what could he have done? Obama really wants to make deals and compromise and bring people together, not productive strategies in dealing with modern Republicans, but I honestly don’t know what anyone could have done, no matter how tough. You can’t play chicken with someone who WANTS a head-on collision. Yeah, I know, he should have seen this coming, but IIRC that was just after the LAST hostage crisis; even if Obama could have imagined the teabaggers were that crazy it would have been difficult to continue. It’s kind of like the way we didn’t attack the Russkies in August 1945.

So who do I blame? How did we get to this point anyway? The proximate cause is of course the great Shellacking of 2010, an inevitable result of a lousy economy and a high unemployment rate (and a fickle Democratic base). But one lousy election—one where only one house of Congress changed parties—really shouldn’t have sent the nation’s political discourse so quickly off the sanity cliff. I can easily imagine a parallel universe in which the Democrats lost the House not to the Tea Partiers but to more or less sane Republicans. My theory is that the underlying problem is in what should have been Obama’s greatest triumph, health care reform. You’d think that people would be pleased with some guarantee of health insurance, but no, they HATE it. And it was that hatred (say I) more than anything else that allowed the teahadists to sweep into office, so cowing Democrats that in their confusion and terror they willingly stipulated to the Republican view of the world.

Obviously much of the anti-Obamacare bile is due to Republican lies, not just the flashy ones like Death Panels, but the underlying lie that our health care system is The Best In The World (something I think you can only believe if you’ve never actually met someone from another country). But lies can only get you so far. As I see it the real problem was with the Democrats themselves, specifically with the Senate Democrats, for doing such a miserable job of getting the thing passed, and inviting the world in for an open house at the Sausage Factory. The most egregious example (that I know of; perhaps there were even worse things that weren’t so obvious) was the legendarily odious Cornhusker Kickback—even though it didn’t make it into the final bill, it was so appalling that it rather justifiably convinced the world that the whole process was corrupt and its result inevitably evil. So, on behalf of the Tea Party, thank you Ben Nelson!

## Historical Writing

August 2, 2011

One more note on the previous post:

It’s not fair of me to expect the same level of juicy detail from ancient history as there is (or can be) in classical or modern history; there is simply more information about the more recent periods. As far as I know the Egyptians had no analogs of the wonderfully chatty and gossipy (and unreliable) classical historians, and although they were great record-keepers there just isn’t the same sort of detail we have about more recent times. But I’ll compare anyway. Here’s the sort of thing I love, from Strange Victory, by Ernest May (which I recommend!):

Canaris was a strange character. There were many such, of course, at the center of the Third Reich, but Canaris stands out among them…. Five feet four, with prematurely white hair, he detested tall men and, above all, tall men with small ears. The loves of his life were Seppel and Sabine, two dachshunds from whom he was almost inseparable. When traveling, he would require from his staff frequent reports on their apparent emotions and their bowel movements…

…[An]other possibility is anger over encroachments in his domain by Reinhard Heydrich, the head of Himmler’s security service. Outwardly, his relationship with Heydrich was cordial; the Canarises and the Heydrichs were neighbors and dined together. But Heydrich was thought repulsive by men who had no such reaction to Himmler or Bormann. Also, he was very tall and had small ears.

## The Rise and Fall of Ancient Egypt

August 2, 2011

For years I’ve loved ancient history without really knowing much about it. I’ve never been able to keep the Babylonians and Assyrians completely straight, let alone the Akkadians, and what any of them have to do with the Sumerians (my knowledge of whom comes primarily from having seen The Mole People at an impressionable age). I’ve never been quite sure of who the Hittites and Amorites and Mittanians and Chaldeans were. Those Sumerian city states have great names like Uruk and Ur and Lagash and Eridu but (apart from Abraham’s alleged origin in Ur of the Chaldees; doubtless he left to escape the mole people) I have no idea which is which or why we should care. The comings and goings of the Minoans and Mycenaeans and Achaeans and Dorians and other proto-Greeks are a mystery to me, providing only a little background for the Iliad and Odyssey and some half-remembered Mary Renault novels.

And then there’s Egypt. Egypt is far more familiar to us American rubes than the Babylonians and Assyrians and Mittanians. It’s hard not to be exposed to all sorts of things about Egypt, romantic things like pyramids and mummies and sarcophagi for cats and animal-headed gods and King Tut. Yul Brynner and Anne Baxter (“Mo-o-o-ossses…”) are deeply ingrained as my Platonic forms of Pharaoh and Pharaohess (Pharaoness? Pharessa?). But the ancient Egyptian civilization lasted for 3,000 years. That is a long time, half as long again as it’s been since it became part of Rome until now, and apart from “Cleopatra was at the end” all that history is compressed in our collective unconscious into an undifferentiated mass of pyramids and mummies and sarcophagi for cats and animal-headed gods and King Tut and Yul Brynner and Anne Baxter.

So when I read about Toby Wilkinson’s The Rise and Fall of Ancient Egypt, a popular history of the entire civilization, I hied me to the library and found a copy. It serves its purpose well: I’m much clearer—well, somewhat less vague—on the difference between the Old and Middle and New Kingdoms, and where Memphis was (and why), and who the Hyksos and the Sea People were, and even to a certain extent which king was which and which did what. I know now where the word “Pharaoh” comes from (“Per-aa,” or “Great House,” applied metonymically to its inhabitant) and why (awkwardness about what to call Hatshepsut—there was no such thing as a “Queen”). The book is easy to read, and it zips right along. If you’re looking for a summary of all of ancient Egyptian history, bearing in mind that a “summary of all of ancient Egyptian history” will necessarily be a few hundred pages, this is it.

One of the highlights (I thought) comes at the very beginning of the Egyptian kingdom. Among the artifacts of that era, now a highlight of the Egyptian Museum of Cairo (and of Adrian Veidt’s—Ozymandias’s!—office in Watchmen) is the Narmer Palette. That’s “palette” as in a thing for mixing paints, in particular cosmetics; the ur-Egyptians apparently used them as ceremonial objects in those days, a development of sacred face painting among nomadic cultures or some such. The Narmer palette (apparently) commemorates the very unification of the Two Kingdoms of Egypt under Narmer, Egypt’s possibly mythical first king and founder of its First Dynasty. The obverse is full of Mesopotamian motifs: the king as a bull destroying a walled town, those “serpopards” whose intertwined necks frame the pigment-mixing depression. The reverse is more distinctly Egyptian, with the now-human king wearing the crown of Upper Egypt. Although to my untrained and untrustworthy eye there is still some of Mesopotamia there—the king’s calf muscles look distinctly Assyrian to me, and I wonder how much those cows at the top (proto-Hathors, apparently) owe to Mesopotamian predecessors—the Narmer palette represents the very beginning not only of the Egyptian kingdom but of Egyptian art, which (again to my untrained eye) remained remarkably consistent for millennia thereafter.

But this isn’t a perfect book. Reading it is a great first step in understanding Egyptian history, but all those kings still blur together. To some extent, I suppose that’s unavoidable—there were a lot of them, after all, and really I don’t know that there is always much to distinguish them. But Wilkinson’s (I hate to say it) somewhat cliched writing doesn’t really help. There are too many “brilliant demonstrations of the unite-and-rule concept” and “brilliant flashes of inspiration” and “brilliant but simple expedients.” I’m sure there is precious little that can really be said about any individual, king or otherwise, especially given Egyptian kings’ perennial habit of erasing all record of their predecessors, but I would desperately like more individuating details and anecdotes.

And I’d also like to know more about the Egyptian religion (or should that be religions?). Gods and priests and temples obviously permeate Egyptian history and life, but really all I can say I got from the book is that there were a confusingly large number of gods, whose cults were more or less important in various cities. How did those cults develop, and split and merge, and relate to each other in general? How did perceptions of the gods and details of their cults change over time? What did all those priests do all day, anyway?

So, definite thumbs up, but I am left wanting more. Which is certainly better than being left wanting less—