Transliteracy in the Long Now

I subscribe to Stewart Brand's Long Now mailing list, in which he summarises talks by various luminaries. The talks always seem to happen in San Francisco, which always means I cannot go… His notes (below) on Vernor Vinge's talk prompt some interesting questions about how transliteracy could develop in the far, far future – what Brand calls the Long Now.
(Note: Vinge's detailed notes for this talk, and the graphs, may be found online.)
Vinge began by declaring that he still believes that a Singularity event in the next few decades is the most likely outcome— meaning that self-accelerating technologies will speed up to the point of so profound a transformation that the other side of it is unknowable. And this transformation will be driven by Artificial Intelligences (AIs) that, once they become self-educating and self-empowering, soar beyond human capacity with shocking suddenness.

He added that he is not convinced by the fears of some that the AIs would exterminate humanity. He thinks they would be wise enough to keep us around as a fallback and backup— intelligences that can actually function without massive connectivity! (Later in the Q&A I asked him about the dangerous period when AIs are smart enough to exterminate us but not yet wise enough to keep us around. How long would that period be? “About four hours,” said Vinge.)
Since a Singularity makes long-term thinking impractical, Vinge was faced with the problem of how to say anything useful in a Seminar About Long-term Thinking, so he came up with a plausible set of scenarios that would be Singularity-free. He noted that they all require that we achieve no faster-than-light space travel.
The overall non-Singularity condition he called “The Age of Failed Dreams.” The main driver is that software simply continues failing to keep pace with hardware improvements. One after another, enormous billion-dollar software projects simply do not run, as has already happened at the FBI, air traffic control, IRS, and many others. Some large automation projects fail catastrophically, with planes running into each other. So hardware development eventually lags, and materials research lags, and no strong AI develops.
To visually differentiate his three sub-scenarios, Vinge showed a graph ranging over the last 50,000 and next 50,000 years, with power (in maximum discrete sources) plotted against human population, on a log-log scale. Thus the curve begins at the lower left with human power of 0.3 kilowatts and under a hundred thousand population, curves up through steam engines with one megawatt of power and a billion population, up further to present plants generating 13 gigawatts.
His first scenario was a bleak one called “A Return to MADness.” Driven by increasing environmental stress (that a Singularity might have cured), nations return to nuclear confrontation and policies of “Mutually Assured Destruction.” One “bad afternoon,” it all plays out, humanity blasts itself back to the Stone Age and then gradually dwindles to extinction.
His next scenario was a best-case alternative named “The Golden Age,” where population stabilizes around 3 billion, and there is a peaceful ascent into “the long, good time.” Humanity catches on that the magic ingredient is education, and engages the full plasticity of the human psyche, empowered by hope, information, and communication. A widespread enlightened populism predominates, with the kind of tolerance and wise self-interest we see embodied already in Wikipedia.
One policy imperative of this scenario would be a demand for research on “prolongevity”— “Young old people are good for the future of humanity.” Far from deadening progress, long-lived youthful old people would have a personal stake in the future reaching out for centuries, and would have personal perspective reaching back for centuries.
The final scenario, which Vinge thought the most probable, he called “The Wheel of Time.” Catastrophes and recoveries of various amplitudes follow one another. Enduring heroes would be archaeologists and “software dumpster divers” who could recover lost tools and techniques.
What should we do about the vulnerabilities in these non-Singularity scenarios? Vinge’s main concern is that we are running only one, perilously narrow experiment on Earth. “The best hope for long-term survival is self-sufficient off-Earth settlements.” We need a real space program focussed on bringing down the cost of getting mass into space, instead of “the gold-plated sham” of present-day NASA.
There is a common critique that there is no suitable place for humans elsewhere in the Solar System, and the stars are too far. “In the long now,” Vinge observed, “the stars are not too far.”
–Stewart Brand