Unfinished Business

The problem with scheduling something like last week’s Ask a ScienceBlogger for a time when I’m out of town is that any interesting discussions that turn up in comments are sort of artificially shortened because I can’t hold up my end of the conversation from a remote site. I do want to respond (below the fold) to a couple of points that were raised in the comments, though, mostly having to do with my skepticism about the Singularity.

(Side note on literary matters: When I wrote dismissively that the Singularity is a silly idea, I didn’t realize that I was going to spend the flight down to Knoxville on Tuesday reading and enjoying Vernor Vinge’s latest, Rainbows End. Short review: even if people like Ray Kurzweil do silly things with his ideas, Vinge is a very sharp writer, and you should check this book out.)

There were really two comments that deserved fuller answers than I had time to give. First up was “Dr. Pain”:

“As PZ said, there’s no biological way for a population of six billion to be taken out in five generations…”

Tell that to the passenger pigeons.

Shift your argument around a bit: “In the next hundred years, we’re not only going to figure out how to build small computers that communicate wirelessly with a global network, but also do six billion procedures to provide those computers to every man, woman, and child now alive? I don’t think so.” And yet in about 20 years we’re more than halfway towards giving every adult alive a cell phone.

The point of the Singularity is not how it ends, but rather the power of an ever-increasing rate of change. I don’t know where we’ll be in a hundred years, but I doubt it will be “more or less” the same as today’s world. (Heck, today’s world is arguably not the same as the world of 1900…)

First and foremost, there’s a big difference between mass extinction and evolution into a new form. The last passenger pigeon to die at human hands was more or less biologically identical to the first one. The claim that we won’t be around because we’ll all be wiped out by plagues or famines or global thermonuclear war is very different from the claim that our descendants will have transcended into something beyond our current definition of human. I don’t believe we’ll go extinct, either, as it happens (again, six billion plus individuals, spread over several continents and a host of different environments, would be hard to wipe out completely), but that wasn’t the claim I was taking issue with there.

Second, I don’t think the cell phone example is a good one at all. We might be halfway to getting every adult in the US, Europe, and Japan a cell phone, though I doubt even that. The vast bulk of humanity remains blissfully cell-phone-free. It’s a serious mistake to think that what happens in the wealthier parts of the wealthier nations is a good indicator of the general state of the species.

Finally, as for the question of whether the world will be different a hundred years hence, of course it will. But that wasn’t the question I was supposed to answer– what the Seed honchos asked was whether the human race will still be around in a hundred years. While it’s undeniably true that the world is a very different place than it was in 1906, I wouldn’t say that the human race has really changed significantly in the last hundred years, or even the last thousand. People today are pretty much the same as people in the past, and will continue to be pretty much the same a hundred years from now.

The second argument I wanted to discuss further is from Michael Nielsen:

Vinge’s core argument (on which he has several variations) seems to be very simple:

(1) We can expect computers that exceed human intelligence in the relatively near future.

(2) Once (1) occurs, on a very short timescale we should expect computers that enormously exceed human intelligence.

(3) Point (2) will result in an enormous burst of change that will very rapidly change the entire world.

As I said very hastily from Knoxville, I don’t really buy step (1)– lots of people have said that AI is close lots of times in the past, and they haven’t been right. I think the concept of “intelligence” remains pretty slippery, so that it’s not even entirely clear what is meant by having computers exceed human intelligence, or how we’ll know when it happens. I don’t think it’s a sure thing that we’ll get something that everybody agrees is an intelligent computer in the near future– “intelligence” is not only a hard problem, it’s a moving target.

The next step in the chain is the claim that once we have intelligent computers, they’ll very quickly make more intelligent computers, which will make more intelligent computers, and so on. Given that I think AI is a Hard Problem to begin with, I’m not sure I believe that this is a trivial step. I think the argument is that once we’ve figured out what makes a smart computer, either we or they will be able to leverage that into some unified theory of intelligence that can be used to design the next step. That’s at least not a priori foolish, but I’m a little skeptical that it will go quickly.

The third step also has its problems. I’m not sure that the various problems we have are so much a matter of a lack of intelligence as a lack of resources, both physical and informational. To pick an example close to Michael’s field, we don’t have working quantum computers today not because of a lack of good ideas, but because of practical problems. The issue isn’t intelligence, it’s lack of information about the real parameters of real experiments– until you start making ion traps, you don’t really know what the decoherence rates will be like, or how difficult it will be to shuttle ions around in a trap, or any of the other technical hurdles that the experiments have hit over the years.

To some degree, these are problems that can be (and are being) solved through applied intelligence– when a problem arises, somebody will find a clever way around it– but I don’t think that intelligence is really the bottleneck. The limiting factor in the growth of new technologies is not the ideas, but rather the practicalities. There’s a lot of grunt work involved in keeping science marching along, and that stuff takes time, even when you’ve already got good ideas to work with. The essential assumption of step three in the Singularity argument is that having more intelligent computers working on the ideas will make things go incredibly fast, but I’m not sure that’s true, at least not to the degree required for some of the predictions.

(And that’s even leaving aside the problems of unintended consequences and straight-up physical constraints. Absent big loopholes in some of the laws of physics, it’s going to be hard to generate some of the things the Singularity is supposed to provide.)

But in a way, this is all beside the point– again, the question wasn’t “Will the world be different in 2106?”, it was “Will the ‘human’ race be around?” And as I said above, I think the vast bulk of humanity will be more or less the same a hundred years from now as it is now– there are just too many of us, and we age too slowly, for the species as a whole to become unrecognizable in a hundred years.

I shouldn’t really plug this before I finish the book (I’m less than a hundred pages in, and keep getting sidetracked by other things), but I think there’s probably more real insight in a book like Geoff Ryman’s Air, which deals with the impact of new technology on poor people in Asia, than there is in Charlie Stross’s Accelerando, which is largely about the wonders of the Singularity. It’s too easy to forget that the vast majority of people don’t have phones, period, let alone wireless access to the Internet, and that whatever explosion of miracles and wonders we’re seeing here in the First World has not had the same impact on Africa or Asia that it has here. It’ll take some really remarkable economic shifts to move things around enough for the species as a whole to be changed.

5 comments

  1. My main comment here is, oh dammit, I forgot to review my copy of Rainbows End, and I should really get around to doing that. The mini-review is… I wasn’t all that impressed. I rarely say this, but, RE could have and should have been twice as long as it was. I don’t think any of the characters or concepts really got the attention and depth they deserved.

    My less important comments are these:

    The whole discussion of “whether the human race will exist in 2106” inevitably leads to Nick Bostromite proctophilosophical arguments of existential risk. They leave me cold because just about everyone in the discussion seems to have a desired future that they argue toward, and it’s extremely tough to dent that sort of argument.

    Want an argument that the human race won’t be around in 100 years? Okay. Assume in 2056 that there are planetary resources to convert one (1) human being into something universally recognized as not human and better than human that doesn’t go apeshit and kill everyone. Now assume, with little or no justification, that that conversion process goes on a Moore’s-Law-like curve, and that 18 months later, planetary resources can convert two people, etc., etc. Okay, great. Do the math, and by 2106, you’ve been able to convert 10 billion people, and you can convert ten billion more. That’s in the ballpark, yes? (And a true Kurzweilian analysis would fit a double exponential curve and get a much bigger number.)

    Want to shoot it down? Change the doubling time to two years, rather than 18 months. Now you’re down to 33 million by 2106. (And a serious analysis might leave the doubling time alone but point out that a planetary culture that wealthy might have a much higher population growth. Or that some would go for this whole hog while others would eschew it. Or that planetary resources wouldn’t all be going toward this, but to other stuff as well. Or that the first guy transformed in 2056 might not just be sitting around, but might want to apply a significant portion of planetary resources to get to some unspecified version 3.0. Or any number of other objections that can still come up under the stipulation that it’s really possible in the first place.)

    Those exponential and double exponential curves are tricky things. They’re really useful for predicting, “This could/is likely to/is gonna happen a lot sooner than you think,” but pretty helpless at predicting exactly when.
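
    (For anyone who wants to check that arithmetic, here’s a minimal Python sketch. Everything in it is a stipulation from the scenario above– the 2056 start, the one-person starting capacity, the doubling intervals– not a prediction, and the function name and structure are just mine.)

        # Back-of-the-envelope doubling arithmetic for the conversion scenario.
        # All inputs are stipulations from the comment above, not predictions.

        def capacity(year, start=2056, doubling_months=18):
            """Conversion capacity in a given year under steady doubling,
            starting from a capacity of one person in the start year."""
            doublings = (year - start) * 12 // doubling_months
            return 2 ** doublings

        # 18-month doubling: 2**33, about 8.6 billion -- the "ten billion,
        # with ten billion more to spare" ballpark, since cumulative
        # conversions form a geometric series summing to roughly twice
        # the final year's capacity.
        print(f"{capacity(2106):,}")

        # Two-year doubling: 2**25, about 33.5 million -- the "33 million".
        print(f"{capacity(2106, doubling_months=24):,}")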

  2. I think you’re wrong about the cell phone question. Dr. Pain may actually be right about being halfway to having a cell phone for every adult. All I could find was news reports on this, but what I read said that about 200 million people in the US have cell phones. 364 million in Europe. About 90 million in Japan. Oh, and 383 million in China. Those figures alone add up to just over a billion. That’s a pretty sizable percentage of the world already, and it doesn’t even include the rest of Asia or South America.

    Of course, this doesn’t really tell us anything about the Singularity, but it does tell us that really, truly mass penetration of a product is possible.

  3. I never understood why people assume that intelligent computers will be able to make much more intelligent computers. Isn’t it much more likely that if we ever crack the AI nut, we’ll make computers about as smart as us, and they’ll have no better insight into making things smarter than we do?
    That of course leaves aside the question of WHY these AIs will spend all their time working on things we think are interesting. I personally think they’ll spend all their time surfing the web, reading blogs, having trivia contests about 80’s hair bands, etc.

  4. In the Third World, lots of people who can’t get access to wired phones have wireless. It happens because it’s much easier/cheaper to roll out a radio communications network than it is to lay out a large wire network (and a cell phone is just a personal radio– run it through the Japanese transmogrifier to make it small and oooooooh, teh shiny!).

    CORRECTION: I was going to be surprised about Accelerando, but it turns out that the author who does a nice story in a possible Singularity-type future where the Singularity is actually averted is Ken MacLeod, in the novel The Star Fraction.

    Also, a quote from a post back in January when Jack William Bell (who seems to have disappeared from LiveJournal?) was going nuts on this Singularity meme. He firmly believes that not only will it happen but that we are in the middle of it. I ripped his argument a new one.

    ;;————— (yes, I section my code this way)—–

    We’ve done the cloning thing, and if it were as easy as genetic engineering, we’d be awash in infertile people lining up to have kids (the sort who aren’t treatable with the current state of the art in infertility treatments); this presumes that genetic engineering of the sort under consideration, augmentation or instant evolution, as opposed to creating sterile plants with huge yields or adding fluorescent markers to bacteria for research, becomes economical, much easier, and socially acceptable among our elite, who will control access until it becomes as commonplace as a checkup. People seem to forget that Dolly was one of something like 500 attempts, and according to Ian Wilmut (who led the team that created Dolly) in an NPR interview this past week, the South Korean team just showed us that the technology ain’t anywhere near up to the task.

    The quest for artificial intelligence has made some progress, but nothing is foreseeable within the next fifty years without revolutions on par with going from the industrial age to the information age, and not for lack of trying over the last fifty years. What we make now are “intelligent systems”: good heuristics, behavior triggered by Bayesian statistics, and so on, not true intelligence of the sort predicted (a toy example of the distinction follows after this quoted section). Many endeavors that were using neural networks eventually abandoned those, both because it turned out that the decision-making rules formed were craptastic and because it’s so hard to figure out how to train them in ways that won’t create said craptasticacity. The rat “brain” hooked up to the flight sim was really only a few cells, so perhaps the neural networks we want will actually be brains in vats (Holy Shades of Futurama, Batman!).

    On the computer front, we already know that Moore’s Law is dead within 10 years, on quantum mechanical principles. Intel has announced an experimental 40nm process. Exotic (and absurdly expensive) technology might push that one more generation, to 25-30 nanometers, if I remember the roadmap. Apple and MS are both really just innovating incrementally on the OS front. Future advances must come from thinking differently, thinking smarter, not smaller. If anyone’s currently doing that, they’re gonna be very, very rich, and odds are none of us know said person/people, if they even exist yet. I bet that we see a lot of fun stuff come out of accessibility research, with excellent interfaces coming from work aiming to improve the lives of the disabled by letting them interact with information transparently, regardless of operating system or file format or all sorts of details that are irrelevant to the goal. Mayhaps this would be a good topic for those bemoaning hard SF to explore; a good interface, that’s something that will last hundreds or thousands of years (just look at forks and knives, or chopsticks, or QWERTY).

    The Singularity is BS. It’s what you get when non-scientists believe entirely too much of the press releases everyone uses to build hype for their work in the perpetual chase for grant money. Perhaps that’s the future: Science Fiction II: The Search For More Grant Money. There are ultimate physical limits to what we can do, not to mention practical limits entirely separate from those physical limits.

    Not to mention, it’s certainly not true that it’s impossible to foresee the future, macguffin or no; just look at Firefly. What’s needed is simply the temerity and imagination that the job has always called for and the best writers have always had in spades, as well as the willingness to face up to the fact that, no matter what you predict the future will be, there will be ways in which you will be wrong and ways in which you will be right, and the willingness to try one’s hand despite the guarantee of getting vast swaths of it wrong. A good story is a good story. I suppose we could outsource our desire for hard SF, because there are a couple of billion people whose opinion we haven’t asked about this yet. Maybe a few of them can see beyond the Singularity craze, to produce some material worth tossing at the Mind’s Eye. Perhaps a side job for some anime scriptwriters– they produce some really fun stuff to see, though for my money, I’d rather have both, not one or the other.

    ;;——————————————————
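
    (To make the “Bayesian statistics, not true intelligence” point above concrete, here’s a toy sketch of the sort of system that passes for machine smarts: a naive Bayes classifier that sorts text by word counts. The code and the tiny training set are invented for illustration– nothing here comes from the quoted post.)

        # Toy naive Bayes classifier: "behavior triggered by Bayesian
        # statistics". It counts words and applies Bayes' rule with
        # Laplace smoothing; there is no understanding anywhere in it.
        from collections import Counter
        import math

        # Invented training data, purely for illustration.
        training = [
            ("buy cheap pills now", "spam"),
            ("cheap pills cheap deal", "spam"),
            ("meeting notes for tuesday", "ham"),
            ("lunch on tuesday", "ham"),
        ]

        word_counts = {"spam": Counter(), "ham": Counter()}
        label_counts = Counter()
        for text, label in training:
            label_counts[label] += 1
            word_counts[label].update(text.split())

        def classify(text):
            vocab = {w for c in word_counts.values() for w in c}
            best, best_score = None, float("-inf")
            for label in word_counts:
                # log P(label) + sum over words of log P(word | label)
                score = math.log(label_counts[label] / sum(label_counts.values()))
                total = sum(word_counts[label].values())
                for word in text.split():
                    score += math.log((word_counts[label][word] + 1)
                                      / (total + len(vocab)))
                if score > best_score:
                    best, best_score = label, score
            return best

        print(classify("cheap pills"))      # -> spam
        print(classify("notes for lunch"))  # -> ham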


  5. “The claim that we won’t be around because we’ll all be wiped out by plagues or famines or global thermonuclear war is very different than the claim that our descendants will have transcended into something beyond our current definition of human.”

    The original question said nothing about transcension (or biological agents) — that was your addition by way of PZ. Weaponry seems like a perfectly good way for humans to wipe themselves out. And given our dependence on technology, we wouldn’t have to do that good a job to ensure that the societal collapse would claim the remainder.

    But the other point of the passenger pigeon and the cell phone examples was to remind you that humans are notoriously bad at estimating change and more so with accelerating rates of change. People pooh-poohed the notion of wiping out an entire species as populous and inexhaustible as the passenger pigeon; they scoffed at the idea that within a generation everyone in the world would own a cell phone.

    Another interesting point about the cell phone example: People used to say that it would take hundreds of years before every person on Earth had a phone. Even if there was enough ready copper to manufacture the required lines, the effort to string the lines and build the infrastructure was immense. Then cell phones and fiber optics came along and changed the game; it was no longer necessary to string wires, only to erect a tower every few miles and connect them with glass wires. And if there was any real incentive, Iridium could provide most of the backbone for true global coverage without any towers.

    The point is that the rules change in surprising and unexpected ways. In the original post Chad said:

    In the next hundred years, we’re not only going to figure out how to implant supercomputers in the human brain, but also do six billion procedures to provide those computers to every man, woman, and child now alive? I don’t think so.

    Probably not. That’s copper wire thinking. We’ll come up with a genetic modification carried by a ubiquitous virus so that all newborns grow their own cranial computers. Or we’ll seed the ionosphere with nano-machines that use the high energy potentials to construct computers that will float down through the atmosphere and attach themselves to wherever they land. Or some other strange notion.

    If I had to place a bet, I’d say that there will be recognizable humans around 100 years from now, but thinking back on the changes of the last 100 years and projecting forward, I think the human culture of 2106 is going to be far more different from today’s than 2006 is from 1906.

Comments are closed.