
Saturday, 26 January 2013

A Singular Thesis

The information revolution has brought our planet to an inflexion point. This is our generation's industrial revolution, and conventional wisdom of all sorts is suddenly in doubt. But is the universe really about to wake up? Are we about to look into the face of God? Ray Kurzweil thinks so.


Julian Jaynes rounds out his wonderful The Origins of Consciousness in the Breakdown of the Bicameral Mind with a sanguine remark that the idea of science is rooted in the same impulse that drives religion: the desire for “the Final Answer, the One Truth, the Single Cause”.

Nowhere is this impulse better illustrated, or the scientific mien so resemblant of a religious one, than in Ray Kurzweil’s hymn to forthcoming technology, The Singularity Is Near. For if ever a man were committed overtly - fervently, even - to such a unitary belief, it is Ray Kurzweil. And the sceptics among our number could hardly have asked for a better example of the pitfalls, or ironies, of such an intellectual fundamentalism: on one hand, this sort of essentialism features prominently in the currently voguish denouncements of the place of religion in contemporary affairs, often being claimed as a knock-out blow to the spiritual disposition. On the other, it is too strikingly similar in its own disposition to be anything of the sort. Ray Kurzweil is every inch the millenarian, only dressed in a lab-coat and not a habit.

Kurzweil believes that the “exponentially accelerating” “advance” of technology has us well on the way to a technological and intellectual utopia/dystopia (this sort of beauty being, though Kurzweil might deny it, decidedly in the eye of the beholder) where computer science will converge on and ultimately transcend biology and, in doing so, will transport human consciousness into something quite literally cosmic. This convergence he terms the “singularity”, a point at which he expects with startling certainty that the universe will “wake up”, and many immutable limitations of our current sorry existence (including, he seems to say, the very laws of physics) will simply fall away.

Some, your correspondent included, might wonder whether, this being the alternative, our present existence is all that sorry in the first place.

But not Raymond Kurzweil. This author seems to be genuinely excited about a prospect which sounds rather desolate, bordering on the apocalyptic, in those aspects where it manages to transcend sounding simply absurd. Which isn’t often. One thing you could not accuse Ray Kurzweil of is a lack of pluck; but there’s a fine line between bravado and foolhardiness which, in his enthusiasm, he may have crossed.

His approach to evolution is a good example. He talks frequently and modishly of the algorithmic nature of evolution, but then makes observations not quite out of the playbook, such as: “the key to an evolutionary algorithm ... is defining the problem. ... in biological evolution the overall problem has always been to survive” and “evolution increases order, which may or may not increase complexity”.

But to suppose an evolutionary algorithm has “a problem it is trying to solve” - in other words, a design principle - is to emasculate its very power, namely the facility of explaining how a sophisticated phenomenon comes about *without* a design principle. Evolution works because organisms (or genes) have a capacity - not an intent - to replicate themselves. Nor, necessarily, does evolution increase order. It will tend to increase complexity, because the evolutionary algorithm, having no insight, is unable to “perceive” the structural improvements implied in a design simplification. Evolution has no way of rationalising design except by fiat. The adaptation required to replace an overly elaborate design with a more effective but simpler one is, to use Richard Dawkins’ expression, an implausible step back down “Mount Improbable”. That’s generally not how evolutionary processes work: over-engineering is legion in nature; economy of design isn’t, really.
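The distinction can be made concrete with a toy simulation - mine, not Kurzweil’s, and every parameter in it is arbitrary - in which no “problem to solve” is defined anywhere, yet adaptation still occurs, because genomes differ only in their propensity to replicate:

```python
import random

# A toy replicator model. No fitness "goal" is defined anywhere: each
# genome is just a number in [0, 1], and its value determines only how
# often it happens to copy itself. Selection emerges from differential
# replication, not from a problem the algorithm is "trying to solve".
# (Illustrative sketch only; all parameters are arbitrary.)

random.seed(42)

def step(population, capacity=100):
    offspring = []
    for genome in population:
        # Capacity, not intent: higher values replicate more often.
        copies = 1 + (1 if random.random() < genome else 0)
        for _ in range(copies):
            # Copying is imperfect - a small random mutation each time.
            child = min(1.0, max(0.0, genome + random.gauss(0, 0.02)))
            offspring.append(child)
    # The environment supports only `capacity` replicators; survival
    # among those that exist is random, not merit-based.
    random.shuffle(offspring)
    return offspring[:capacity]

population = [0.1] * 50  # start with poor replicators
for _ in range(200):
    population = step(population)

mean = sum(population) / len(population)
print(mean)  # typically drifts well above the starting 0.1
```

Nothing in the loop “knows” what the problem is; replication propensity ratchets up anyway, which is the whole trick.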

This sounds like a picky point, but it gets to the nub of Kurzweil’s outlook, which is to assume that technology evolves like biological organisms do - that a laser printer, for example, is a direct evolutionary descendant of the printing press. This, I think, is to superimpose a convenient narrative over a process that is not directly analogous: a laser printer is no more a descendant of a printing press than a mammal is a descendant of a dinosaur. Successor, perhaps; descendant, no. But the “exponential increase in progress” arguments that Kurzweil repeatedly espouses depend for their validity on this distinction.

The “evolutionary process” from woodblock printing to the Gutenberg press, to lithography, to hot metal typesetting, to photo-typesetting, to the ink jet printer (thanks, Wikipedia!) involves what Kurzweil would call “paradigm shifts” but which a biologist might call extinctions; each new technology arrives, supplements and (usually) obliterates the existing ones, not just by doing the same job more effectively, but - and this is critical - by opening up altogether new vistas and possibilities that weren’t even conceived of in the earlier technology - sometimes even at the cost of a certain flexibility inherent in the older technology. That is, development is constantly forking off in un-envisaged, unexpected directions. This plays havoc with Kurzweil’s loopy idea of a perfect, upwardly arcing parabola of utopian progress.

It is what I call “perspective chauvinism” to judge former technologies by the standards and parameters set by the prevailing orthodoxy - being that of the new technology. Judged by such an arbitrary standard older technologies will, by degrees, necessarily seem more and more primitive and useless. The fallacious process of judging former technologies by subsequently imposed criteria is, in my view, the source of many of Ray Kurzweil’s inevitably impressive charts of exponential progress. It isn’t that we are progressing ever more quickly onward, but the place whence we have come falls exponentially further away as our technology meanders, like a perpetually deflating balloon, through design space. Our rate of progress doesn’t change; our discarded technologies simply seem more and more irrelevant through time.

Kurzweil may argue that the rate of change in technology has increased, and that may be true - but I dare say a similar thing happened at the time of the agricultural revolution and again in the industrial revolution - we got from Stephenson’s Rocket to the diesel locomotive within 75 years; in the subsequent 97 years the train’s evolution has been somewhat more sedate. Eventually, the “S” curves Kurzweil mentions flatten out. They clearly aren’t exponential, and pretending that an exponential parabola might emerge from a conveniently concatenated series of “S” curves seems credulous to the point of disingenuity. This extrapolation into a single “parabola of best fit” has heavy resonances of the planetary “epicycle”, a famously desperate attempt of Ptolemaic astronomers to fit “misbehaving” data into what Copernicans would ultimately convince the world was a fundamentally broken model.
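The point about “S” curves can be put arithmetically. A sketch (logistic parameters entirely of my own choosing) shows that a single paradigm’s increments grow at first - which is when it looks exponential - and then shrink as it saturates:

```python
import math

# One technology paradigm modelled as a logistic "S" curve: growth that
# looks exponential early on, then flattens as the paradigm saturates.
# (Ceiling, rate and midpoint are arbitrary, for illustration only.)

def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    """Progress of a single paradigm at time t, bounded above by `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def increment_ratio(f, t, dt=1.0):
    """Ratio of successive increments: roughly constant and above 1 for a
    true exponential; for a logistic it falls below 1 past the midpoint."""
    return (f(t + dt) - f(t)) / (f(t) - f(t - dt))

print(increment_ratio(logistic, 5.0))   # > 1: the early, "exponential-looking" phase
print(increment_ratio(logistic, 15.0))  # < 1: the flattening phase
```

Chaining several such curves end to end still gives a staircase of saturations, not an exponential - unless each successive paradigm obligingly arrives on schedule, which is precisely what is in dispute.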

If this is right, then Kurzweil’s corollary assumption - that there is a technological nirvana to which we’re ever more quickly headed - commits the inverse fallacy of supposing the questions we will ask in the future - when the universe “wakes up”, as he puts it - will be exactly the ones we anticipate now. History would say this is a naïve, parochial, chauvinistic and false assumption.

And that, I think, is the nub of it. One feels somewhat uneasy so disdainfully pooh-poohing a theory put together with such enthusiasm and such an energetic presentation of data (and to be sure, buried in Kurzweil’s breathless prose is plenty of learning about technology which, if even half-way right, is fascinating), but that seems to be it. I suppose I am fortified by predictions like the following, made just seven years ago and seeming not to have come anything like true just yet:

“By the end of this decade [i.e., by 2010] computers will disappear as distinct physical objects, with displays built in our eyeglasses and electronics woven into our clothing”

On the other hand I could find scant reference in Kurzweil’s book to “cloud computing” or equivalent phenomena like the Berkeley Open Infrastructure for Network Computing project, which grew out of schemes like SETI@home. Now here is a rapidly evolving technological phenotype, for sure: hooking up thousands of serially processing computers into a massive parallel network, giving processing power way beyond any technology currently envisioned. It may be that this adaptation means we simply don’t need to incur the mental challenge of molecular transistors and so on: there must, at some point, be an absolute limit to miniaturisation, and as we approach it the marginal utility of developing the necessary technology will swan-dive just as the marginal cost ascends to the heavens; whereas the parallel network involves none of those limitations. You can always hook up yet another computer, and every one will increase performance.
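That last claim deserves a hedge, which Amdahl’s law supplies: adding machines keeps paying off only when almost none of the work is serial. Happily, SETI@home-style number-crunching is very nearly such a case. A sketch with illustrative numbers (the serial fractions below are invented for the example):

```python
# Amdahl's law: if a fraction `serial_fraction` of a job cannot be
# parallelised, speedup is capped at 1 / serial_fraction no matter how
# many machines are added. (The workload numbers below are invented.)

def amdahl_speedup(n_workers, serial_fraction):
    """Speedup over a single worker for a job that is
    (1 - serial_fraction) parallelisable across n_workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# A SETI@home-style workload, with almost no serial coordination:
print(amdahl_speedup(1000, 0.0001))  # ~909x - very nearly linear

# A workload that is 5% serial: 1000 machines cannot beat 20x.
print(amdahl_speedup(1000, 0.05))    # ~19.6x, capped below 1 / 0.05 = 20x
```

So each extra computer does help, but for most workloads each helps a little less than the last; distributed projects of the BOINC sort work because their work units are independent.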

I suppose it’s easy to be smug as I type on my decidedly physical computer, showing no signs of being superseded by VR goggles just yet when we’re already three years into the new decade (he also missed the mobile computing revolution, come to think of it), but the point is that the evolutionary process is notoriously bad at making predictions (until, that is, the results are in), being path-dependent as it is.


You can’t predict for developments that haven’t yet happened. Kurzweil glosses over this shortfall at his theory’s cost. 

A version of this article was first published on Amazon in 2010.

Monday, 21 January 2013

The future's so bright I've got to wear VR goggles which help me empathise.

If your kids already spend eight hours a day online, the future depicted in Parag and Ayesha Khanna's Hybrid Reality might ring true for you. Others may be harder to persuade.

Hybrid Reality is the short monograph which I suppose serves as the flagship publication of Parag and Ayesha’s Hybrid Reality Institute, an organisation whose raison d’être seems to be the pursuit of unfettered wishful thinking about the potential of technology. Good luck to them: dreaming up whacky visions of the future does sound like fun, and while it’s hard to see any practical application for the Fortune 500 companies the authors claim as their clients, if they’ve managed to persuade these conglomerates otherwise, happy days. Especially if in the future, everything is going to be crowd-sourced and free.

Hybrid Reality is thus an attempt to sketch out a future based on extrapolating current trends of technological development: a (thankfully slimmer) companion-piece to Ray Kurzweil’s The Singularity Is Near.

In fairness, Hybrid Reality quickly moves beyond stock platitudes about crowdsourcing, but where it does, it does so without much credibility. The text is plastered with buzzwords borrowed from other disciplines and deployed with carefree abandon:

accelerated evolution creates what we might call a Heisenbergian or quantum society: we are particles whose position, momentum and impact on others, and the impact of others on us, are perpetually uncertain due to constant technological disruptions.

Okayyy. Amongst the rhubarb there is a point to be made about rapidly disrupting technologies, but that’s not it. To the contrary, the rate of change is so fast that genuinely novel technologies and businesses have little chance to establish themselves, and those which do get a foothold do so as much by fiat as by sober business development, and then proceed to hammer everyone else into the ground. In such a nasty, brutish and short environment, conditions favour not elegance and sophistication in design but the lowest common denominator.

Breath-taking technologies of the sort which overflow this book, on the other hand, assume a sophistication which needs a warm and safe environment in which to incubate. Increasingly, new technologies never get the chance to be smart. It isn’t accelerated evolution that’s going on, but accelerated extinction.


I suppose you might expect a degree of credulity from faculty members of the “Singularity University” but, still, their vision owes as much to science fiction as it does to academic analysis and nothing at all to the traditional discipline of economics. Perhaps the dismal science, too, will succumb to the information revolution: cavalierly, Samuel Huntington’s maxim is reformulated so that it is not economics but technology that is “the most important source of power and wellbeing”. Older hands will recall hearing that kind of talk before, and it didn’t work out so well in the dot-com bust, when hundreds of “new economy” business models folded once it turned out they did need to generate revenue after all.

It’s easy to be a naysayer, of course, but all the same my hunch is that the Khannas’ monologue has little value for anything but excitable kite flying. Many of their assertions strongly suggest this pair really, literally, need to get out more. “Of the eight hours a day children today spend online, 1.5 involve using avatars…” they say, as if that initial premise may be taken as a given. Eight hours a day online? Which children are these, exactly? “Robots are incontestably becoming more ubiquitous, intelligent and social” and “represent an entirely new type of ‘other’ that we interact with in our social lives”. Elsewhere, “Technik”, as they put it, seems to have the power to change the laws of nature, and in the short term: “The average British citizen will likely live to be 100 years old”, they predict. Technik is so clever it can even grant us powers which we already have: In the future there will be virtual reality goggles, we are told, which can “sense other people’s stress levels”. Just imagine being able to do that.


You can, in any case, read your fill here of all the ways the internet of things will provide an untold wealth of cool free stuff, but note the lack of any financial analysis: all this cool stuff requires effort, not just to design and conceptualise, but to manufacture, distribute, house, power, maintain and (to the extent it can’t be fully computerised) operate. And effort, generally, requires money. Previous generations of technological development have shifted the labour demand curve upwards: automation has taken out repetitive, low-value tasks but created more complex ones designing, building and maintaining the machinery to carry out those tasks. As a result we have grown busier with each development, not more idle - though our occupations have become more complex, challenging and rewarding. The Khannas’ brave new world would, by implication, flip that on its head.

For argument’s sake, let’s say the robots can fully take over, perform our manual labour, wipe bottoms, cure diseases and revolutionise production across all industries and agricultures so that human intervention is not required at all. Hard to see, but let’s say. Is a permanent state of blissful, but chronic, total global unemployment a feasible basis for an economy?

As far as I know, man cannot live by Facebook likes alone. Last time I checked, rent wasn’t free. Nor was power, food, nor raw materials. As we go on, they’re getting harder (and costlier) to extract. So who will finance these lives of leisure? With what? Why? Who would provide services, when there was no-one to pay for them? Is it perhaps the case that personal labour, rather than being an unfortunate by-product of the “old economy” way of doing things, is in fact an immutable in the calculus of value?

Dreaming about amazing technologies which might be coming down the pike is the job of a science fiction writer. The academic question is less glamorous and more fundamental: how, within the new parameters of digital commons and in a post-growth world, can anyone devise a business model able to deliver them? These, it seems to me, are the really challenging questions, and you won’t find them addressed in this book.


Friday, 20 January 2012

Vanquishing magic by sleight of hand

Douglas Hofstadter’s essay on the incompleteness of loopy logical systems starts out brightly, but makes a disappointing and strangely unimaginative resort to reductionism in the end. Pity.
Philosophy, to those who are disdainful of it, is a sucker for a priori sleights of hand: purely logical arguments which do not rely for grip on empirical reality, but purport to explain it all the same: chestnuts like “cogito ergo sum”, from which Descartes concluded a necessary distinction between a non-material soul and the rest of the world.

Douglas Hofstadter is not a philosopher (though he’s friends with one), and in I am a Strange Loop he is mightily disdainful of the discipline and its weakness for cute logical constructions. All of metaphysics is so much bunk, says Hofstadter, and he sets out to demonstrate this using the power of mathematics and in particular the fashionable power of Gödel’s incompleteness theorem.

Observers may pause and reflect on an irony at once: Hofstadter’s method - derived a priori from the pure logical structure of mathematics - looks suspiciously like those tricksy metaphysical musings on which he heaps derision. As his book proceeds this irony only sharpens.

But I’m getting ahead of myself, for I started out enjoying this book immensely. Until about halfway I thought I’d award it five stars - but then found it increasingly unconvincing and glib, notably at the point where Hofstadter leaves his (absolutely fascinating) mathematical theorising behind and begins applying it. He believes that from purely logical contortion one may derive a coherent account of consciousness (a purely physical phenomenon) robust enough to bat away any philosophical objections, dualist or otherwise.

Note, with another irony, his industry here: to express the physical parameters of a material thing - a brain - in terms of purely non-material apparatus (a conceptual language). In the early stages, Professor Hofstadter brushes aside reductionist objections to his scheme which is, by definition, an emergent property of, and therefore unobservable in, the interactions of specific nerves and neurons. Yet late in his book he is at great pains to say that that same material thing cannot, by dint of the laws of physics, be pushed around by a non-material thing (being a soul), and that configurations of electrons correspond directly to particular conscious states in what seems a rigorously deterministic way (Hofstadter brusquely dismisses conjectures that your red might not be the same as mine). Without warning, in his closing pages, Hofstadter seems to declare himself a behaviourist. Given the excellent and enlightening work of his early chapters, this comes as a surprise and a disappointment to say the least.

Hofstadter’s exposition of Gödel’s theory is excellent and its application in the idea of the “Strange Loop” is fascinating. He spends much of the opening chapters grounding this odd notion, which he says is the key to understanding consciousness as a non-mystical, non-dualistic, scientifically respectable and physically explicable phenomenon. His insight is to root consciousness not in the physical manifestation of the brain, but in the patterns and symbols represented within it. This, I think, is all he needs to establish to win his primary argument, namely that Artificial Intelligence is a valid proposition. But he is obliged to go on because, like Darwin’s Dangerous Idea, the Strange Loop threatens to operate like a universal acid and cut through many cherished and well-established ideas. Alas, some of these ideas seem to be ones Douglas Hofstadter is not quite ready to let go. Scientific realism, for example.
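The programmer’s miniature of a strange loop, for what it’s worth, is the quine: a program whose output is its own source code, built - rather like Gödel’s self-referential sentence - by quoting a template inside itself. This is a standard construction, shown here only to make the self-reference concrete:

```python
# A quine: a string of symbols that describes itself. The template
# contains a placeholder (%r) into which its own quoted form is
# substituted - the same quoting trick Gödel's construction performs
# on arithmetic statements. The three statements below, minus these
# comments, are exactly what the program prints.

template = 'template = %r\nsource = template %% template\nprint(source)'
source = template % template
print(source)
```

Hofstadter’s claim, in effect, is that a symbol system rich enough to pull this trick on itself is rich enough to be a mind; whether the trick scales that far is exactly what is at issue.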

The implication of the Strange Loop, which I don’t think Hofstadter denies, is that a string of symbols, provided it is sufficiently complex (and “loopy”), can be a substrate for a consciousness. That is a Neat Idea, though I’m not persuaded it’s correct: Hofstadter’s support for it is only conceptual, and involves little more than hand-waving and appeals to open-mindedness.

But all the same, some strange loops began to occur to me here. Perhaps rather than slamming the door on mysticism, Douglas Hofstadter has unwittingly blown it wide open. After all, why stop at human consciousness as a complex system? Conceptually, perhaps, one might be able to construct a string of symbols representing God. Would it even need a substrate? Might the fact that it is conceptually possible mean that God therefore exists?

I am being mendacious, I confess. But herein lie the dangers (or irritations) of tricksy a priori contortions. However, Professor Hofstadter shouldn’t complain: he started it.

Less provocatively, perhaps a community of interacting individuals, like a city - after all, a more complex system than a single one, QED - might also be conscious. Perhaps there are all sorts of consciousnesses which we can’t see precisely because they emerge at a more abstract level than the one we occupy.

This might seem far-fetched, but the leap of faith it requires isn’t materially bigger than the one Hofstadter explicitly requires us to make. He sees the power of Gödel’s insight as being that symbolic systems of sufficient complexity (“languages” to you and me) can operate on multiple levels, and that if they can be made to reference themselves, the scope for fractalising feedback loops is endless. The same door that opens the way to consciousness seems to let all sorts of less appealing apparitions into the room: God, higher levels of consciousness and sentient pieces of paper bootstrap themselves into existence too.
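The self-reference doing all the work here can be made concrete with a toy example: a quine, a program whose text reproduces itself. This is only an illustrative sketch of a symbol string that “talks about” itself, not anything from Hofstadter’s book:

```python
# A minimal quine: a string that, substituted into itself, yields the
# exact source text of the two lines that built it. This is the simplest
# sort of symbolic self-reference (illustrative only).
src = 'src = %r\nquine = src %% src'
quine = src % src  # substitute the string into its own template

# 'quine' now holds the text of the two assignment lines above
print(quine)
```

Nothing mystical happens, of course; the point is only that self-reference in a symbol system is cheap to construct, which is precisely why the door, once opened, is hard to close selectively.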

This seems to be a Strange Loop Too Far, and as a result we find Hofstadter ultimately embracing the reductionism of which he was initially so dismissive, veering violently towards determinism and concluding, with a behaviourist flourish, that there is no consciousness, no free will, and no alternative way of experiencing red. Ultimately he asserts a binary option: unacceptable dualism, with all the fairies, spirits, spooks and logical lacunae it implies, or a pretty brutal form of determinist materialism.

There’s yet another irony in all this: he repeatedly scorns Bertrand Russell’s failure to see the implications of his own formal language, while apparently failing in a comparable way to see the implications of his own model. Strange Loops allow - guarantee, in fact - multiple meanings via analogy and metaphor, and provide no means of adjudicating between them. They vitiate the very idea of transcendental truth which Hofstadter seems suddenly so keen on. The option isn’t binary at all: rather, it’s a silly question.

In essence, all interpretations are metaphorical; even the “literal” ones. Neuroscience, with all its gluons, neurons and so on, is just one more metaphor which we might use to understand an aspect of our world. It will tell us much about the brain, but very little about consciousness, seeing as the two operate on quite different levels of abstraction.

To the extent, therefore, that Douglas Hofstadter concludes that the self is an illusion, his is a wholly useless conclusion. As he acknowledges, “we” are doomed to “see” the world in terms of “selves”; an a priori sleight-of-hand, no matter how cleverly constructed, which tells us that we’re wrong about that (and that we’re not actually here at all!) does us no good. Neurons, gluons and strange loops have their place - in many places this is a fascinating book, after all - but they won’t give us any purchase on this debate.

Thursday, 9 June 2011

Evolving Technology

Brian Arthur has trouble seeing where innovation and technological progress fit into the view that the “discovery” phase of scientific knowledge is all over bar the shouting.

If the evolutionary account is right, it doesn’t work like that, and we’re the better for it, he says.

Brian Arthur’s The Nature of Technology is somewhat ponderous in its beginning (and, in truth, throughout) but is all the same most encouraging in its epistemological disposition - assuming as it does the recursivity of society and technology, rather than touting the (conventional) view that one is strictly a product of the other. This points you towards a path-dependent model not just for technology, but for society and indeed knowledge itself.

But for some, this is dangerous stuff. It leads in turn to uncomfortable conclusions (at least for the neo-enlightenment brigade) which open the door to all that crazy post-modern stuff. 

Because he doesn’t have to, Arthur doesn’t go there, but he does cast a kindly glance at Thomas Kuhn. (I like people who cast kindly glances at Thomas Kuhn: these days they’re few and far between). 

Arthur doesn’t have to go there (at first) because technology, as implemented, is almost by definition infra-paradigmatic: if “science” is its philosophical principle, technology is its practical implementation - very much the sort of thing Nancy Cartwright would call a “nomological machine”. That is, a construction designed to give a dependable result in a constrained set of circumstances: the machine not only prescribes the parameters for a “successful” result, but constrains the environment and operating circumstances in which outcomes are generated to ensure the result falls within those parameters, and then, reliably, forces that outcome. (A technology that is unable to force an outcome within its own parameters for a successful result is simply a machine that doesn’t work.)

But this leaves a gap. If technology is merely the practical implementation of “normal science”, it has a hard time explaining innovation. As Arthur puts it: 

“Combination [of existing technologies] cannot be the only mechanism behind technology’s evolution. If it were, modern technologies such as radar or magnetic resonance imaging ... would be created out of bow-drills and pottery firing techniques, or whatever else we deem to have existed at the start of technological time.”


The problem, which Arthur specifically sets out to address, is how to account for the “onward” development of technology. Arthur is clear that it is path-dependent (“had we uncovered phenomena over historical times in a different sequence, we would have developed different technologies”) but even this insight, I think, risks undercooking the importance of the narrative conversation: it is not just that combinations of technologies through time let us further uncover existing theories and give us better and more powerful and enabling answers to our original questions; they prompt completely new questions: they afford new ways of looking at the world. New ways of looking generate new opportunities, and new problems. 

This is a significant point. 

For example: prior to the digital age, categorisation of information was a difficult and inherently limited (and, actually, biased) thing: the physical nature of information storage (books) dictated a single hierarchy, and required commitment to a single filing taxonomy (without owning more than one copy of a book, you can’t file it in two places). Digitisation changed that forever: the Dewey Decimal system - brilliant in its design though it undoubtedly was - solves a problem we no longer have, but at the cost of forcing our hand in a way we no longer need. Digital technology has enabled us to entirely re-evaluate what information really is.
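The shift can be sketched in a few lines of code. The titles, shelf-mark and tags below are invented for illustration; the point is only that a physical shelf admits one location per book, while a tag index admits as many as you like:

```python
# Physical shelving: each book occupies exactly one slot in one taxonomy.
shelf = {
    "576.8 DAR": "On the Origin of Species",  # an invented Dewey-style shelf-mark
}

# Digital tagging: the same book filed under several categories at once.
tags = {
    "On the Origin of Species": {"biology", "history of science", "travel writing"},
}

# Inverting the tag index recovers a virtual "shelf" per category on demand,
# something a single physical ordering cannot do.
by_tag = {}
for title, labels in tags.items():
    for label in labels:
        by_tag.setdefault(label, []).append(title)
```

The inverted index is the new question the old technology never prompted: once filing is free, “where does this belong?” stops being a question with one answer.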

As he goes on, Arthur explicitly keeps in mind two “side issues” that constantly recur in writings about technology: the analogy to Darwin’s program of evolution, on one hand, and the analogy to Kuhn’s theories of scientific revolution on the other. But these are, to my mind, different articulations of the same idea: that “questions” and “answers” (whether you characterise these as “environmental features” and “biological adaptations which evolve to deal with them”, or “observational conundrums” and “scientific theories which purport to explain them”) are, to a large extent, interdependent: something is only a conundrum if it appears to contradict the prevailing group of theories. What both Darwin and Kuhn suggest is that “linear progress” - insofar as it implies a predetermined goal to which an evolutionary algorithm is progressing - is a misconceived idea. Evolutionary development is better characterised as a move away from the status quo, rather than a move towards anything (in hindsight, both will seem the same; to confuse them is a fundamental error).

Yet, while Arthur clearly recognises this, he continues to frame his explanatory theory in terms of “forward progress”, as if that is the “conundrum” to be solved. The thing is, even our traditional conception of it has this the wrong way round: “the invention of the jet engine” wasn’t what was going on; it was “the invention of a way to fly in thinner air”. The jet engine was the first solution arrived at that met that purpose (as, in a totally different context, Richard Susskind elegantly points out, when you shop for a Black & Decker, it isn’t a drill you want; it’s a hole). Technology (and science, and biology) isn’t an end, it’s a means. The more means you have, the more ends are available to you.

I had therefore wondered whether Arthur had missed a trick in his account of technology - the fact that any novel solution to an old problem creates new questions that we did not think - or need - to ask previously. But as his book closes and he views technology through the prism of the economy (on his theory the two are independent; the former is not merely the handmaiden of the latter), he nails this, too: 

“The coming of novel technologies does not just disrupt the status quo by finding new combinations that are better versions of the goods and methods we use. It sets up a train of technological accommodations and of new problems, and in doing so it creates new opportunity niches that call forth fresh combinations which in turn introduce further technologies - and further problems.” 

The implications of this are striking. They completely undermine the idea of technology as a “forwardly moving” phenomenon. It recalibrates to our changing needs and perceptions, just as we recalibrate to the changing perspectives and vistas it affords us. That is a million miles away from Ray Kurzweil’s carefully plotted (and in your correspondent’s opinion, absurd) logarithmic charts of technological progress that will see machines - and, on Kurzweil’s account, eventually the cosmos itself - “wake up”. 

Even if there were no other reasons (and there are many), one reason for favouring Arthur’s less ambitious (but actually more radical) view is its humanity. Arthur closes the book with a neat bit of lit crit: the forces of good and evil in Star Wars, he observes, can be differentiated by their relationship with technology: the Empire’s clinical, cold, efficient, androidal heartlessness against the temperamental, jury-rigged, cantankerous and fallible technology of the rebels. In one case technology is our weapon: it relies on us, on our skill, on our judgment and our humanity; we are the necessary homunculus. In the other, the humans are, more or less, the “necessary evil” - the impediment to the technology achieving its ends.
Recognising that the special sauce in technology is, for the time being at least, the bit supplied by the meatware, is a comforting thought.