Monday, 21 January 2013

The future's so bright I've got to wear VR goggles which help me empathise.

If your kids already spend eight hours a day online, the future depicted in Parag and Ayesha Khanna's Hybrid Reality might ring true for you. Others may be harder to persuade.
Hybrid Reality is the short monograph which, I suppose, serves as the flagship publication of Parag and Ayesha’s Hybrid Reality Institute, an organisation whose raison d’être seems to be the pursuit of unfettered wishful thinking about the potential of technology. Good luck to them: dreaming up wacky visions of the future does sound like fun, and while it’s hard to see any practical application for the Fortune 500 companies the authors claim as their clients, if they’ve managed to persuade these conglomerates otherwise, happy days. Especially if, in the future, everything is going to be crowd-sourced and free.

Hybrid Reality is thus an attempt to sketch out a future based on extrapolating current trends of technological development: a (thankfully slimmer) companion-piece to Ray Kurzweil’s The Singularity Is Near.

In fairness, Hybrid Reality quickly moves beyond stock platitudes about crowdsourcing, but when it does, it does so without much credibility. The text is plastered with buzzwords borrowed from other disciplines and deployed with carefree abandon:

accelerated evolution creates what we might call a Heisenbergian or quantum society: we are particles whose position, momentum and impact on others, and the impact of others on us, are perpetually uncertain due to constant technological disruptions.

Okayyy. Amongst the rhubarb there is a point to be made about rapidly disrupting technologies, but that’s not it. On the contrary, the rate of change is so fast that genuinely novel technologies and businesses have little chance to establish themselves, and those which do get a foothold do so as much by fiat as by sober business development, then proceed to hammer everyone else into the ground. In such a nasty, brutish and short environment, conditions favour not elegance and sophistication in design but the lowest common denominator.

Breathtaking technologies of the sort which overflow this book, on the other hand, assume a sophistication which needs a warm and safe environment in which to incubate. Increasingly, new technologies never get the chance to be smart. It isn’t accelerated evolution that’s going on, but accelerated extinction.


I suppose you might expect a degree of credulity from faculty members of the “Singularity University” but, still, their vision owes as much to science fiction as it does to academic analysis, and nothing at all to the traditional discipline of economics. Perhaps the dismal science, too, will succumb to the information revolution: cavalierly, Samuel Huntington’s maxim is reformulated so that it is not economics but technology that is “the most important source of power and wellbeing”. Older hands will recall hearing that kind of talk before, and it didn’t work out so well in the early 2000s, when hundreds of “new economy” business models folded once it turned out they did need to generate revenue after all.

It’s easy to be a naysayer, of course, but all the same my hunch is that the Khannas’ monologue has little value for anything but excitable kite flying. Many of their assertions strongly suggest this pair really, literally, need to get out more. “Of the eight hours a day children today spend online, 1.5 involve using avatars…” they say, as if that initial premise may be taken as a given. Eight hours a day online? Which children are these, exactly? “Robots are incontestably becoming more ubiquitous, intelligent and social” and “represent an entirely new type of ‘other’ that we interact with in our social lives”. Elsewhere, “Technik”, as they put it, seems to have the power to change the laws of nature, and in the short term: “The average British citizen will likely live to be 100 years old”, they predict. Technik is so clever it can even grant us powers which we already have: In the future there will be virtual reality goggles, we are told, which can “sense other people’s stress levels”. Just imagine being able to do that.


You can, in any case, read your fill here of all the ways the internet of things will provide an untold wealth of cool free stuff, but note the lack of any financial analysis. All this cool stuff requires effort: not just to design and conceptualise, but to manufacture, distribute, house, power, maintain and (to the extent it can’t be fully computerised) operate. And effort, generally, requires money. Previous generations of technological development have shifted the labour demand curve upwards: automation has taken out repetitive, low-value tasks but created more complex ones designing, building and maintaining the machinery to carry out those tasks. As a result we have grown busier with each development, not more idle, though our occupations have become more complex, challenging and rewarding. The Khannas’ brave new world would, by implication, flip that on its head.

For argument’s sake, let’s say the robots can fully take over, perform our manual labour, wipe bottoms, cure diseases and revolutionise production across all industry and agriculture so that human intervention is not required at all. Hard to see, but let’s say. Is a permanent state of blissful, but chronic, total global unemployment a feasible basis for an economy?

As far as I know, man cannot live by Facebook likes alone. Last time I checked, rent wasn’t free. Nor were power, food or raw materials. As we go on, they’re getting harder (and costlier) to extract. So who will finance these lives of leisure? With what? Why? Who would provide services, when there was no-one to pay for them? Is it perhaps the case that personal labour, rather than being an unfortunate by-product of the “old economy” way of doing things, is in fact an immutable term in the calculus of value?

Dreaming about amazing technologies which might be coming down the pike is the job of a science fiction writer. The academic question is less glamorous and more fundamental: how, within the new parameters of the digital commons and in a post-growth world, can anyone devise a business model able to deliver them? These, it seems to me, are the really challenging questions, and you won’t find them addressed in this book.

Saturday, 12 January 2013

Occam’s Razorburn


Stephen Hawking’s latest book raises far more questions than it answers. Such as, why hasn’t he been reading Thomas Kuhn, and what really is the benefit of unifying theories which don't seem to need unification?

In which we meet yet another first-class scientist who wishes to self-identify as a second-class philosopher and a comedian from the back end of steerage.

Since few will buy The Grand Design for its wit we can forgive Stephen Hawking's appalling attempts to be funny, but it's not so easy to forgive his philosophical ignorance. Certain physical scientists might be better off unacquainted with the modern philosophy of science (though those who know it possess a welcome sense of perspective and humility). But not world-renowned cosmologists. Their field continually bumps up against the boundary of what science even is (and it doesn't have a "no-boundary condition", whatever that might be).

So when Stephen Hawking claims that "philosophy has not kept up with modern science, especially physics" it suggests not only a lack of perspective and humility, but that Hawking has been skipping some required reading. Especially since, having written off the discipline, Hawking seems barely acquainted with it: he mentions few philosophers more recent than René Descartes (d. 1650), so it is hard to know who he thinks hasn't kept up.

Particularly when Hawking's first grand pronouncement is "model-dependent realism": the idea that there may be alternative ways to model the same physical situation with fundamentally different elements and concepts. "If [such different models] accurately predict the same events, one cannot be said to be more real than the other." Physics has, apparently, been forced into this gambit following recent failures to get unifying calculations to work themselves out. In any case it isn't quite the neat trick Hawking thinks it is.

Firstly, while model-dependent realism might be news to Stephen Hawking (he seems to think it the fruit of modern physics' womb), the philosophers he hasn't been reading have been talking about it for years, if not centuries, to the constant sound of scientists' excoriations. It is even part of Descartes' philosophical fabric (and, more tellingly, Darwin's, but picking a fight with modern evolutionists, while fun, is a story for another day). That is to say, it sounds like it is the physicists who are finally catching up with the philosophers and not the other way around.

Secondly, in the grand game of philosophers' football that cosmology has become, the model-dependent realism play is something of a surrender before kick-off. For if it is true that the same phenomenon can be plausibly accounted for in multiple, "incommensurable" (© Thomas Kuhn) ways, then the hard question is not about the truth in itself of any model, but the criteria for determining which of the (potentially infinite) models available we should choose in the first place.

This question is not one for physics, but metaphysics. It necessarily exists outside any given model (© Paul Feyerabend). Here we meet our old friend, Occam's Razor. This isn't a scientific principle at all, but a pragmatic rule of thumb with no intellectual pedigree: all else being equal, take the simplest explanation. Occam's Razor is a favourite instrument for the torture of hapless Christians by grumpy biologists: all your tricksy afterlife wagers and so on fail because evolution is so much less complicated, and has so much more explanatory value, than the idea of an omniscient, intangible, invisible, omnipotent entity pulling strings we can't see to make the whole thing go.

But, alas, in seeking a grand unification of things that really aren't asking to be unified, cosmology reveals some almighty snags. Unification under Hawking's programme, if it is even possible, involves slaughtering some big old sacred cows. To name a few: causality; the conventional conception of space-time; the idea that scientific theories should be based on observable data and that their outcomes be testable. It bows to some truly heinous false idols too. For example: seven invisible space-time dimensions, a huge mass of invisible dark matter, an arbitrary cosmological constant, and a potentially infinite array of unobservable universes which wink in and out of existence courtesy of a mathematically inferred "vacuum energy". Hawking doesn't propose solutions to these problems, but seems to think they're a fair price for achieving grand unification.

I'm not so sure: other than intellectual bragging rights, the resulting unified theory has no obvious marginal utility. And it has political drawbacks: believing one's model to be the truth carries potentially unpleasant implications for the suppression of those who don't share that belief.

There are practical drawbacks, too. We are asked to reject existing theories, which still have quite a lot of utility, in favour of something that is infinitely harder to understand and work with. The accelerating expansion of the universe without any apparent acting force seems to violate Newton's second law of motion. Without an outrageous end-run, the first nanosecond of the Big Bang (wherein the universe is obliged to expand in size by ten squillion kilometres - i.e. far faster than the speed of light) seems to violate the fundamentals of general relativity. String theory requires seven necessarily unobservable space-time dimensions and/or entirely different universes, and even then doesn't yield a single theory but millions of the blighters, all slightly inconsistent with each other (hence the appeal to "model-dependent realism").

From the camp which wielded Occam's Razor so heartily against the Christians, this seems a bit rich. If these are the options, then the razor might slice in favour of the big guy with the beard.

But these aren't the options. We could save a lot of angst, and perhaps could have avoided digging a multi-billion-dollar circular tunnel under Geneva, had we employed model-dependent realism the way the philosophers saw it and not the scientists (and shouldn't we call a spade a spade and label it cognitive relativism, by the way?). Since it crossed the event horizon of observability, modern cosmology has become arcane stunt-mathematics. If there were a chance that it might deliver time-travel, hyperspace or a tool for locating wormholes to other galaxies or universes then one could see the point in this intellectual onanism. But none of that seems to be allowed. We should therefore ask: "but why? What's the point? What progress do you promise that we can't achieve some other way?" No one seems to be able to answer that question.

But if we park that question, what's left of Stephen Hawking's latest book is some pretty ropey jokes.

Friday, 24 August 2012

Enterprise 2.0: New Collaborative Tools for Your Organization's Toughest Challenges: Andrew McAfee

Why has the Facebook revolution made nary a dent in corporate culture, and how might that change? Andrew McAfee has some not-so-starry-eyed answers.
 
I have long been entranced by the potential of the collaborative internet and have, as a result, been trying my darndest to evangelise its benefits in my professional life - no small challenge, involving as it does a bunch of lawyers inhabiting the more cobwebbed crannies in the infrastructure of a bank. To that end I've set up wikis, libraries, discussion forums and SharePoint sites, all, for the most part, to no avail. Old habits die hard in any circumstance, but amongst moribund lawyers they live on like zombies.

In recent times I have taken to trying to understand, or at any rate deduce, whether it is simply a flaw in the design of our particular distributed system, or more a problem of the psychological configuration of the communal working environment, or some unholy, un-dead combination of the two, which renders my efforts barren. Given that my current place of toil is basically one gigantic supercomputer - part human, part machine and therefore, you would think, ripe for the benefits enterprise collaboration can bring - it is frustrating, to say the least, to discover how immune it appears to be to those very charms.

In my studies I have consulted learned (and excellent) theoretical volumes like Lawrence Lessig's Code: Version 2.0 and Yochai Benkler's The Wealth of Networks: How Social Production Transforms Markets and Freedom, and populist ones like Chris Anderson's The Long Tail and Don Tapscott's Wikinomics: How Mass Collaboration Changes Everything, and all tell me, with varying degrees of erudition and insight, that the new world order is at hand.

Except, for all my efforts and enthusiasm, it isn't. Of the 800-odd articles in our wiki, I have personally authored about 90 percent, in their entirety. I can't persuade anyone but me to use a discussion board (discussing things with myself palls after a while), and while SharePoint has been taken up with some gusto, it has invariably been taken up stupidly, without thought for the collaborative opportunities it offers. Everyone sets up their own SharePoint site, protects it like a fiefdom, and ignores all the others.

I have been looking for the book that explains these challenges of the new world order and how this entropy can be fought. Andrew McAfee's Enterprise 2.0 might just be that book.

McAfee is a believer, and a convert from a position of scepticism, but, unlike (for example) Chris Anderson, he is not so starry-eyed that he can't apprehend the challenges presented. He takes us through four case studies (all thrillingly on point for me!) of business executives trying, and struggling, to collaborate using existing tools. He then maps these efforts (namely, technological solutions) onto his own sociological analysis, which differentiates groups by the strength of the existing ties between the individuals seeking to connect: there are strong bonds (as between direct colleagues in a geographically centralised team), weaker bonds (as between fellow employees of a wider organisation) and, right out at the limit, no particular bonds at all - the Wikipedia example. Different types of emergent social software platforms (ESSPs) work better for different types of community bond. McAfee also deals with the "long haul" challenge, acknowledging that, particularly where there is an "endowment" collaboration system to overcome (email being the most obvious), or where the collaborative opportunity is "above the flow" rather than in it (i.e., collaboration is a voluntary action completed after the "compulsory" work is done), the change in behaviour will take a long time, so stick with it (encouraging stuff for this lone wiki collaborator!).

Ultimately McAfee doesn't have the answers - nor should we expect him to - but his analysis is thoughtful, credible (as opposed to the more frequent "credulous") and optimistic: Enterprise 2.0 needs evangelists and "prime movers" who are engaged and prepared to stick with it. That makes this a well-recommended volume for those wanting a practical view of the enterprise benefits of social networking and Web 2.0.

Wednesday, 15 August 2012

Film Shorts: Alien (1979)

After Ridley Scott's widely anticipated, commonly disappointing Prometheus, it was interesting to go back to the original Alien, a film I remember being utterly terrified by when I first saw it.

The "Space Jockey" and H. R. Giger's wondrous machine-organic phallic designs get about ten minutes' airtime. They are not explained. Giger's hot, steaming, streaming dirty evolution is echoed in the wet and grimy bronze interiors of the Nostromo. Once aboard the ship, the film is an out-and-out thriller - there's a brief dalliance with HAL-style computer malevolence in the form of Ash, but otherwise the film has no intellectual pretensions.

Would that it were so for Prometheus, which instead makes the same mistake The Matrix Reloaded made after The Matrix: rather than simply hinting ambiguously at great profundity, letting an awestruck audience infer a grand metaphorical scheme - a magician's trick of misdirection - it tries to hit you between the eyes with it. To carry that off you do need real, well, profundity.

Sunday, 6 May 2012

Film Shorts: Melancholia (2011)

This handsome film opens with a wondrous series of slow-motion tableaux, but is quickly bogged down thereafter by a long set piece at the most fractious wedding reception ever caught on celluloid.

Kirsten Dunst and Charlotte Gainsbourg play polar-opposite sisters, Dunst the melancholic, Gainsbourg the neurotic, and neither given to doing anything cheerily or with any great velocity. John Hurt, Kiefer Sutherland and Alexander Skarsgård lend weight but not likeability to the opening act. The second part of the movie takes an apocalyptic turn and, ironically, improves in mood, but the whole thing was immeasurably enhanced, I found, by watching at 1.5x speed on the Blu-ray player. I love how technology puts power in the hands of the viewer like that.

Monday, 23 April 2012

Worthless Aspirational Quotes: Aiming for the Moon to Get to the Top of the Tree

"If you aim for the moon you may get to the top of the trees," a witless motivational speaker may tell you, "but if you aim for the top of the trees, you may never get off the ground!"  Don't, in otherwords, choose realistic or sensible goals, and that way your life will work out exactly as you wish!



No. If you are aiming for the moon, and you're serious about it, you'll spend the rest of your life, frustrated and alone, in your basement. Best-case scenario, you'll go and work for NASA, and even then you most likely won't make it - not even NASA has been there in 40 years. Okay, Richard Branson might be a better bet. Whatever: it won't help you get to the top of a tree.

If you want to get to the top of a tree, climb it, my son, and stop listening to motivational speakers.

You will achieve almost nothing in your life consequent on an outrageous, long-odds punt. Even those few freak Outliers who seem to have done so (Elvis, Bill Gates and so on) most likely weren't dreaming of anything of the sort when they started out.

Instead, meander through the evolutionary design space of your life like the rest of us do, as pygmies in a field of high grass, purblindly stumbling hither and thither, attracted intuitively to what you like and repelled by what you don't. Continually set sensible, achievable goals, achieve them, and after a while you might think it isn't such a miserable existence on mother earth after all.

Friday, 3 February 2012

The End – or the Start – of Ignorance

 
E.O. Wilson is just the latest biologist to try turning the base metal of scientific induction into the spun gold of existential truth. What is the allure of religious certainty for these folks, and why can’t they heed the lessons of their own discipline?


I’ve made the observation before that scientists - especially biologists - make lousy philosophers, and it doesn’t take long for Professor E. O. Wilson - one of evolutionary biology’s most prominent lights - to place himself squarely in that camp.

“No one should suppose,” he asserts, “that objective truth is impossible to attain, even when the most committed philosophers urge us to acknowledge that incapacity. In particular it is too early for scientists, the foot soldiers of epistemology, to yield ground so vital to their mission. ... No intellectual vision is more important and daunting than that of objective truth based on scientific understanding.”

On the other hand, not long afterwards, apparently without intending the irony with which the statement overflows, he says, “People are innate romantics, they desperately need myth and dogma.”

None more so, it would seem, than philosophising evolutionary biologists.

Wilson’s Consilience is a long essay on objective truth that - per the above quotation - gratuitously misunderstands what epistemology even is, whilst at the same time failing to mention (except in passing) any of its most important contributors: the likes of Wittgenstein, Kuhn, Quine, Rorty or even dear old Popper. Instead, Wilson characterises objections to his extreme reductionism as “leftist” thought, including - and I quote - “Afrocentrism, ‘critical’ (i.e., socialist) science, deep ecology, ecofeminism, Lacanian psychoanalysis, Latourian sociology of science and neo-Marxism.”

Ad hominem derision is about the level of engagement you’ll get, and the only concession - a self-styled “salute” to the postmodernists - is “their ideas are like sparks from firework explosions that travel away in all directions, devoid of following energy, soon to wink out in the dimensionless dark. Yet a few will endure long enough to cast light on unexpected subjects.” You could formulate a more patronising disposition, I suppose, but it would take some work.


What is extraordinary is that, of all scientists, a biologist should be so insensitive to the contingency of knowledge, since this is the exact lesson evolutionary theory teaches: it’s not the perfect solution that survives, but the most effective one. There is no “ideal organism”.

In support of his own case, Wilson refers at some length to the chimerical nature of consciousness (taking Daniel Dennett’s not uncontroversial account more or less as read). But there is a direct analogy here: Dennett’s model of consciousness stands in the same relation to the material brain as Wilson’s consilience stands to the physical universe. Dennett says consciousness is an illusion - a trick of the mind, if you like (and rather wilfully double-parks the difficult question “a trick on whom?”).

But by extension, could not consilience also be a trick of the mind? Things look like they’re ordered, consistent and universal because that’s how we’re wired to see them. Our evolutionary development (fully contingent and path-dependent, as even Wilson would agree) has built a sensory apparatus which filters the information in the world in an ever-more effective way. That’s the clever trick of evolutionary development. If it is of adaptive benefit to apprehend “the world” as a consistent, coherent whole, then as long as that coherent whole accounts effectively for our physiologically meaningful experiences, its relation to “the truth” is really beside the point.

When I run to catch a cricket ball on the boundary, no part of my brain solves differential equations to do it (I don’t have nearly enough information to do that), and no immutable, unseen cosmic machine calculates those equations to plot its trajectory either. Our mathematical model is a clever proxy, and we shouldn’t be blinded by its elegance or apparent accuracy (though, in point of fact, it isn’t practically that accurate) into assuming it somehow reveals an ineffable truth. This isn’t a new or especially controversial objection, by the way: it was one of David Hume’s main insights - an Enlightenment piece of enlightenment, if you will. As a matter of logic, there must be alternative ways of describing the same phenomena, and if you allow yourself to apply different rules to solve the puzzle, the set of coherent alternative solutions is infinite.
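(To be concrete about the model in question - a minimal sketch, assuming the textbook idealisation of a ball in flight with no air resistance - the trajectory the fielder is supposedly computing is just

$$x(t) = v_0 \cos\theta \, t, \qquad y(t) = v_0 \sin\theta \, t - \tfrac{1}{2} g t^2,$$

where $v_0$ is the launch speed, $\theta$ the launch angle and $g$ the acceleration due to gravity - none of which the fielder measures, let alone plugs in.)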


So our self-congratulation at the cleverness of the model we have arrived at (and, sure, it is very clever) shouldn’t be overdone. It isn’t the “truth” - it’s an effective proxy, and there is a world of difference between the two. And there are uncomfortable consequences of taking the apparently harmless step of conflating them.

For one thing, “consilience” tends to dissuade inquiry: if we believe we have settled on an ineffable truth, then further discussion can only confuse and endanger our grip on it. It also gives us immutable grounds for arbitrating against those who hold an “incorrect” view. That is, to hold forth a theory which is inconsistent with the mainstream “consiliated” view is wasteful and, given it has the potential to lead us away from the “true” path, may legitimately be suppressed.

You can see this style of reasoning being employed by two groups already: militant religious fundamentalists, and militant atheists. Neither is prepared to countenance the pluralistic, pragmatic (and blindingly obvious) view that there are not just many different *ways* of looking at the world but many different *reasons* for doing so, and each has its own satisfaction criteria. While these opposing fundamentalists go hammer and tongs against each other, their similarities are greater than their differences, and their greatest similarity is that neither fully comprehends, and as a consequence neither takes seriously, the challenge of the “postmodern” strands of thought against which they’re aligned.

Hence, someone like Wilson can have the hubris to say things like: “Yet I think it is fair to say that enough is known to justify confidence in the principle of universal rational consilience across all the natural sciences”.

Try telling that to Kurt Gödel or Bertrand Russell, let alone Richard Rorty or Jacques Derrida.