(3/power in labyrinth -- LH/sunwise move)
In which we use the obvious way to take control of what's going on -- but somehow succeed only in making matters worse
When faced with something new -- particularly something we want to control, like a dart or a juggling-ball -- we take the obvious route: we analyse it.
We observe; we try to make sense of what we see. And we go looking for patterns, so that we can say "It does that because...". More specifically, we go looking for patterns that seem to repeat in time: How does it work? What makes it do that? What would it do if it was heavier? What happens if we put feathers on it? And so on, and on, and on... always trying to break what we see into smaller and smaller patterns so that we can make some kind of sense out of them. If we find a pattern that repeats predictably enough, we call it a 'law' of cause and effect -- the earlier part of the pattern being called 'cause' and the later part being called 'effect'. And we then build up whole chains of these laws -- chains of cause and effect -- so as to build up a complete model and description of the processes with which we're working: one that explains, to our satisfaction, what seems to be going on, and which should give us a reasonable degree of control (predictability, in other words) over the sequence of events.
Think about juggling, for example. We know the forces involved: your hand pushing the ball up, and gravity pulling it back down. It's easy enough to calculate the ideal trajectory the ball should follow; easy enough to calculate the exact moment when you should throw the next ball up into the air; easy enough to calculate the timing relationships needed to get your hands into a smooth rhythm of throw and catch, throw and catch. In principle, that's all we need to know.
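The arithmetic really is that simple. As a sketch (in Python, using a point-mass model that ignores air resistance; the 'dwell' time each ball rests in the hand is a guessed figure, not a measured one):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def flight_time(height):
    """Time a ball spends in the air when thrown to a given peak height (m)."""
    # Time to rise to the peak is sqrt(2h/g); the fall takes the same time again.
    return 2.0 * math.sqrt(2.0 * height / G)

def throw_interval(height, balls=3, dwell=0.3):
    """Interval between successive throws in a steady cascade.

    Each ball's full cycle (air time plus time held in the hand) is shared
    equally among all the balls in the pattern."""
    return (flight_time(height) + dwell) / balls

print(round(flight_time(0.5), 3))     # a half-metre throw: about 0.64 s in the air
print(round(throw_interval(0.5), 3))  # a throw needed roughly every 0.31 s
```

Everything the analysis promises is here: exact flight times, exact throw intervals. What it cannot provide, as the rest of this chapter shows, is the doing.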
Or think about the inner clock. For it to point out the time, your hand moves it; the hand is moved by muscles, the muscles are triggered by nerves, the nervous impulse triggered by a reflex response of some kind. That much is easy to determine: beyond the reflex it's a little difficult to determine exactly what's going on, since it's clear that there's no exact physiological analogue of a clock within us. We could try out a few experiments, to search for a vestigial magnetic sense like that of homing pigeons; but none of the simple mechanisms, like heartbeat, would work, since they vary a fair amount over time.
But strangely enough, all this analysis doesn't seem to help us. We can see, without question, what the perfect trajectory of a juggling ball should be, what the ideal timing of the hand movements should be: but despite knowing this, we still can't do it. Knowing that the inner clock is some kind of reflex response doesn't help much in making it reliable -- or even getting it to work at all.
Somehow, it seems that analysis works better for machines than for people. By comparison, it's relatively simple to teach a computer to 'juggle' with the equivalent patterns of electrons on the screen; almost easy to teach a robot to swap balls from one mechanical arm to another. And it's commonplace to fit a machine with an electronic clock, perhaps backed up by a traditional mechanical device or even a radio link to a standard time signal. And yet, when something goes wrong -- if the robot's arm gets jogged or the clock fails -- we're the ones who have to show the machines what to do next. Somehow, we can deal with the vagaries of the real world when the machines can't.
On the surface, analysis ought to be all we need -- especially if we can find a way to re-apply it back in the real world, as applied science. In addition to our own observations, we use other people's hard work from the past, codified into what we now call 'laws of nature', to make predictions -- usually very good predictions -- about most events in the present and (at least where technology is concerned) in the near future. Once we know what will happen, we can control how it happens.

It seems obvious that that's the way it ought to work. But somehow, it doesn't quite work well enough. It works better on machines than it does with people. It works most of the time: but not all of the time. Occasionally -- especially when we're certain we have something completely under control -- there's a subtle twist: and without warning, our control fails miserably, often with disastrous results. We're evidently getting caught in some obvious mistake. It happens in every skill, in every technology, regardless of how good our analysis may appear to be. But what is it that goes wrong? And why do we fall for it, again and again and again?
Analysing analysis

To make sense of what is -- and isn't -- going on, we need to apply analysis to analysis itself: to look more closely at what it is, and isn't. And in the process we discover that, despite what we might hope, analysis does have severe limitations when applied in the real world. We can't rely on analysis alone when dealing with reality.
So -- back to first principles: what is analysis?
In essence, it's a way of isolating out segments of reality, breaking down patterns that we perceive into smaller parts that we can handle. We make repeated observations of repeating patterns and then, in what's called the 'hypothetico-deductive method', we try to build some kind of model -- a resolvable, and preferably linear, equation or set of equations, for example -- to describe the changes in those patterns of events. On the basis of that model, we try to reconstruct the same pattern of events: from the assumptions of the model, we deduce the likely performance. "Given this... and this... and this... then that must be so" -- that's the principle. If we get the same pattern consistently, we consider that we've found some kind of rule by which reality operates. Knowing that rule, we then have a means of predicting the progression of events, so that we can control that segment of the overall pattern.
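The cycle of observe, hypothesise, deduce, verify can be caricatured in a few lines of code. This is a toy illustration (the data are idealised, invented figures for a falling object, not anything from the text): we observe a repeating pattern, propose a rule, and check that the rule reconstructs the observations.

```python
# Observations: how far an object has fallen at successive times (idealised data).
times = [0.0, 0.1, 0.2, 0.3, 0.4]          # seconds
positions = [0.0, 0.049, 0.196, 0.441, 0.784]  # metres fallen

# Hypothesis: position = k * t**2 for some constant k.
# Deduce k from one observation, then test the deduction against all of them.
k = positions[-1] / times[-1] ** 2
predictions = [k * t ** 2 for t in times]

errors = [abs(p, ) if False else abs(p - q) for p, q in zip(positions, predictions)]
print(k)            # about 4.9 -- half of g, the classical 'law' of falling bodies
print(max(errors))  # the rule reproduces every observation
```

"Given this... and this... then that must be so": within the model, prediction is perfect. The chapter's argument is about what happens when the data stop being this tidy.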
To apply the knowledge we gain through analysis, we assume that reality is only the sum of its parts: all we have to do to control some aspect of the real world is put the parts back together again using the same rules, but this time under our control. If we don't get the results we want, clearly our analysis was not complete enough, so we need to slice reality into even smaller parts -- a process that has now gone way beyond the atom, the original 'indivisible particle' of reality.
In order for analysis to be used in this way, we have to make a few other assumptions -- all of which are central to the common conception of science. For example, it's typically stated that there are in effect only four forces in the universe -- weak nuclear, strong nuclear, gravitational and electromagnetic -- and that all interactions between them must have a demonstrable causal connection. Since everything is causal, the whole of reality must be reducible to a single consistent system of order. Simple systems follow simple rules; complex behaviour implies complex causes, even perhaps including random 'noise', but always still ultimately causal; and different systems behave differently in a qualitative way, requiring us to develop different specialisations to study them. Although complex phenomena have complex causes, small influences -- noise -- can be all but ignored: close approximations are good enough for all practical purposes. And where apparent randomness occurs in some phenomena -- some aspects of weather, for example -- it can be replaced in the equations by an average or some other statistical value.
All of these are assumptions, but they all seem reasonable enough -- they're intuitive, in the sense of 'obvious', of 'common sense'. If we accept them as such, then it's also obvious that everything we deal with in the real world is either already predictable, or soon will be if we put a bit more effort and perhaps a bit more computer power into what should be the final analysis of the last few pieces of the puzzle. A few scientists, particularly in areas such as nuclear physics and cosmology, have even talked about what seems to them to be the fast-approaching completion of the great enterprise of science -- such as the cosmologist Stephen Hawking, in his lecture "Is The End In Sight For Theoretical Physics?":
We already know the theoretical laws that govern everything we experience in everyday life... It is a tribute to how far we have come in theoretical physics that it now takes enormous machines and a great deal of money to perform an experiment whose outcome we cannot predict. 
Hence a common worldview we could call scientism, a public version of science which is actively promoted at almost every level of our culture from kindergarten onwards. To scientism, 'things have to be seen to be believed': so science explains everything for us. In fact science already has explained everything: everything is reduced to a single consistent system of order, we now know how everything works. Science is packaged rationality: objective, impartial, unchanging. (And it defends us from the uncertainties of subjectivity and irrationality: if it isn't explained by science, it either doesn't exist, or we can ignore it, or both.)
We can relax, we don't need to think any more, it's all in the textbooks. All we have to do is apply the various laws of this-that-and-the-other that science has given us, and we'll be in total control. Things only ever go wrong because we don't analyse them properly. A few intuitive people can take a few short cuts -- intuition is just fast analysis -- but with the level of understanding that science now gives us, we can teach any idiot to do just about anything.
The credo of scientism: All the laws are known: so analysis is all we need
We know the laws of nature: all we need after that is analysis, to reduce the problems we face to manageable proportions, and then build them back up again under our control. Control is what we're after: and that, with the help of these absolute laws of nature, is what we get.
That, at least, is what we're taught in school; that's the basis of almost all formal education in science and technology. So it's more than a little unfortunate that, for most practical purposes, scientism's worldview is disastrously, dangerously wrong. We need to dispose of the limitations of scientism, and the subtle shackles that it places on our thinking, before we can move on in the labyrinth.
Scientism is not science

Scientism is, in essence, an attitude: that the universe is ordered, that everything that happens within it is deterministic in a way that we can understand only through formal rationality, and that anything that deviates from that order is wrong -- we could almost call it evil. This attitude is not so much based on science as on a system of belief -- almost of religious faith -- whose roots go back to a muddled mixture of nineteenth-century materialism and mediaeval theology. Nineteenth-century politics depended on a worldview of a clockwork, Newtonian universe, everything neatly layered in hierarchies of order, its purposes always equated with those of God himself. Science's laws assumed the infallibility of Biblical law: in science as in theology, the ultimate quest was for logical consistency, the search for God's own plan. The assumption that the universe must have a logical plan to it was never questioned, but accepted as a matter of faith -- the only permissible form of intuition, here in the sense of 'received truth'. In fact to question any of the assumptions of the period would have been considered a matter of heresy.
It's in that mode that scientism has stayed: a safe, comfortable, certain worldview, with all the authority of God-given law. Science, however, has moved on: but as a result it is now, quite literally, in a state of chaos. It's important to understand that every one of those assumptions on which it's historically been based, as described earlier, has turned out either to be false, or incomplete, or subject to 'infinite regress', endlessly referring back to themselves. Courtesy of three major conceptual shifts in science -- relativity, quantum physics and chaos theory -- the whole concept of law and order in the universe has come apart at the seams.
As an example, look at one simple question that dominated eighteenth- and nineteenth-century science:
Is light waves or particles?
For the Newtonian concept of order to hold, it had to be one or the other: they're mutually exclusive, since it's logically impossible to have a wave of one particle. Quantum physics provided a way to resolve the question with the answer "Yes": both -- therefore, strictly speaking, neither. But in the process, it introduced paradox into the closed system of scientism's logic: thus, logically speaking, collapsing its claim to absolute consistency. Since the inception of relativity and quantum physics, almost a century ago, any belief in a strictly deterministic worldview has been just that -- a belief, a matter of faith, no longer science.
More recently, chaos theory has demolished what little was left. The traditional concept of order depended on laws that could be resolved with simple equations, but that has now gone: far from being the norm, laws with equations that can actually be resolved in a linear fashion -- or even resolved at all -- turn out to be the exceptions. Simple systems can be infinitely complex; complex systems can be driven by simple yet impossibly strange, predictably unpredictable 'attractors' that are stable in their chaos; and many real-world systems, such as weather, are infinitely sensitive to noise, its effects cascading upward in ways we can never predict.
There is still order, of a kind: but it's very different from the old deterministic model. As Douglas Hofstadter put it, "it turns out that an eerie type of chaos can lurk just behind a facade of order -- and yet, deep inside the chaos lurks an even eerier type of order". Faced with an infinity of causes, nothing can ever be truly predictable; and all cause-effect chains ultimately end with something we merely label -- such as 'gravity' or 'magnetism' -- rather than describe. Under those conditions, the simplistic concept of causality promoted by scientism is more likely to be a dangerous hindrance than a help in working in the real world.
Hawking's bald statement that "we already know the theoretical laws that govern everything we experience in everyday life" turns out to be hopelessly optimistic, even arrogant. It might just be true at the level of particle physics, but by the time we reach the scale we actually experience in everyday life, things are very different. Predicting the behaviour of a handful of Hawking's elementary particles is almost trivial compared to analysing the exact antics of the millions of molecules in a perfectly ordinary drop of water -- even the most powerful of present-day supercomputers can't even begin to cope with the task. We can no longer rely on the short-cut of calculating a crude statistical average: as chaos theory makes clear, the behaviour of the whole is infinitely sensitive to the behaviour of every single part.
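That infinite sensitivity can be shown with the logistic map -- a stock example from chaos theory, not one discussed in the text itself: a one-line rule whose behaviour depends wildly on its starting point.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4 puts it in its chaotic regime."""
    return r * x * (1.0 - x)

a, b = 0.300000, 0.300001   # two starting points one part in a million apart
diffs = []
for _ in range(25):
    a, b = logistic(a), logistic(b)
    diffs.append(abs(a - b))

print(diffs[0])    # after one step, the gap is still tiny
print(max(diffs))  # within a couple of dozen steps it is of order 1
```

A difference far below any plausible measurement error grows, in a handful of iterations, to the full size of the system: no statistical average, and no amount of extra computing power, will recover the lost prediction.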
Science, with its belated acceptance of the implications of chaos, has finally rejoined the real world. Nothing is certain (except, as the old phrase goes, death and taxes), nothing ever truly repeats -- yet the scientific concept of law depends on certainty and repeatability to be able to make any kind of precise prediction. So nothing is truly predictable in an absolute sense: and no amount of analysis, no amount of computing power, no amount of expenditure on Hawking's 'enormous machines', is likely to change that. But because of what we're taught and the way that we're taught, we expect the laws to be absolute laws: and the confusion causes us endless trouble. Being stuck, unable to get something to work the way we want it to, isn't a state that an evil world inflicts on us: it's something we inflict on ourselves, with our own expectations of certainty.
At first sight it's perhaps a little hard to accept. But it happens to be true, as any practising engineer, however grudgingly, would have to admit. What we call, for convenience, 'laws' are only abstractions from reality, not the real thing: conceptual tools that are summaries, simplifications, shorthand guidelines that can describe much, but never all, of what are always immensely complex processes. There can sometimes be vast differences between the expectations of theory and the realities of practice: and we have to be able to tell the difference. For example, as Jim Williams, an electronics designer, put it in an article on computer-aided design tools, titled "Should Ohm's Law Be Repealed?":
Don't confuse a tool, even a very good one, with knowledge... When you substitute faith in an instrument, no matter how good it is, for your judgement, you're in trouble. [The claims made for CAD tools] are arrogant because in their determination to streamline technology they simplify it, and Mother Nature loves to throw a surprise party.
In essence, scientism uses faith in what purport to be absolute laws as a substitute for personal judgement. If the word of science is law, then no judgements need to be made: it's all pre-ordained. Faith can be useful: we need faith in ourselves and our abilities if we're to learn any new skill. But it's used as a way of avoiding the whole messy business of having to develop personal judgement.
Having fixed rules makes what should be education -- literally the 'out-leading' of new skills -- a simple crude matter of training. True education is hard, since it's different for everyone; by comparison, training is easy -- you don't need to think, in fact shouldn't think, but just follow the rules. That's precisely why schools and colleges promote scientism's attitude, since it makes their job much easier: easy to teach, easy to examine, all with the authority of God-given truth.
Another aspect is that the scientific tradition has always taken rather too literally Plato's statement that "reality cannot be invented, but only discovered through pure reason": so anything other than the strictest reasoning would be considered suspect, subjective, a serious failing -- certainly not publishable. In paper after paper and textbook after textbook, we've been shown what the authors have discovered -- or more accurately, what they think they've found -- but not how they found it. This comment, written by the physicist Hermann von Helmholtz at the end of the nineteenth century, is particularly revealing:
I am fain to compare myself with a wanderer on the mountains who, not knowing the path, climbs slowly and painfully upwards and often has to retrace his steps before he can go further -- then, whether by taking thought or from luck, discovers a new track that leads him on a little [until] at length when he reaches the summit he finds to his shame that there is a royal road, by which he might have ascended, had he only the wits to find the right approach to it. In my works, I naturally said nothing about my mistake, but only described the made track by which he may now reach the same heights without difficulty.
The result looks much neater in the textbook; it also keeps academics happy, because it gives them a tidy logic to play with. But when we're looking, we don't have "the wits to find the right approach": that's the whole problem. The result of this habitual dishonesty -- "I naturally said nothing about my mistake" -- is that hardly any of the textbooks are much use for showing us the real-world process of discovery. All we find in them is analysis, structure, logic: and not what we actually need to understand, which is the processes that can lead us to discover a new track -- "whether by taking thought or from luck". In other words, as Jim Williams wrote, later in that article above, what we actually need to know about are "the judgemental, inspirational, and even accidental processes that constitute much of engineering". To develop that awareness, we need to know the actual paths -- or, more to the point, the process -- by which people arrived at their discoveries: not merely the sanitised version presented for public consumption.
Another problem is that this 'scientific style' gives us a completely backwards view of how our understanding of the world develops. Helmholtz could only find his 'royal road' by looking back from where he found himself: but he then presents it as something obvious, a simplified theory which allows "[the reader] to reach the same heights without difficulty". This leads us straight to scientism's basic principle of 'theory first, then practice': everything has to fit the theory, because 'reality can... only be discovered through pure reason'. Practice in technology derives from theory: technology is science applied, theory applied. So all developments in technology, we're told, derive only from the outward expansion of knowledge, the progress of science, using analysis alone to fill in the gaps in the laws that govern every interaction of reality. We're shown a model something like this:
[[RESERVE 6 lines]]
[[CAPTION 'The progress of science': outward expansion of analysis]]
According to this model, we always reach outward with reason: pure reason alone tells us which trails are dead-ends, and which ones will develop further. But even in science this simply isn't true: Helmholtz admits above that the process is anything but straightforward, dependent on luck as much as anything else. And Beveridge, in The Art of Scientific Investigation, is rather more blunt:
The origin of discoveries is beyond the reach of reason. The role of reason in research is not hitting on discoveries -- either factual or theoretical -- but verifying, interpreting and developing them and building a general theoretical scheme. Most biological "facts" and theories are only true under certain conditions and our knowledge is so incomplete that at best we can only reason on probabilities and possibilities. 
Science -- literally 'knowledge' -- is concerned with invariant truth: that which can be shown to be constant, which we place in the general category of 'facts'. But analysis (science traditionally equates the term with reason) can only demonstrate an item of knowledge to be 'fact' by linking it to something else that's known (or at least assumed) to be true. In "building a general theoretical scheme", all these references are linked backwards in a chain to certain fundamental assumptions, the foundations of the science. That's why science's analysis is so good at hindsight -- but it's of little or no use in telling us what to do in the real world's uncertain conditions, when we don't know anything more than "probabilities and possibilities".
So science and technology actually progress by a very different model than scientism teaches us to expect. Each forward move is made intuitively, by something quite other than analysis -- "the origin of discoveries is beyond the reach of reason" -- and only then do we apply analysis, looking backwards, "verifying, interpreting and developing a general theoretical scheme":
[[RESERVE 6 lines]]
[[CAPTION 'The progress of science': intuitive jump, analytic 'backfill']]
If we can only look backwards, insisting on analysis as the only form of reason, it's evident that we can never make any true progress at all. We can't learn anything: all we can do is look -- though perhaps in greater detail -- at where we've already been. That's how we get stuck -- round and round in the same old logical loops, trapped by our assumptions and expectations.
The hardest assumptions to spot are those that everyone seems to agree are 'intuitively obvious'. For example, it seems 'obvious' and 'intuitive' -- in the sense of 'habitual use' -- to sort an index alphabetically: but it can fail all too easily. I know this only too well, having spent a fruitless half-hour searching for 'oedema' in the MeSH (Medical Subject Headings) computer index, only to find it indexed under 'E': the system was indexed on American spellings, not British (the American spelling of the term is 'edema'). As we saw with scientism, this kind of assumption is 'intuitive' in the wrong sense: spelling is a matter of agreement, not one of pre-ordained certainty.
In particular, we tend to assume other people know what we mean when we say something: we know what we mean, so the meaning must be obvious. But often it's not obvious at all, as one old story shows:
The professor of chemistry asked his newly-employed lab boy to watch an experiment while he went out to make a telephone call. So the boy watched it. And watched it. He did what he was told: he had no idea what the experiment was supposed to do, so he just watched it. And when the professor returned, the boy said "That was fascinating -- especially when the equipment melted. But... er... was it supposed to do that?"
Like scientific law, language has no pre-ordained meaning: much of the meaning comes from the subtleties of context, which we have to interpret. Without a proper context, it's far too easy to mis-interpret it, with unfortunate results:
Watch a kettle. But don't let it melt.
If you use computers, you'll know only too well that they take every instruction literally, just like that boy. "Do what I mean -- not what I say!" is a cry that's often heard among computer programmers. But a computer can't know what you mean: at best it can only interpret what you say. So you can't blame the computer for getting things wrong: it has no choice, it can only do exactly what it's told to do -- nothing more, nothing less. It's a logic-machine, the epitome of everything that analysis stands for: all the advantages and disadvantages of the sequence-following analytic mode of thought, neatly packaged into a single device.
Computers work on patterns of logic, following sequences of rules with mind-boggling speed and heart-wrenching stupidity. We're so conditioned to believe that analysis is thought, that it's easy to forget that all it can do is follow predetermined rules. Someone has to work out what rules are appropriate: a quite different matter, requiring a skill in merging analysis with imagination and awareness. Any computer program involves layer within layer of rules and structures, the product of the skills of many people: and in practice, just how well a program 'works' is defined in terms of how well all those people's assumptions about reality do actually match up with the requirements of the real world.
A single mistaken assumption can stop an entire project: and can be extremely difficult to find. A few years ago I was called in to help disentangle the program of a misbehaving robot truck, in a toothpaste factory somewhere in the middle of America. The truck could move round the factory without any problems, all according to its instructions: but each time it picked up a pallet from one of the loading bays, it would refuse to move on, bleating its horn in alarm. It was clearly a program fault, but it took us several days to find it. In the original program, one engineer had made the reasonable assumption that a switch meant 'on is loaded'; another programmer, working separately, had assumed that, for 'fail-safe' reasons, the same switch should be read as 'off is loaded' -- and recorded the state before loading, but only checked after the pallet had been loaded. The result was that the machine could pick up its load, but then found it had an ambiguous interpretation of its world, in effect saying "But I already had a pallet on-board". Which it didn't, of course: but on the basis of its instructions, it had no option but to come to a grinding halt. It was stuck on a stupid dilemma: and no amount of logic alone could resolve it. When we're stuck, we need a different type of awareness -- "beyond the reach of reason" -- to see our way out of the situation.
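The clash is easy to reconstruct in miniature. This sketch is illustrative only -- the names and logic are guesses, not the factory's actual code -- but it shows how two individually reasonable conventions for the same switch produce a truck that believes the impossible:

```python
def sensor_reading(pallet_present):
    # Engineer A wired the switch on the assumption 'on means loaded'.
    return pallet_present

def is_loaded(switch_state):
    # Engineer B, coding separately, read the same switch 'fail-safe':
    # off means loaded.
    return not switch_state

# Before loading: no pallet on board -- but the software reads 'loaded'.
before = is_loaded(sensor_reading(pallet_present=False))

# After loading: pallet now on board -- but the software reads 'empty'.
after = is_loaded(sensor_reading(pallet_present=True))

print(before, after)  # the truck's model of its world is exactly backwards
```

Each function is correct by its own author's assumption; only the combination is nonsense -- and no amount of logic inside the system can tell it which assumption is 'wrong'.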
The idiot robot

Robots are stupid: by their very nature they have no choice but to be so. Much has been made of 'artificial intelligence', but in practice true intelligence, one that could resolve automatically the kind of situation that truck found itself in, is still a long way off, awaiting a totally different type of computer design that can mimic both modes of the mind. A robot, and the computer that drives it, is no more than a very fast idiot: the only thing that makes it seem intelligent is the blinding speed with which it follows rules in patterns of logic -- in many cases interpreting millions of instructions every second -- and scientism's insistence that analysis is the only mode of intelligence there is.
The computer can interpret millions of instructions a second: but it takes a surprising number of instructions for a computer to do anything useful, since each instruction is amazingly trivial. And they can easily be the wrong ones -- the computer itself has no way of knowing which assumptions are valid in the real world, and which ones are not. All it can do is follow instructions, however stupid. For an analogy of the problem, look at a game called 'Robot Teamaker' that's sometimes used in computer education groups:
Ask a robot to make a cup of tea. Or would it be quicker to do it yourself?
It's probably easier to teach a robot to juggle than it is to get it to make a cup of tea on its own: but when we're using people to simulate the process, a cup of tea is easier -- most people can make tea more easily than they can juggle! The idea of the game is that one of the group pretends to be a robot; the others have to tell it what to do to make a cup of tea. (Interestingly, it takes a great deal of a specific type of intelligence to 'act stupid': experience has shown that the usual best candidate for 'robot' would be a very bright ten- to twelve-year-old boy.) At the beginning of the game, the robot is sitting down in a chair, waiting for instructions. It interprets all instructions literally: if an instruction is incomplete or ambiguous, it either stops (saying "Bzzz", to indicate 'malfunction') or interprets it as best it can.
It's a fascinating, if often frustrating, way to discover just how much of what we would assume to be 'obvious' -- scientism style -- turns out to be nothing of the kind. A typical session starts by informing the robot of a few basic functions that it has -- the robot does know what its eyes and arms and legs are for -- and thereafter might go something like this:
Command: Get up.
Command: What's wrong?
Robot: I don't know what 'up' is.
Command: (demonstrates up, and also down, left, right, and the like) Now get up.
Robot: (moves right arm around in the air) Bzzz.
Command: What's wrong now?
Robot: 'Get' means 'take', doesn't it? I don't know how to get an 'up'.
Command: All right, stand up.
Robot: (stands up)
Command: Go to the kitchen.
Robot: Bzzz. Which way do I go to move to the kitchen? You haven't told me.
Command: Go one step forward.
Robot: Bzzz. Which leg do I make the step with?
Command: Move your left leg forward.
Robot: (appears to do nothing)
Command: That's not fair, I told you to move!
Robot: I did move! You didn't tell me how much to move my leg, so I moved it forward by one millimetre.
Command: All right -- move your left leg forward by sixty centimetres.
And so it goes on, slowly bringing out layer after layer of hidden assumptions and ambiguities in the way we phrase instructions.
Notice too that our robot could do very little on its own: the people giving the commands had to do all its actual thinking for it. It could only work on its own, unguided, when it was working within a set of consistent instructions -- given that we'd told it that a 'step' was a move of sixty centimetres, we could tell it to move a specific number of steps, using that as part of a 'sub-routine' to travel a specific distance. But computers are idiotic -- stupid, not 'fool-ish': a fool has no expectations at all, but if a robot finds something it doesn't expect, it has no idea what to do with it. All it could do when it met up with something it didn't understand was either to make some kind of guess based on past experience (about the best we can do with 'artificial intelligence' at the moment) or else, like the robot truck earlier, come to a grinding halt.
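That 'sub-routine' can be put in code form -- a sketch, with an invented helper name -- to show both its power and its limits:

```python
STEP_CM = 60  # the definition the robot was given: one step is sixty centimetres

def steps_for_distance(distance_cm):
    """How many whole steps cover a given distance, and what remainder is left.

    The remainder is the interesting part: the rules themselves give the
    robot no way to decide what to do about it."""
    return divmod(distance_cm, STEP_CM)

print(steps_for_distance(200))  # three whole steps, with 20 cm unaccounted for
```

Within its definitions, the robot is perfectly competent; the leftover twenty centimetres is a question its instructions cannot answer for it.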
The robot's stupidity does have its advantages. You know it will do exactly what you tell it to do: no more, no less. Give the robot a pair of mechanical arms that can be moved precise amounts at precise speeds and accelerations, under computer control; then give it a suitable set of equations to calculate the appropriate movements for those arms, and you can teach a robot to juggle. This will only work under 'controlled' conditions -- chaos theory makes that quite clear -- but it will probably be quicker and easier than teaching yourself to juggle. Which is rather strange.
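The 'suitable set of equations' need be nothing more exotic than plain projectile motion under gravity. The sketch below is an assumption for illustration -- it ignores air resistance and everything a real juggling robot would have to handle -- but it shows how mechanically calculable the timing is:

```python
# Projectile motion for a juggling throw: how fast to launch the ball
# to reach a given peak height, and how long it stays in the air.
import math

g = 9.81  # gravitational acceleration, m/s^2

def throw_parameters(height_m):
    """Launch velocity for a given peak height, and total flight time."""
    v0 = math.sqrt(2 * g * height_m)  # from v^2 = 2gh at the peak
    t_flight = 2 * v0 / g             # time up plus time back down
    return v0, t_flight

v0, t = throw_parameters(0.5)  # a half-metre throw
print(f"launch at {v0:.2f} m/s, ball in the air for {t:.2f} s")
```

The robot simply evaluates these formulas and moves its arms on schedule; it is precisely this step-by-step calculation that, as the next paragraph shows, wrecks the skill when we try to perform it ourselves.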
It gets even stranger. The more we analyse the movements of the robot juggler's arms, the better we can refine our control. With a great deal of refinement, and a few revisions of the program, we might even be able to get it to keep four or five balls in the air at the same time. But the more we try to analyse what we do in juggling, the worse it gets. It does help to know what the ideal trajectory of the ball should be: but if I start watching every movement of my own hands in order to try to match that ideal throw, I'm lucky if I remember even to let go of the ball, let alone keep three of them in the air. For us, analysis slows everything down -- yet juggling is utterly dependent on speed and timing.
The same is true of the inner clock -- especially the alarm clock variation. I have no idea what allows me to wake up at a specific pre-set time: but if I try to analyse it, I can't even get to sleep!
There's an important difference, though, between the robot and ourselves. The robot is a mechanical creation, a product of analytical thinking and a perfect object for it: never more nor less than the sum of its parts. We're not. We change, constantly; we're never quite predictable, never quite analysable, because the sum of the parts never adds up to a simple total. We cannot, for example, take individual muscle fibres out of an arm, add a few extras and put them back together, in a robot-like way, without destroying the workings of the arm itself. And unlike the robot, which we can treat as something external to us -- an object -- our arms are part of us. What makes it so hard to learn to juggle, or to make sense of our inner clock, is that we can't just reduce the processes to a set of simple objective equations: we're part of the overall equation.
To teach ourselves a new skill, we can't simply instruct in the way we would instruct an object. We're both object and subject of the skill: to put it into practice, we have to understand ourselves, our own inner workings. To do that, we have to learn how to teach within -- the literal meaning of 'in-tuition'. And that means, for a while at least, we have to go the other way from the outward-looking mode of analysis -- to look inward instead, with ourselves as subject, at a different way of understanding skills.
Mathematician John Taylor (of 'black holes' fame), in a discussion of dowsing in an article in New Scientist. [back]
James Gleick, Chaos, p.303. [back]
Quoted in John Boslough, Stephen Hawking's Universe, Cambridge University Press, 1980. [back]
From James Gleick's summary in Chaos, pp.303 onward. [back]
Douglas Hofstadter, Metamagical Themas. [back]
Hence the so-called Butterfly Effect, in which it can be shown mathematically that the beating of a butterfly's wings in Beijing could trigger a tornado in Texas: see Chaos, Chapter 1. [back]
See the discussion on the Mandelbrot set and other 'images of chaos' -- always self-similar, but never repeating -- in Chaos, pp.213-240. [back]
Jim Williams: guest editorial in EDN (Electronic Design News), March 3 1988, pp.47-50. [back]
Quoted in The Art of Scientific Investigation, p.60. [back]
Beveridge, ibid., p.95. [back]