Inner Visions



Further to last night's post. We are not born with the ability to 'see' as we imagine it from the standpoint of a fully grown human, and despite the exquisite construction of our two organs of sight, they are optically fairly rudimentary: a single [albeit focusable] optically simple lens, an attendant variable aperture to compensate for changes in light level, and a hemispherical 'focusing screen' - the retina - onto which the light entering the eye is focused. A sort of highly developed biological camera obscura, if you will: a box or room [spherical, in this case] with a small aperture, equipped with a lens with which to form an image of the outside world on the screen behind.

So far, so simple, even given the millions of years of evolution that led from a prototype cluster of light-sensitive cells in a shallow depression in the skin of creatures long since extinct to the eyes we have now. But the fact of the matter is that we have to learn to see. We learn over time, with the aid of our emerging ability to move and to engage with the world through the rest of our senses, creating the reality of our surroundings and building up a stored picture of the world as we grow. Nor do we see either completely or in true real time: the physical neurological signal paths have inbuilt delays inherent in the electrochemical nature of their modus operandi.

For instance, we react to sounds faster than to sights, despite the orders-of-magnitude difference between the speeds of light and sound, because the neural pathways that process auditory data are shorter than the visual ones: on the order of a 30 millisecond difference in transmission time. [This obviously only holds when the stimulating phenomena are local to us as individuals experiencing them.] In fact all of our different senses transmit data to various areas of the brain at differing rates, and yet - neurological damage notwithstanding - we experience the outside world as a continuous, real-time whole: our brains stitch together all the [buffered] sensory input and remove each of the delays so that the sum total of our experience slots together meaningfully. Which of course, to paraphrase Jethro Tull [the band], means we are all actually living in the past.
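The buffering-and-realignment idea above can be sketched in a few lines of code. This is a toy model only - the latency figures and names are illustrative assumptions, not physiology - but it shows the principle: subtract each channel's known lag from its arrival time, and events that reached the brain at different moments fall back into their true order.

```python
# Toy sketch of sensory delay compensation [all numbers hypothetical].
# Each sense delivers events with its own transmission lag; a buffer
# re-aligns them onto the timeline of the original stimuli.

LATENCY_MS = {"sound": 140, "sight": 170, "touch": 155}  # illustrative only

def align(events):
    """events: list of (sense, arrival_ms) tuples as they reach the brain.
    Returns the senses re-ordered by their estimated stimulus time."""
    corrected = [(arrival - LATENCY_MS[sense], sense) for sense, arrival in events]
    return [sense for _, sense in sorted(corrected)]

# A hand-clap: the bang arrives at 150 ms, the image of the hands
# meeting at 175 ms. After subtracting each channel's lag, both map
# back to within a few milliseconds - one event, felt as simultaneous.
print(align([("sight", 175), ("sound", 150)]))
```

The point of the exercise is that the ordering only comes out right because the lags are known in advance and compensated for, which is exactly the kind of bookkeeping the paragraph above attributes to the brain.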

In this, the whole system of human perception can be likened to the computing platforms we have ourselves devised and developed, in that it works around the limitations imposed by the laws of physics through timing: buffering the data to compensate for the inherent lag of each part of the system, then coordinating it into a cogent whole that can practically be used to navigate the world around us. The neural pathway from the eyes to the visual cortex passes through the thalamus, which acts as a kind of network switch and comparator. It's interesting to note that the enormous number of connections from the thalamus to the visual cortex is outnumbered, by an order of magnitude, by the return connections from the visual cortex back to the thalamus! The process of 'seeing' is thus less like a closed-circuit video camera and more akin to a very quick, near real-time image capture, editing and archiving suite, all operating continuously in one glorious feedback loop, with the thalamus checksumming the sensory input against the visual cortex's expectations of that input, based on a lifetime's stored library of images.
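The checksum metaphor can itself be sketched in code. Again, a toy model under loud assumptions - the function names and numbers are invented for illustration, and real thalamocortical circuitry is nothing this tidy - but it captures the shape of the loop: the 'thalamus' compares what the senses report against what the 'cortex' predicts, and only the mismatch needs forwarding.

```python
def thalamus_check(sensed, predicted, tolerance=0.1):
    """Compare the sensed signal against the cortex's prediction.
    Where the two agree within tolerance, the prediction stands;
    only the residual - the 'checksum failure' - is passed upstream
    for re-interpretation."""
    residual = {k: sensed[k] - predicted.get(k, 0.0) for k in sensed}
    surprise = {k: v for k, v in residual.items() if abs(v) > tolerance}
    return surprise, residual

# The cortex expects a whole face; the incoming data mostly agrees,
# but the jaw region is corrupted [values are arbitrary confidences].
predicted = {"left_eye": 1.0, "right_eye": 1.0, "jaw": 1.0}
sensed    = {"left_eye": 1.0, "right_eye": 0.95, "jaw": 0.2}
surprise, _ = thalamus_check(sensed, predicted)
print(surprise)  # only the jaw mismatch is flagged for forwarding
```

Seen through this sketch, most of 'seeing' is the prediction being rubber-stamped; the heavy lifting happens only where the checksum fails.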

Two examples. I suffer from occasional migraines, though I'm lucky not to experience the skull-crusher headaches most sufferers do: at worst I'll get a mild headache. What I do experience is the visual disturbance that invariably accompanies a migraine. In my case, in addition to the bizarre, shiny, spinning 'halo', I get another commonly reported effect: the inability to see faces correctly. They appear to me to be missing the lower left side of the head [their left, to my right, so to speak], producing a disturbing partial-skull effect. It affects only faces, whether in my immediate environment, on screen or in picture form. It's thankfully very transitory, but I can only liken it to a scrambling of data: the checksumming thalamus has missed a beat in timing somewhere and is handing my visual cortex a false picture, built from an incomplete dataset that hasn't been properly reconciled between the input buffer and the interpretation centre. Like many forms of tinnitus in the hearing realm [which I also have], this happens in the brain itself, not in the organs of sensory input.

The second example is that of someone held in complete sensory isolation. The many reported and studied cases of prisoners held in true solitary confinement testify to a diametrically opposed phenomenon. I was reading today of the case of one Robert Luke, an armed robber imprisoned in Alcatraz [the now former penal facility] in San Francisco Bay. Like so many others who infringed the rules of their incarceration there, he was sent to the Hole: as complete a sensory isolation chamber as could be imagined. No light, no sound other than that generated by oneself: cut off from the normal almost absolutely. Luke served twenty-nine days in the Hole, and like many others before him, his brain simply started creating visual experiences for him from the stored visual data in his memory. Despite having zero visual input, Luke's brain continued to see. Like the internet's convoluted data transmission regime, the sensory whole we think we inhabit is complex indeed, about as far from a simple point-to-point system as one can envisage. We do indeed live in Plato's Cave, and the seeing/knowing axis is far, far from binary or one-directional.
