May 26, 2010

-page 6-

When the Mind becomes trained and learns to concentrate and focalize on a single object or subject at a time, that state of Mind will bring the individual Universal Knowledge and Wisdom. This is how genius is created: by applying the mental art of concentration and focalizing on any worthwhile subject. The famous theories and hypotheses come into being, such as Einstein's theory of relativity, man's ability to fly through the air, space travel, etc., by applying the mental art of concentration. It is an unbending aspect of the human mind that it continues to expand and gather ever more information about all known and unknown subjects and objects, constantly causing change and advancement in Spirituality and technology. Unbiased Spiritual Wisdom enables the proper use of technology and is the catalyst for its increasingly rapid advancement. It may be difficult, however, to conceive that Spirituality and technology go hand in hand; nonetheless, the lack of Spiritual Wisdom will dampen the infinite possibilities because of a limited, diminutive belief system.


Technology ends where the mortal barrier begins; then it becomes a necessity to look into the realm of Spirit in order to continue human evolution. Without the continuous advancement of evolution, this civilization will dissolve and perish off the face of the earth, like the many previous civilizations before us. The mortal barrier begins when science and technology reach the limits of the atomic and sub-atomic particles, and a quantum leap into the realm of the Waveform (Spirit) becomes a necessity in order to continue upward progress.

When a person learns to find a quiet moment in their life to become mentally focused and centred on their profession, job, Spirituality, whatever the endeavour, they will find the answers and renewed energy to solve problems and create new knowledge and ideas.

When a person (no matter who) learns to focus and concentrate on Spirit, their Mind will gather from their Cosmic Consciousness the deepest secrets of the Universe: how it is composed, by what means, and to what end. But the enigma of the deepest inner secret Nature of The All, or God, will always remain unknowable to us by reason of its Infinite stature, to which no human qualities can, or should, ever be ascribed.

There is more on the subject of the powerful "I" consciousness: the "I Am," the "Higher Self," which is each one of us.

In what could turn out to be one of the most important discoveries in cognitive studies of our decade, it has been found that there are five million magnetite crystals per gram in the human brain. Interestingly, the meninges (the membranes that envelop the brain) have twenty times that number. These ‘biomagnetite' crystals demonstrate two interesting features. The first is that their shapes do not occur in nature, suggesting that they were formed in the tissue rather than being absorbed from outside. The other is that these crystals appear to be oriented so as to maximize their magnetic moment, which tends to give groups of these crystals the capacity to act as a system. The brain has also been found to emit very low intensity magnetic fields, a phenomenon that forms the basis of a whole diagnostic field, magnetoencephalography.

Unfortunately for the present discussion, there is at present no way to ‘read' any signals that might be carried by the brain's magnetic emissions. We expect that sufficiently subtle means of detecting such signals will eventually appear, as there is compelling evidence that they do exist and constitute a means of communication between various parts of the brain. This system, we speculate, is what selects which neural areas to recruit, so that States (of consciousness) can elicit the appropriate phenomenological, behavioural, and affective responses.

While there have been many studies that have examined the effects of magnetic fields on human consciousness, none have yielded findings more germane to understanding the role of neuromagnetic signalling than the work of the Laurentian University Behavioural Neuroscience group. They have pursued a course of experiments that rely on stimulating the brain, especially the temporal lobes, with complex low intensity magnetic signals. It turns out that different signals produce different phenomena.

One example of such phenomena is vestibular sensation, in which one's normal sense of balance is replaced by illusions of motion similar to the feelings of levitation reported in spiritual literature, as well as the sensation of vertigo. Transient ‘visions', whose content includes motifs that also appear in near-death experiences and alien abduction scenarios, have also appeared. Positive affectual paresthesias (electric-like buzzes in the body) have occurred. Another experience that has been elicited neuromagnetically is bursts of emotion, most commonly of fear and joy. Although the content of these experiences can be quite striking, the way they present themselves is much more ordinary. It approximates the ‘twilight state' between waking and sleep called hypnagogia. Hypnagogia can produce brief, fleeting visions; feelings that the bed is moving, rocking, floating or sinking; electric-buzz-like somatic sensations; and hearing an inner voice call one's name. The range of experiences it can produce is quite broad. If all signals produced the same phenomena, it would be difficult to conclude that these magnetic signals approximate the postulated endogenous neuromagnetic signals that create alterations in State. In fact, they produce a wide variety of phenomena. One such signal makes some women apprehensive, but another doesn't. One signal creates such strong vestibular sensations that one can't stand up. Another doesn't.

The temporal lobes are the parts of the brain that mediate states of consciousness. EEG readouts from the temporal lobes are markedly different when a person is asleep, having a hallucinogenic seizure, or on LSD. Seizure disorders confined to the temporal lobes (complex partial seizures) have been characterized as impairments of consciousness. There was also a study in which monkeys were given LSD after having various parts of their brains removed. The monkeys continued to ‘trip' no matter what part or parts of their brains were missing, until both temporal lobes were taken out. In these cases, the substance did not seem to affect the monkeys at all. The conclusion seems unavoidable: in addition to all their other functions (aspects of memory, language, music, etc.), the temporal lobes mediate states of consciousness.

If exposing the temporal lobes to magnetic signals can induce alterations in States, then it seems reasonable to suppose that States find part of their neural basis in our postulated neuromagnetic signals, arising out of the temporal lobes.

Hallucinations are known to be the phenomenological correlates of altered States. Alterations in state of consciousness follow input, and phenomena, whether hallucinatory or not, follow in response. We can offer two reasons for drawing this conclusion.

The first is one of the results obtained by a study of hallucinations caused by electrical stimulation deep in the brain. In this study, the content of the hallucinations was found to be related to the circumstances in which they occurred, so that the same stimulations could produce different hallucinations. The conclusion was that the stimulation induced altered states, and the states facilitated the hallucinations.

The second has to do with the relative speeds of the operant neural processes.

Neurochemical response times are limited by the time required for transmission across the synaptic gap, 0.5 to 2 msec.

By comparison, the propagation of action potentials is much faster. For example, an action potential can travel a full centimetre (many orders of magnitude farther than the width of a synaptic gap) in about 1.3 msec. The brain's electrical responses, therefore, happen orders of magnitude more quickly than do its chemical ones.

Magnetic signals are propagated with greater speeds than those of action potentials moving through neurons. Contemporary physics requires that magnetic signals be propagated at a significant fraction of the velocity of light, so that the entire brain could be exposed to a neuromagnetic signal in vanishingly small amounts of time.
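To make these three time-scales concrete, here is a back-of-envelope sketch. The synaptic delay and the centimetre-per-1.3-msec figure come from the text above; the brain width and the "fraction of light speed" (taken here as 1% of c) are illustrative assumptions introduced only for this example.

```python
# Back-of-envelope comparison of the three signalling speeds discussed
# above. Values marked "assumed" are illustrative, not from the text.

SYNAPTIC_DELAY_MS = (0.5, 2.0)    # per synaptic crossing (from the text)
AXON_MS_PER_CM = 1.3              # action potential, per cm (from the text)
LIGHT_SPEED_CM_PER_MS = 2.998e7   # ~3e10 cm/s expressed in cm per msec
BRAIN_WIDTH_CM = 15.0             # assumed rough width of a human brain

# Time for an action potential to cross the brain along a single fibre:
axon_time_ms = BRAIN_WIDTH_CM * AXON_MS_PER_CM

# Time for a magnetic signal at an assumed 1% of light speed:
magnetic_time_ms = BRAIN_WIDTH_CM / (0.01 * LIGHT_SPEED_CM_PER_MS)

print(f"one synaptic crossing:         {SYNAPTIC_DELAY_MS[0]}-{SYNAPTIC_DELAY_MS[1]} msec")
print(f"action potential across brain: ~{axon_time_ms:.1f} msec")
print(f"magnetic signal across brain:  ~{magnetic_time_ms:.1e} msec")
```

Even with the conservative 1% assumption, the magnetic transit time comes out vanishingly small next to the axonal one, which is the contrast the argument turns on.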

It seems possible that neuromagnetic signals arise from structures that mediate our various sensory and cognitive modalities. These signals then recruit those functions (primarily in the limbic system) that carry out the changes in state. These temporal lobe signals, we speculate, then initiate signals to structures that mediate modalities that are enhanced or suppressed as the state changes.

The problem of defining the phrase ‘state of consciousness' has plagued the field of cognitive studies for some time. Without going into the history of studies in the area, we would like to outline a hypothesis concerning states of consciousness in which the management of states gives rise to the phenomenon of consciousness.

There are theories that suggest that cognitive modalities (such as memory, affect, ideation and attention) may be seen as analogs to sensory modalities.

We hypothesize that the entire set of modalities, cognitive and sensory, may be heuristically compared with a sound mixing board. In this metaphor, all the various modalities are represented as vertical rheostats, with enhanced functioning increasing towards the top and suppressed functioning increasing toward the bottom. Further, the act of becoming conscious of phenomena in any given modality involves the adjustment of that modality's ‘rheostat'.
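The mixing-board metaphor can be sketched as a tiny data structure. Everything here is illustrative: the modality names and the 0.0-1.0 slider range are assumptions introduced for the example, not taken from the text.

```python
# Toy sketch of the 'mixing board' metaphor: each modality is a vertical
# rheostat, 0.0 = fully suppressed, 1.0 = fully enhanced. The modality
# names and the numeric range are assumptions made for illustration.

MODALITIES = ["vision", "hearing", "smell", "memory",
              "affect", "ideation", "attention", "introspection"]

def neutral_state():
    """All rheostats at the midpoint: a baseline 'normal' state."""
    return {m: 0.5 for m in MODALITIES}

def adjust(state, modality, level):
    """Becoming conscious of phenomena in a modality adjusts its rheostat;
    levels are clamped to the board's range."""
    new = dict(state)
    new[modality] = max(0.0, min(1.0, level))
    return new

# A hypothetical 'predator sighted' setting: vision and attention pushed
# up, smell and introspection pushed down.
state = neutral_state()
for modality, level in [("vision", 0.9), ("attention", 1.0),
                        ("smell", 0.1), ("introspection", 0.0)]:
    state = adjust(state, modality, level)
```

On this picture a "state of consciousness" is just the vector of slider positions, which is what the sections that follow treat as adjustable settings.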

Sensory input from any modality can alter one's state. The sight of a sexy person, the smell of fire, the unexpected sensation of movement against one's skin (there's a bug on me!), a sudden bitter taste experienced while eating ice cream, or the sound of one's child screaming in pain; all of these phenomena can induce alterations in State. Although the phrase ‘altered states' has come to be associated with dramatic, otherworldly experiences, alterations in state, as we will be using the phrase, refer primarily to those alterations that take us from one normal state to another.

Alterations in state can create changes within the various sensory and cognitive modalities. An increase in arousal following the sight of a predator will typically suppress the sense of smell (very few are able to stop and ‘smell the roses' while a jaguar is chasing them), suppress introspection (nobody wants to know ‘who I really am?' while an anaconda prepares to wrap itself around them), suppress sexual arousal, and alter vision so that the centre of the visual field is better attended than one's peripheral vision, allowing one to see the predator's movements better. The sight of a predator will also introduce a host of other changes, all of which reflect the State.

In the Hindu epic, the Mahabharata, there is a dialogue between the legendary warrior, Arjuna, and his archery teacher. Arjuna was told by his teacher to train his bow on a straw bird used as a target. Arjuna was asked to describe the bird. He answered ‘I can't'. ‘Why not?', asked his teacher. ‘I can only see its eye', he answered. ‘Release your arrow', commanded the teacher. Arjuna did, and hit the target in the eye. ‘I'll make you the finest archer in the world', said his teacher.

In this story, attention to peripheral vision had ceased so completely that only the very centre of his visual field received any. Our model of states would be constrained to interpret Arjuna's (mythical) feat as a behaviour specific to a state. The unique combination of sensory enhancement, heightened attention, and sufficient suppression of emotion, ideation, and introspection needed to support such an act suggests specific settings for our metaphorical rheostats.

Changes in state produce changes in sensory and cognitive modalities, and these, in turn, trigger changes in state. We can reasonably conclude that there is a feedback mechanism whereby each modality is connected to the others.

States also create tendencies to behave in specific ways in specific circumstances, maximizing the adaptivity of behaviour in those circumstances; behaviour that tends to meet our needs and respond to threats to our ability to meet those needs.

Each circumstance adjusts each modality's setting, tending to maximize that modality's contribution to adaptive behaviour in that circumstance. The mechanism may function by using both learned and inherited default settings for each circumstance and then repeating those settings in similar circumstances later on. Sadly, this often makes states maladaptive. Habitual alterations in State in response to threats from an abusive parent, for example, can make for self-defeating responses to stress in other circumstances, where these same responses are no longer advantageous.

Because different States are going to be dominated by specific combinations of modalities, it makes sense that a possible strategy for aligning the rheostats (making alterations in state) is to move them in tandem, so that after a person associates the sound of a scream with the concept of a threat, that sound, with its unique auditory signature, will cause all the affected modalities (most likely most of them, in most cases) to take the positions they had at the time the association was made.

When we say changing states, we are referring to much more than the dramatic states created by LSD, isolation tanks, REM sleep, etc. We are also including normal states of consciousness, which we can imagine as kindled ‘default settings' of our various modalities. When any one modality returns to one of its default settings, it will, we conjecture, tend to entrain all the other modalities to the settings they habitually take in that state.
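The default-settings conjecture can be sketched the same way. The state names and numbers below are hypothetical placeholders, not values from the text; the point is only that restoring one modality's habitual setting pulls the whole board to that state's remembered preset.

```python
# Toy sketch of entrainment: each named state is a remembered preset of
# rheostat settings. When one modality returns to its default for a state,
# it entrains the remaining modalities to that state's habitual settings.
# State names and values are hypothetical placeholders.

PRESETS = {
    "waking": {"arousal": 0.5, "attention": 0.5, "introspection": 0.5},
    "fear":   {"arousal": 1.0, "attention": 0.9, "introspection": 0.1},
}

def entrain(settings, state_name, modality):
    """Snap `modality` to its default for `state_name`, then entrain every
    other modality to the settings it habitually takes in that state."""
    preset = PRESETS[state_name]
    assert modality in preset, "modality has no default in this state"
    return dict(preset)

# One modality (arousal) snapping to its 'fear' default drags the rest:
settings = {"arousal": 0.4, "attention": 0.6, "introspection": 0.5}
settings = entrain(settings, "fear", "arousal")
```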

To accomplish this, each modality must be connected to every other one. A sight, a smell, a sound, or a tactile feeling can all inspire fear. Fear can motivate ideation. Ideation can inspire arousal. Changes in affect can initiate alterations in introspection. Introspection alters affect. State-specific settings of individual modalities could initiate settings for other modalities.

Our main hypothesis here is that all these intermodal connections, operating as a single system, have a single phenomenological correlate: the phenomenon of subjective awareness.

The structures associated with that modality then broadcast neuromagnetic signals to the temporal lobes, which in turn produce signals that recruit various structures throughout the brain; specifically, those structures whose associated modalities' values must be changed in order to accomplish the appropriate alteration in state. In the second section, we raised the possibility that states are settings for the variable aspects of cognitive and sensory modalities. We also offered the suggestion that consciousness is the phenomenological correlate of the feedback between the management of states, on the one hand, and the various cognitive and sensory modalities, on the other. If all of these conclusions were to stand up to testing, we could conclude that the content of the brain's hypothesized endogenous magnetic signals might consist of a set of values for adjusting each sensory and cognitive rheostat. We might also conclude that neuromagnetic signalling is the context in which consciousness occurs.

The specific mechanism whereby subjectivity is generated is out of the reach of this work. Nevertheless, the fact that multiple modalities are experienced simultaneously, together with our model's implication that they are ‘reset' all at once with each alteration in state, suggests that our postulated neuromagnetic signals may come in pairs, with the two signals running slightly out of phase with one another. In this way, neuromagnetic signals, like the two laser beams used to produce a hologram, might be able to store information in a similar way, as has already been explored by Karl Pribram. The speed at which neuromagnetic signals propagate, together with their capacity to recruit or alter multiple modalities, suggests that the underlying mechanism has been selected to choose instantly which specific structures to recruit in order to facilitate the behaviours acted out in the State, and to do so quickly.

In this way, the onset time for the initiation of States is kept to a minimum, and with it, the times needed to make the initial, cognitive response to stimuli. When it comes to response to threats, or sighting prey, the evolutionary advantages are obvious.

Higher-order theories of consciousness try to explain the distinctive properties of consciousness in terms of some relation obtaining between the conscious state in question and a higher-order representation of some sort (either a higher-order experience of that state, or a higher-order thought or belief about it). The most challenging properties to explain are those involved in phenomenal consciousness - the sort of state that has a subjective dimension, which has ‘feel’, or which it is like something to undergo.

One of the advances made in recent years has been in distinguishing between different questions concerning consciousness. Not everyone agrees on quite which distinctions need to be drawn. But all agree that we should distinguish creature consciousness from mental-state consciousness. It is one thing to say of an individual or organism that it is conscious (either in general or of something in particular). It is quite another thing to say of one of the mental states of a creature that it is conscious.

It is also agreed that within creature-consciousness itself we should distinguish between intransitive and transitive variants. To say of an organism that it is conscious simpliciter (intransitive) is to say just that it is awake, as opposed to asleep or comatose. There do not appear to be any deep philosophical difficulties lurking here (or at least, they are not difficulties specific to the topic of consciousness, as opposed to mentality in general). But to say of an organism that it is conscious of such-and-such (transitive) is normally to say at least that it is perceiving such-and-such, or aware of such-and-such. So to say of the mouse that it is conscious of the cat outside its hole, in explaining why it does not come out, is, perhaps, to say that it perceives the cat's presence. To provide an account of transitive creature-consciousness would thus be to attempt a theory of perception.

There is a choice to be made concerning transitive creature-consciousness, failure to notice which may be a potential source of confusion. For we have to decide whether the perceptual state in virtue of which an organism may be said to be transitively-conscious of something must itself be a conscious one (state-conscious). If we say ‘Yes' then we will need to know more about the mouse than merely that it perceives the cat if we are to be assured that it is conscious of the cat - we will need to establish that its percept of the cat is itself conscious. If we say ‘No', on the other hand, then the mouse's perception of the cat will be sufficient for the mouse to count as conscious of the cat, but we may have to say that although it is conscious of the cat, the mental state in virtue of which it is so conscious is not itself a conscious one! It may be best to by-pass any danger of confusion here by avoiding the language of transitive-creature-consciousness altogether. Nothing of importance would be lost to us by doing this. We can say simply that organism O observes or perceives x. We can then ask explicitly, if we wish, whether its percept is or is not conscious.

Turning now to the notion of mental-state consciousness, the major distinction here is between phenomenal consciousness, on the one hand - which is a property of states that it is like something to be in, which have a distinctive ‘feel’ (Nagel, 1974) - and various functionally-definable forms of access consciousness, on the other. Most theorists believe that there are mental states - such as occurrent thoughts or judgments - which are access-conscious (in whatever is the correct functionally-definable sense), but which are not phenomenally conscious. In contrast, there is considerable dispute as to whether mental states can be phenomenally-conscious without also being conscious in the functionally-definable sense - and even more dispute about whether phenomenal consciousness can be reductively explained in functional and/or representational terms.

It seems plain that there is nothing deeply problematic about functionally-definable notions of mental-state consciousness, from a naturalistic perspective. For mental functions and mental representations are the staple fare of naturalistic accounts of the mind. But this leaves plenty of room for dispute about the form that the correct functional account should take. Some claim that for a state to be conscious in the relevant sense is for it to be poised to have an impact on the organism's decision-making processes, perhaps also with the additional requirement that those processes should be distinctively rational ones. Others think that the relevant requirement for access-consciousness is that the state should be suitably related to higher-order representations - experiences and/or beliefs - of that very state.

What is often thought to be naturalistically problematic, in contrast, is phenomenal consciousness. And what is really and deeply controversial is whether phenomenal consciousness can be explained in terms of some or other functionally-definable notion. Cognitive (or representational) theories maintain that it can. Higher-order cognitive theories maintain that phenomenal consciousness can be reductively explained in terms of representations (either experiences or beliefs) which are higher-order. Such theories concern us here.

Higher-order theories, like cognitive/representational theories in general, assume that the right level at which to seek an explanation of phenomenal consciousness is a cognitive one, providing an explanation in terms of some combination of causal role and intentional content. All such theories claim that phenomenal consciousness consists in a certain kind of intentional or representational content (analog or ‘fine-grained’ in comparison with any concepts we may possess) figuring in a certain distinctive position in the causal architecture of the mind. They must therefore maintain that these latter sorts of mental property do not already implicate or presuppose phenomenal consciousness. In fact, all cognitive accounts are united in rejecting the thesis that the very properties of mind or mentality already presuppose phenomenal consciousness, as proposed by Searle (1992, 1997) for example.

The major divide among representational theories of phenomenal consciousness in general is between accounts provided in purely first-order terms and those that implicate higher-order representations of one sort or another (see below). Higher-order theorists will allow that first-order accounts - of the sort defended by Dretske (1995) and Tye (1995), for example - can already make some progress with the problem of consciousness. According to first-order views, phenomenal consciousness consists in analog or fine-grained contents that are available to the first-order processes that guide thought and action. So a phenomenally-conscious percept of red, for example, consists in a state with the analog content red that is poised to feed into thoughts about red, or into actions that are in one way or another guided by redness. The point to note in favour of such an account is that it can explain the natural temptation to think that phenomenal consciousness is in some sense ineffable, or indescribable. This will be because such states have fine-grained contents that can slip through the mesh of any conceptual net. We can always distinguish many more shades of red than we have concepts for, or could describe in language (other than indexically, e.g., ‘that shade').

The main motivation behind higher-order theories of consciousness, in contrast, derives from the belief that all (or at least most) mental-state types admit of both conscious and non-conscious varieties. Almost everyone now accepts, for example (post-Freud), that beliefs and desires can be activated non-consciously. (Think, here, of the way in which problems can apparently become resolved during sleep, or while one's attention is directed to other tasks. Notice that appeal to non-conscious intentional states is now routine in cognitive science.) And then if we ask what makes the difference between a conscious and a non-conscious mental state, one natural answer is that conscious states are states we are aware of. And if awareness is thought to be a form of creature-consciousness, then this will translate into the view that conscious states are states of which the subject is aware, or states of which the subject is creature-conscious. That is to say, these are states that are the objects of some sort of higher-order representation - whether a higher-order perception or experience, or a higher-order belief or thought.

One crucial question, then, is whether perceptual states as well as beliefs admit of both conscious and non-conscious varieties. Can there be, for example, such a thing as a non-conscious visual perceptual state? Higher-order theorists are united in thinking that there can. Armstrong (1968) uses the example of absent-minded driving to make the point. Most of us at some time have had the rather unnerving experience of ‘coming to' after having been driving on ‘automatic pilot' while our attention was directed elsewhere - perhaps having been day-dreaming or engaged in intense conversation with a passenger. We were apparently not consciously aware of any of the route we had recently taken, nor of any of the obstacles we avoided on the way. Yet we must surely have been seeing, or we would have crashed the car. Others have used the example of blind-sight. This is a condition in which subjects have had a portion of their primary visual cortex destroyed, and apparently become blind in a region of their visual field as a result. But it has now been known for some time that if subjects are asked to guess at the properties of their ‘blind' field (e.g., whether it contains a horizontal or vertical grating, or whether it contains an ‘X' or an ‘O'), they prove remarkably accurate. Subjects can also reach out and grasp objects in their ‘blind' field with something like 80% or more of normal accuracy, and can catch a ball thrown from their ‘blind' side, all without conscious awareness.

More recently, a powerful case for the existence of non-conscious visual experience has been generated by the two-systems theory of vision proposed and defended by Milner and Goodale (1995). They review a wide variety of kinds of neurological and neuro-psychological evidence for the substantial independence of two distinct visual systems, instantiated in the temporal and parietal lobes respectively. They conclude that the parietal lobes provide a set of specialized semi-independent modules for the on-line visual control of action, while the temporal lobes are primarily concerned with subsequent off-line functioning, such as visual learning and object recognition. And only the experiences generated by the temporal-lobe system are phenomenally conscious, on their account.

(Note that this is not the familiar distinction between what and where visual systems, but is rather a successor to it. For the temporal-lobe system is supposed to have access both to property information and to spatial information. Instead, it is a distinction between a combined what-where system located in the temporal lobes and a how-to or action-guiding system located in the parietal lobes.)

To get the flavour of Milner and Goodale's hypothesis, consider just one strand from the wealth of evidence they provide. This is a neurological syndrome called visual form agnosia, which results from damage localized to both temporal lobes, leaving primary visual cortex and the parietal lobes intact. (Visual form agnosia is normally caused by carbon monoxide poisoning, for reasons that are little understood.) Such patients cannot recognize objects or shapes, and may be capable of little conscious visual experience; still, their sensorimotor abilities remain largely intact.

One particular patient has now been examined in considerable detail. While D.F. is severely agnosic, she is not completely lacking in conscious visual experience. Her capacities to perceive colours and textures are almost completely preserved. (Why just these sub-modules in her temporal cortex should have been spared is not known.) As a result, she can sometimes guess the identity of a presented object - recognizing a banana, say, from its yellow colour and the distinctive texture of its surface. Nevertheless, she is unable to perceive the shape of the banana (whether straight or curved, say) or its orientation (upright or horizontal). Yet many of her sensorimotor abilities are close to normal - she is able to reach out and grasp the banana, orienting her hand and wrist appropriately for its position and orientation, and using a normal and appropriate finger grip. Under experimental conditions it turns out that although D.F. is at chance in identifying the orientation of a broad line or letter box, she is almost normal when posting a letter through a similarly-shaped slot oriented at random angles. In the same way, although she is at chance when trying to discriminate between rectangular forms of very different sizes, her reaching and grasping behaviours when asked to pick up such a form are virtually indistinguishable from those of normal controls. It is very hard to make sense of this data without supposing that the sensorimotor perceptual system is functionally and anatomically distinct from the object-recognition/conscious system.

There is a powerful case, then, for thinking that there are non-conscious as well as conscious visual percepts. While the perceptions that ground your thoughts when you plan in relation to the perceived environment (‘I'll pick up that one') may be conscious, and while you will continue to enjoy conscious perceptions of what you are doing while you act, the perceptual states that actually guide the details of your movements when you reach out and grab the object will not be conscious ones, if Milner and Goodale (1995) are correct.

But what implication does this have for phenomenal consciousness? Must these non-conscious percepts also be lacking in phenomenal properties? Most people think so. While it may be possible to get oneself to believe that the perceptions of the absent-minded car driver can remain phenomenally conscious (perhaps lying outside of the focus of attention, or being instantly forgotten), it is very hard to believe that either blind-sight percepts or D.F.'s sensorimotor perceptual states might be phenomenally conscious ones. For these perceptions are ones to which the subjects of those states are blind, and of which they cannot be aware. And the question, then, is what makes the relevant difference? What is it about a conscious perception that renders it phenomenal, which a blind-sight perceptual state would correspondingly lack? Higher-order theorists are united in thinking that the relevant difference consists in the presence of something higher-order in the first case that is absent in the second. The core intuition is that a phenomenally conscious state will be a state of which the subject is aware.

What options does a first-order theorist have to resist this conclusion? One is to deny the data: it can be said that the non-conscious states in question lack the kind of fineness of grain and richness of content necessary to count as genuinely perceptual states. On this view, the contrast discussed above isn't really a difference between conscious and non-conscious perceptions, but rather between conscious perceptions, on the one hand, and non-conscious belief-like states, on the other. Another option is to accept the distinction between conscious and non-conscious perceptions, and then to explain that distinction in first-order terms. It might be said, for example, that conscious perceptions are those that are available to belief and thought, whereas non-conscious ones are those that are available to guide movement. A final option is to bite the bullet, and insist that blind-sight and sensorimotor perceptual states are indeed phenomenally conscious while not being access-conscious. On this account, blind-sight percepts are phenomenally conscious states to which the subjects of those states are blind. Higher-order theorists will argue, of course, that none of these alternatives is acceptable.

In general, then, higher-order theories of phenomenal consciousness claim the following: A phenomenally conscious mental state is a mental state (of a certain sort - see below) which either is, or is disposed to be, the object of a higher-order representation of a certain sort. Higher-order theorists will allow, of course, that mental states can be targets of higher-order representation without being phenomenally conscious. For example, a belief can give rise to a higher-order belief without thereby being phenomenally conscious. What is distinctive of phenomenal consciousness is that the states in question should be perceptual or quasi-perceptual ones (e.g., visual images as well as visual percepts). Moreover, most cognitive/representational theorists will maintain that these states must possess a certain kind of analog (fine-grained) or non-conceptual intentional content. What makes perceptual states, mental images, bodily sensations, and emotions phenomenally conscious, on this approach, is that they are states with analog or non-conceptual contents of which the subject is aware. So putting these points together, we get the view that phenomenally conscious states are those states that possess fine-grained intentional contents of which the subject is aware, being the target or potential target of some sort of higher-order representation.

There are then two main dimensions along which higher-order theorists disagree among themselves. One relates to whether the higher-order states in question are belief-like or perception-like. Those taking the former option are higher-order thought theorists, and those taking the latter are higher-order experience or ‘inner-sense’ theorists. The other disagreement is internal to higher-order thought approaches, and concerns whether the relevant relation between the first-order state and the higher-order thought is one of availability or not. That is, the question is whether a state is conscious by virtue of being disposed to give rise to a higher-order thought, or rather by virtue of being the actual target of such a thought. These are the options that will now concern us.

According to this view, humans not only have first-order non-conceptual and/or analog perceptions of states of their environments and bodies, they also have second-order non-conceptual and/or analog perceptions of their first-order states of perception. Humans (and perhaps other animals) not only have sense-organs that scan the environment/body to produce fine-grained representations that can then serve to ground thoughts and action-planning, but they also have inner senses, charged with scanning the outputs of the first-order senses (i.e., perceptual experiences) to produce equally fine-grained, but higher-order, representations of those outputs (i.e., to produce higher-order experiences). A version of this view was first proposed by the British Empiricist philosopher John Locke (1690). In our own time it has been defended especially by Armstrong.

(A terminological point: this view is sometimes called a ‘higher-order experience (HOE) theory’ of phenomenal consciousness; but the term ‘inner-sense theory’ is more accurate. For as we will see in section 5, there are versions of the higher-order thought (HOT) approach that also implicate higher-order perceptions, but without needing to appeal to any organs of inner sense.)

(Another terminological point: ‘inner-sense theory’ should more strictly be called ‘higher-order-sense theory’, since we of course have senses that are physically ‘inner’, such as pain-perception and internal touch-perception, which are not intended to fall under its scope. For these are first-order senses on a par with vision and hearing, differing only in that their purpose is to detect properties of the body rather than of the external world. According to the sort of higher-order theory under discussion in this section, these senses, too, will need to have their outputs scanned to produce higher-order analog contents in order for them to become phenomenally conscious. In what follows, however, the term ‘inner sense’ will be used to mean, more strictly, ‘higher-order sense’, since this terminology is now pretty firmly established.)

A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is in turn the target of a higher-order analog/non-conceptual intentional state, via the operations of a faculty of ‘inner sense’.

On this account, the difference between a phenomenally conscious percept of red and the sort of non-conscious percepts of red that guide the guesses of the blind-sighted and the activity of the sensorimotor system is as follows. The former is scanned by our inner senses to produce a higher-order analog state with the content experience of red or seems red, whereas the latter states are not - they remain merely first-order states with the analog content red. In so remaining, they lack any dimension of seeming or subjectivity. According to inner-sense theory, it is the higher-order experiences produced by the operations of our inner senses that make some mental states with analog contents, but not others, available to their subjects. And these same higher-order contents constitute the subjective dimension or ‘feel’ of the former set of states, thus rendering them phenomenally conscious.

One of the main advantages of inner-sense theory is that it can explain how it is possible for us to acquire purely recognitional concepts of experience. For if we possess higher-order perceptual contents, then it should be possible for us to learn to recognize the occurrence of our own perceptual states immediately - or ‘straight off’ - grounded in those higher-order analog contents. And this should be possible without those recognitional concepts thereby having any conceptual connections with our beliefs about the nature or content of the states recognized, nor with any of our surrounding mental concepts. This is then how inner-sense theory will claim to explain the familiar philosophical thought-experiments concerning one's own experiences, which are supposed to cause such problems for physicalist/naturalistic accounts of the mind.

For example, I can think, ‘This type of experience [as of red] might have occurred in me, or might normally occur in others, in the absence of any of its actual causes and effects.’ So on any view of intentional content that sees content as tied to normal causes (i.e., to information carried) and/or to normal effects (i.e., to teleology or inferential role), this type of experience might occur without representing red. In the same sort of way, I will be able to think, ‘This type of experience [pain] might have occurred in me, or might occur in others, in the absence of any of the usual causes and effects of pains. There could be someone in whom these experiences occur but who isn't bothered by them, and where those experiences are never caused by tissue damage or other forms of bodily insult. And conversely, there could be someone who behaves and acts just as I do when in pain, and in response to the same physical causes, but who is never subject to this type of experience.’ If we possess purely recognitional concepts of experience, grounded in higher-order percepts of those experiences, then the thinkability of such thoughts is both readily explicable, and apparently unthreatening to a naturalistic approach to the mind.

Inner-sense theory does face a number of difficulties, however. If inner-sense theory were true, then how is it that there is no phenomenology distinctive of inner sense, in the way that there is a phenomenology associated with each outer sense? Since each of the outer senses gives rise to a distinctive set of phenomenological properties, you might expect that if there were such a thing as inner sense, then there would also be a phenomenology distinctive of its operation. But there doesn't appear to be any.

This point turns on the so-called ‘transparency’ of our perceptual experience (Harman, 1990). Concentrate as hard as you like on your ‘outer’ (first-order) experiences - you will not find any further phenomenological properties arising out of the attention you pay to them, beyond those already belonging to the contents of the experiences themselves. Paying close attention to your experience of the colour of the red rose, for example, just produces attention to the redness - a property of the rose. But put like this, the objection just seems to beg the question in favour of first-order theories of phenomenal consciousness. It assumes that first-order - ‘outer’ - perceptions already have a phenomenology independently of their targeting by inner sense. But this is just what an inner-sense theorist will deny. And then in order to explain the absence of any kind of higher-order phenomenology, an inner-sense theorist only needs to maintain that our higher-order experiences are never themselves targeted by an inner-sense-organ that might produce third-order analog representations of them in turn.

Another objection to inner-sense theory is as follows: if there really were an organ of inner sense, then it ought to be possible for it to malfunction, just as our first-order senses sometimes do. And in that case, it ought to be possible for someone to have a first-order percept with the analog content red causing a higher-order percept with the analog content seems-orange. Someone in this situation would be disposed to judge, ‘It is red’, immediately and non-inferentially (i.e., not influenced by beliefs about the object's normal colour or their own physical state). But at the same time they would be disposed to judge, ‘It seems orange’. Not only does this sort of thing never apparently occur, but the idea that it might do so conflicts with a powerful intuition. This is that our awareness of our own experiences is immediate, in such a way that to believe that you are undergoing an experience of a certain sort is to be undergoing an experience of that sort. But if inner-sense theory is correct, then it ought to be possible for someone to believe that they are in a state of seeming-orange when they are actually in a state of seeming-red.

A different sort of objection to inner-sense theory is developed by Carruthers (2000). It starts from the fact that the internal monitors postulated by such theories would need to have considerable computational complexity in order to generate the requisite higher-order experiences. In order to perceive an experience, the organism would need to have mechanisms to generate a set of internal representations with an analog or non-conceptual content representing the content of that experience, in all its richness and fine-grained detail. And notice that any inner scanner would have to be a physical device that depends upon the detection of those physical events in the brain that are the outputs of the various sensory systems (just as the visual system is a physical device that depends upon detection of physical properties of surfaces via the reflection of light). For it is hard to see how any inner scanner could detect the presence of an experience as experience. Rather, it would have to detect the physical realizations of experiences in the brain, and construct the requisite higher-order representation of the experiences that those physical events realize, on the basis of that physical-information input. This makes it seem inevitable that the scanning device that supposedly generates higher-order experiences of our first-order visual experience would have to be almost as sophisticated and complex as the visual system itself.

Now the problem that arises here is this. Given this complexity in the operations of our organs of inner sense, there had better be some plausible story to tell about the evolutionary pressures that led to their construction. For natural selection is the only theory that can explain the existence of organized functional complexity in nature. But there would seem to be no such stories on the market. The most plausible suggestion is that inner sense might have evolved to subserve our capacity to think about the mental states of conspecifics, thus enabling us to predict their actions and manipulate their responses. (This is the so-called ‘Machiavellian hypothesis’ to explain the evolution of intelligence in the great-ape lineage.) But this suggestion presupposes that the organism must already have some capacity for higher-order thought, since such thoughts are what inner sense is here supposed to subserve. In which case higher-order thought theories can claim all of the advantages of inner-sense theory as an explanation of phenomenal consciousness, but without the need to postulate any ‘inner scanners’. At any rate, the ‘computational complexity objection’ to inner-sense theories remains as a challenge to be answered.

Non-dispositionalist higher-order thought (HOT) theory is a proposal about the nature of state-consciousness in general, of which phenomenal consciousness is but one species. Its main proponent has been Rosenthal. The proposal is this: a conscious mental state M, of mine, is a state that is actually causing an activated belief (generally a non-conscious one) that I have M, and causing it non-inferentially. (The qualification concerning non-inferential causation is included to avoid one having to say that my non-conscious motives become conscious when I learn of them under psychoanalysis, or that my jealousy is conscious when I learn of it by interpreting my own behaviour.) An account of phenomenal consciousness can then be generated by stipulating that the mental state M should have an analog content in order to count as an experience, and that when M is an experience (or a mental image, bodily sensation, or emotion), it will be phenomenally conscious when (and only when) suitably targeted.

A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is the object of a higher-order thought, and which causes that thought non-inferentially.

This account avoids some of the difficulties inherent in inner-sense theory, while retaining the latter's ability to explain the distinction between conscious and non-conscious perceptions. (Conscious perceptions will be analog states that are targeted by a higher-order thought, whereas perceptions such as those involved in blind-sight will be non-conscious by virtue of not being so targeted.) In particular, it is easy to see a function for higher-order thoughts, in general, and to tell a story about their likely evolution. A capacity to entertain higher-order thoughts about experiences would enable a creature to negotiate the is and seems distinction, perhaps learning not to trust its own experiences in certain circumstances, and to induce appearances in others, by deceit. And a capacity to entertain higher-order thoughts about thoughts (beliefs and desires) would enable a creature to reflect on, and to alter, its own beliefs and patterns of reasoning, as well as to predict and manipulate the thoughts and behaviours of others. Indeed, it can plausibly be claimed that it is our capacity to target higher-order thoughts on our own mental states that underlies our status as rational agents. One well-known objection to this sort of higher-order thought theory is due to Dretske (1993). We are asked to imagine a case in which we carefully examine two line-drawings, say (or in Dretske's example, two patterns of differently-sized spots). These drawings are similar in almost all respects, but differ in just one aspect - in Dretske's example, one of the pictures contains a black spot that the other lacks. It is surely plausible that, in the course of examining these two pictures, one will have enjoyed a conscious visual experience of the respect in which they differ - e.g., of the offending spot. But, as is familiar, one can be in this position while not knowing that the two pictures are different, or in what way they are different.
In which case, since one can have a conscious experience (e.g., of the spot) without being aware that one is having it, consciousness cannot require higher-order awareness.

Replies to this objection have been made by Seager (1994) and by Byrne (1997). They point out that it is one thing to have a conscious experience of the aspect that differentiates the two pictures, and quite another to experience consciously that the two pictures are differentiated by that aspect. That is, seeing the extra spot in one picture needn't mean seeing that this is the difference between the two pictures. So while scanning the two pictures one will enjoy conscious experience of the extra spot. A higher-order thought theorist will say that this means undergoing a percept with the content spot here which forms the target of a higher-order belief that one is undergoing a perception with that content. But this can perfectly well be true without undergoing a percept with the content spot here in this picture but absent here in that one. And it can also be true without forming any higher-order belief to the effect that one is undergoing a perception with the content spot here when looking at a given picture but not when looking at the other. In which case the purported counter-example isn't really a counter-example.

A different sort of problem with the Non-dispositionalist version of higher-order thought theory relates to the huge number of beliefs that would have to be caused by any given phenomenally conscious experience. (This is the analogue of the ‘computational complexity’ objection to inner-sense theory.) Consider just how rich and detailed a conscious experience can be. It would seem that there can be an immense amount of which we can be consciously aware at any one time. Imagine looking down on a city from a window high up in a tower-block, for example. In such a case you can have phenomenally conscious percepts of a complex distribution of trees, roads, and buildings, colours on the ground and in the sky above, moving cars and pedestrians, . . . and so on. And you can - it seems - be conscious of all of this simultaneously. According to Non-dispositionalist higher-order thought theory, then, you would need to have a distinct activated higher-order belief for each distinct aspect of your experience - either that, or just a few such beliefs with immensely complex contents. Either way, the objection remains the same: it seems implausible that all of this higher-order activity should be taking place, even if non-consciously, every time someone is the subject of a complex conscious experience. For what would be the point? And think of the amount of cognitive space that these beliefs would take up!

This objection to Non-dispositionalist forms of higher-order thought theory is considered at some length in Carruthers (2000), where a variety of possible replies are discussed and evaluated. Perhaps the most plausible and challenging such reply would be to deny the main premise lying behind the objection, concerning the rich and integrated nature of phenomenally conscious experience. Rather, the theory could align itself with Dennett's (1991) conception of consciousness as highly fragmented, with multiple streams of perceptual content being processed in parallel in different regions of the brain, and with no stage at which all of these contents are routinely integrated into a phenomenally conscious perceptual manifold. Rather, contents become conscious on a piecemeal basis, as a result of internal or external probing that gives rise to a higher-order belief about the content in question. (Dennett himself sees this process as essentially linguistic, with both probes and higher-order thoughts being formulated in natural language. This variant of the view, although important in its own right, is not relevant to our present concerns.) This serves to convey to us the mere illusion of riches, because wherever we direct our attention, there we find a conscious perceptual content. It is doubtful whether this sort of ‘fragmented’ account can really explain the phenomenology of our experience, however. For it still faces the objection that the objects of attention can be immensely rich and varied at any given moment, hence requiring there to be an equally rich and varied repertoire of higher-order thoughts tokened at the same time. Think of immersing yourself in the colours and textures of a Van Gogh painting, for example, or the scene as you look out at your garden - it would seem that one can be phenomenally conscious of a highly complex set of properties, which one could not even begin to describe or conceptualize in any detail.
However, since the issues here are large and controversial, it cannot yet be concluded that Non-dispositionalist forms of higher-order thought theory have been decisively refuted.

According to all forms of dispositionalist higher-order thought theory, the conscious status of an experience consists in its availability to higher-order thought (Dennett, 1978). As with the Non-dispositionalist version of the theory, in its simplest form we have here a quite general proposal concerning the conscious status of any type of occurrent mental state, which becomes an account of phenomenal consciousness when the states in question are experiences (or images, emotions, etc.) with analog content. The proposal is this: a conscious mental event M, of mine, is one that is disposed to cause an activated belief (generally a non-conscious one) that I have M, and to cause it non-inferentially.

A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is held in a special-purpose short-term memory store in such a way as to be available to cause (non-inferentially) higher-order thoughts about any of the contents of that store.

In contrast with the Non-dispositionalist form of theory, the higher-order thoughts that render a percept conscious are not necessarily actual, but potential, on this account. So the objection now disappears, that an unbelievable amount of cognitive space would have to be taken up with every conscious experience. (There need not actually be any higher-order thought occurring, in order for a given perceptual state to count as phenomenally conscious, on this view.) So we can retain our belief in the rich and integrated nature of phenomenally conscious experience - we just have to suppose that all of the contents in question are simultaneously available to higher-order thought. Nor will there be any problem in explaining why our faculty of higher-order thought should have evolved, nor why it should have access to perceptual contents in the first place - this can be the standard sort of story in terms of Machiavellian intelligence.

It might be wondered how their mere availability to higher-order thoughts could confer on our perceptual states the positive properties distinctive of phenomenal consciousness - that is, of states having a subjective dimension, or a distinctive subjective feel. The answer may lie in the theory of content. Suppose that one agrees with Millikan (1984) that the representational content of a state depends, in part, upon the powers of the systems that consume that state. That is, suppose one thinks that what a state represents will depend, in part, on the kinds of inferences that the cognitive system is prepared to make in the presence of that state, or on the kinds of behavioural control that it can exert. In which case the presence of first-order perceptual representations to a consumer-system that can deploy a ‘theory of mind’, and which is capable of recognitional applications of theoretically-embedded concepts of experience, may be sufficient to render those representations at the same time higher-order ones. This would be what confers on our phenomenally conscious experiences the dimension of subjectivity. Each experience would at the same time (while also representing some state of the world, or of our own bodies) be a representation that we are undergoing just such an experience, by virtue of the powers of the ‘theory of mind’ consumer-system. Each percept of green, for example, would at one and the same time be an analog representation of green and an analog representation of seems green or experience of green. In fact, the attachment of a ‘theory of mind’ faculty to our perceptual systems may completely transform the contents of the latter's outputs.

This account might seem to achieve all of the benefits of inner-sense theory, but without the associated costs. (Some potential drawbacks will be noted in a moment.) In particular, we can endorse the claim that phenomenal consciousness consists in a set of higher-order perceptions. This enables us to explain, not only the difference between conscious and non-conscious perception, but also how analog states come to acquire a subjective dimension or ‘feel’. And we can also explain how it can be possible for us to acquire some purely recognitional concepts of experience (thus explaining the standard philosophical thought-experiments). But we don't have to appeal to the existence of any ‘inner scanners’ or organs of inner sense (together with their associated problems) in order to do this. Moreover, it should also be obvious why there can be no question of our higher-order contents getting out of line with their first-order counterparts, in such a way that one might be disposed to make recognitional judgments of red and seems orange at the same time. This is because the content of the higher-order experience is parasitic on the content of the first-order one, being formed from it by virtue of the latter's availability to a ‘theory of mind’ system.

On the downside, the account is not neutral on questions of semantic theory. On the contrary, it requires us to reject any form of pure input-semantics, in favour of some sort of consumer-semantics. We cannot then accept that intentional content reduces to informational content, nor that it can be explicated purely in terms of causal covariance relations to the environment. So anyone who finds such views attractive will think that the account is a hard one to swallow.

What will no doubt be seen by most people as the biggest difficulty with dispositionalist higher-order thought theory, however, is that it may have to deny phenomenal consciousness to most species of non-human animals. This objection will be discussed, among others, in the section following, since it can arguably also be raised against any form of higher-order theory.

There have been a whole host of objections raised against higher-order theories of phenomenal consciousness. Unfortunately, many of these objections, although perhaps intended as objections to higher-order theories as such, are often framed in terms of one or another particular version of such a theory. One general moral to be taken away from the present discussion should then be this: the different versions of a higher-order theory of phenomenal consciousness need to be kept distinct from one another, and critics should take care to state which version of the approach is under attack, or to frame objections that turn merely on the higher-order character of all of these approaches.

One generic objection is that higher-order theory, when combined with plausible empirical claims about the representational powers of non-human animals, will conflict with our commonsense intuition that such animals enjoy phenomenally conscious experience. This objection can be pressed most forcefully against higher-order thought theories, of either variety; however, it is also faced by inner-sense theory (depending on what account can be offered of the evolutionary function of organs of inner sense). Since there is considerable dispute as to whether even chimpanzees have the kind of sophisticated ‘theory of mind’ which would enable them to entertain thoughts about experiential states as such (Byrne and Whiten, 1988, 1998; Povinelli, 2000), it seems most implausible that many other species of mammal (let alone reptiles, birds, and fish) would qualify as phenomenally conscious, on these accounts. Yet the intuition that such creatures enjoy phenomenally conscious experiences is a powerful and deep-seated one, for many people.

The grounds for this commonsense intuition can be challenged, however. (How, after all, are we supposed to know whether it is like something to be a bat?) And that intuition can perhaps be explained away as a mere by-product of imaginative identification with the animal. (Since our images of their experiences are phenomenally conscious, we naturally assume that the experiences imaged are similarly conscious.) But there is no doubt that one locus of resistance to higher-order theories will lie here, for many people.

Another generic objection is that higher-order approaches cannot really explain the distinctive properties of phenomenal consciousness. Whereas the argument from animals is that higher-order representations aren't necessary for phenomenal consciousness, the argument here is that such representations aren't sufficient. It is claimed, for example, that we can easily conceive of creatures who enjoy the postulated kinds of higher-order representation, related in the right sort of way to their first-order perceptual states, but where those creatures are wholly lacking in phenomenal consciousness.

In response to this objection, higher-order theorists will join forces with first-order theorists and others in claiming that these objectors pitch the standards for explaining phenomenal consciousness too high. We will insist that a reductive explanation of something - and of phenomenal consciousness in particular - doesn't have to be such that we cannot conceive of the explanandum (that which is being explained) in the absence of the explanans (that which does the explaining). Rather, we just need to have good reason to think that the explained properties are constituted by the explaining ones, in such a way that nothing else needed to be added to the world once the explaining properties were present, in order for the world to contain the target phenomenon. But this is disputed territory. And it is on this ground that the battle for phenomenal consciousness may ultimately be won or lost.

While orthodox medical research adheres to a linear, deterministic physical model, alternative therapists typically theorize that indeterminate, nonphysical, and nonlinear relationships are significant to outcome and patient satisfaction. The concept of nonlocal reality as nuocontinuum helps resolve the differences in therapeutic approach, and lets us frame a worldview that recognizes the great value of both reductive science and holistic integration. It helps distinguish the levels of description appropriate to the discussion of each, and helps in examining the relationships among consciousness, nonlocal reality, and healing.

In a recent informal discussion of the problems of evaluating alternative therapies, Dossey highlighted the stark philosophic division between orthodox and alternative health care models. While orthodox medical research adheres to a linear, deterministic physical model, alternative therapists typically postulate that indeterminate nonphysical and nonlinear relationships are significant to outcome and patient satisfaction. As Dossey summarizes that position, "Everything that counts cannot be counted."

The problems, of course, go beyond the research issues. The respective models bring different attitudes and approaches to the therapeutic encounter. Further, their different philosophic languages limit discussions among practitioners. Rapprochement becomes all the more unlikely when each camp considers the other "wrong." It may be helpful to visualize the conflict as deriving from different frames of reference. Our collective task then becomes the finding of a common frame of reference - a "cosmos in common," to echo Heraclitus - sufficiently broad and deep to encompass both linear and nonlinear, local and nonlocal therapeutic points of view.

If we are to remain true to science, we must integrate the data that science provides us, and be willing to follow where the process leads. It is increasingly apparent that physics requires us to acknowledge meta-considerations, that is, considerations that lie above and beyond physics. Those of us biomedical practitioners who base our work on physics cannot disparage as "merely metaphysics" a metaphysics to which physics itself points.

As a point of departure, I would like to "frame" in general outlines a worldview that recognizes the great value of both reductive science and holistic integration, and which helps distinguish the levels of description appropriate to the discussion of each. In doing so, I will suggest a new and unweighted ecumenical term for discussing the relationships among consciousness, nonlocal reality, and healing.

The cosmos is the general descriptive term for all-that-is, which we have come to understand as an organic system of interrelated nested subsystems. Yet its most ancient representation in art is a circle. In our ordinary positivist view of things conditioned by science, the term denotes only the material nature of the universe, governed by the laws of physics. In the ordinary local cause-effect world, time-distance relationships apply, and the speed limit is that of light. Actions are mediated through a field, and forces are dissipated over distance.

However, Bell's Theorem in quantum physics establishes that "underneath" ordinary space-time phenomena there lies a deep nonlocal reality in which none of these limitations applies. To diagram cosmos one must find an appropriate way to divide the one circle. We might add an inner concentric circle, but the cosmos, as the term is currently used, would identify only the outer material "shell" of our experience of physical things. We have no agreed technical term for that which is "more" than matter, or beyond or outside it, or inside it. Psyche has scientific validity as a psychological term. It denotes an inner personal dimension representing that aspect of experience that is normally unconscious to us, but which nevertheless influences individual human behaviour. However, in ordinary usage, the term psyche (soul, spirit) has no meaning apart from the individual human personality. To speak of the soul or spirit of matter (one hardly dares do so publicly) does not compute. Yet now physics says there is a nonlocal more to the matter-world of the cosmos, and that domain is somehow related to the existence of consciousness.

But there needs to be still another inner concentric circle, or at least a centre-point. Cosmologists are beginning to speak more openly about a purposeful cosmos. For example, Hawking has asked, "Why does the universe go to all the bother of existing?" If science is to ask "why," as Hawking does, it must seek the "meaning" of matter. But meaning ordinarily has no significance in science. To speak of meaning is to speak of significance or order beyond superficial appearances. To speak of meaning in relation to the cosmos is to speak of metaphysics, the realm of religion and philosophy.

Yet such meaning is implicit in the anthropic principle of physics, and in the strange attractors by which order emerges from chaotic chemical and nonlinear mathematical systems. Though such meaning is an idea new to modern science, religion and philosophy have variously described it as Logos, the Way, and the Word. It resides in the "lure" of an orienting aim, as described by Whitehead, and in the function of the radial energy of which Teilhard spoke.

Now, on scientific grounds alone, we must devise a "cosmorama" of at least three compartments, if it is to encompass the phenomena of the universe. Resolving and explaining these relationships may be quite complex; or it may be surprisingly simple. In any case, there are a number of questions to be answered, and a number of problems in physics and psychology that invite us to frame a unification theory.

One principal problem in quantum physics is the question of observer effect. What is the role of consciousness in resolving the uncertainties of actions at the quantum level? Before an observation, the question of whether a quantum event has occurred can be resolved only by calculating a probability. The unobserved reality of the event is that it is a mix of the probabilities that it has happened and that it has not. That "wave function" of probabilities is said to "collapse" only at the point of observation, that is, only in the interaction of the unobserved event with consciousness.

Schrödinger illustrated the problem by describing a thought experiment involving a cat in a sealed box: If the quantum event happened, the cat would be poisoned; if not, when the box was opened, the cat would be found alive. Until then, we could know the result only as a calculation of probabilities. Under the conditions Schrödinger described, we may think of the cat's condition only mathematically: the cat is both dead and alive, with equal probability. Only by the interaction of event with observer is the "wave function" collapsed.
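In standard quantum notation, the equal-probability mix Schrödinger described is a two-state superposition whose squared amplitudes give the probabilities (a sketch of the textbook formalism, not part of the original argument):

```latex
\[
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\,|\text{alive}\rangle \;+\; \tfrac{1}{\sqrt{2}}\,|\text{dead}\rangle,
\qquad
P(\text{alive}) \;=\; \Bigl|\tfrac{1}{\sqrt{2}}\Bigr|^{2} \;=\; \tfrac{1}{2}.
\]
```

Observation "collapses" this state to one term or the other; until then, only the probabilities are defined.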

If a tree falls in the forest when there is no one present to hear it, has there been a sound? That question can be resolved by adjusting the definition of sound. In the question of the quantum "event in the box" we are dealing with something much more fundamental. Can creation occur without an observer? Without consciousness? Or without at least the prospect of consciousness emerging from the act of creation? That may be the most basic question that begs resolving.

Another of our unification problems is the virtual particle phenomenon. Some particles appear unpredictably, exist for extremely short periods of time, then disappear. Why does a particle appear in the force field suddenly, without apparent cause? What distinguishes stable particles from the temporary ones? Something in the force field? Something related to the act of observation?

Another major concern of physics is the unification of the elemental physical forces. Study of the "several" forces has progressively merged them. Electricity and magnetism came to be understood as one force, not two. More recently, effects associated with the weak nuclear force were reconciled with electromagnetism, so that now we recognize one electroweak force. Further, there have been mathematical demonstrations that unify the electroweak and the strong nuclear force.

If it could be demonstrated that the "electronuclear" force and the force of gravity are one super force (as has been widely expected), energy effects at the largest and the smallest scales of the universe would be explained. That unification process has led to a theory of a multidimensional universe, in which there are at least seven "extra" dimensions that account for the forces and the conservation laws (symmetries) of physics. They are not extra dimensions of space-time, for which one could devise bizarre travel itineraries, but abstract mathematical dimensions that in some sense constitute the nonlocal (non-space time) reality within which cosmos resides.

However, the search for a unified theory has led to an apparent impasse, for theories of unification seem also to require a continuing proliferation of particles. A new messenger particle (or class of particles), called the Higgs boson, seems to be needed to explain how particles acquire mass, and to avoid having infinity terms (the result of a division by zero) crop up in the formulas that unify the forces. Leon Lederman, experimental physicist and Nobelist, calls it "The God Particle." He writes, "The Higgs field, the standard model, and our picture of how God made the universe depend on finding the Higgs boson."

Still, major questions remain. To some, particle physics has seemed to reach its limit, theoretically as well as experimentally. Oxford physicist Roger Penrose has written: "If there is to be a final theory, it could only be a scheme of a very different nature. Rather than being a physical theory in the ordinary sense, it would have to be a principle - a mathematical principle whose implementation might itself involve nonmechanical subtlety."

Perhaps the time has come for us to accept that cosmos has "infinity terms" after all.

Psychology is conventionally defined as the study of behaviour, but for our purposes, it must be returned to the meaning implied in the roots of the word: the study of soul and spirit. Of course, the most obvious phenomenon of psychology is the emergence of consciousness. In the light of the anthropic principle of physics, we now must ask, as a distinctively psychological question, what purpose for the cosmos does consciousness serve?

Another question: Jung has presented the evidence for an archetypal collective unconscious that, on the basis of current understandings, must certainly be inherited as the base-content of human nature. Archetypal genetics has yet to be defined. Symbol processing certainly does have its "local" physical aspect, in the function of the brain and the whole-body physiology that supports it. Nonetheless, that there is a nonlocal reality undergirding psyche is readily evident.

The reality of the dream experience is nonlocal, unconfined by the rules of time, space, and normal causality. Further, it is nonlocal in that the reality extends beyond the individual, consistently following patterns evident throughout the recorded history of dream and myth. The psyche functions as though the brain, or at least its mechanisms of consciousness, is "observer" for the dream "event in the box" of an unconscious nonlocal collective reality. The archetypal unconscious suggests that there is a psychological substrate from which consciousness and its content have emerged.

In the emergence of consciousness primally, and in the extension of consciousness in modern people through the dreaming process, the collective unconscious (self) seems to serve a nonlocal integrating function, yielding images that the conscious (ego) must differentiate from its "local" observations of the external space-time world. Thus is consciousness extended.

In that process, however, the ego must self-reflectively also "keep in mind" that our perception of the external physical world is not the reality of the physical world, but an interpretation of it; nor is the external phenomenal world the only reality. To keep our interpretations of the physical world "honest," we must subject observation to tests of consistency and reason, but the calculus of consciousness is the calculus of whole process, both differential and integral. Consciousness cannot be extended, but is diminished, when it denies the reality of the unconscious.

Jung has also pointed to certain meaningful associations between events in psyche and events in the physical world which are not related causally. He called such an association a synchronicity, which he defined as "an acausal connecting principle." These are simultaneous or closely associated events that cannot have been connected physically, in any ordinary cause-effect way. However, they are connected meaningfully, that is, psychically. They may have a very powerful impact on a person's psychic state and on the subsequent unfolding of personality. Jung studied them with Wolfgang Pauli, a quantum physicist in whose life such phenomena were unusually frequent.

A synchronicity seems to suggest that a nonlocal psychological reality either communicates with or is identical to the nonlocal reality known in physics. Since it is inconceivable to have two nonlocal realities coexisting separately from one another, we can confidently assert that there is, indeed, only one nonlocal reality.

Another set of phenomena inviting consideration is that which includes group hysteria and mob action. A classic example is that of a high school band on a bus trip, on which all members get "food poisoning" simultaneously before a big game. After exhaustive epidemiological work, no evidence of infection or toxins is found, and the "cause" is attributed to stress and the power of suggestion. The mechanisms are entirely unconscious to the band members; it is as though their psyches have "communicated" in a way that makes them act together. Similarly, in mob action, though the members may be conscious of the anger that moves them, generally the event seems to be loaded with an unconscious dynamic within the group that prepares the way for the event itself.

Physicist Paul Davies has written that one of the basic problems is constructing an adequate definition of dimensionality. The ordinary dictionary definition describes a dimension in terms of magnitude or direction (height, depth, width), and we ordinarily think of the dimensions as perpendicular to each other. But that works only for the familiar spatial dimensions and the actions of ordinary objects. Imagine compressing all three-dimensional space toward a single point; as it comes close to a point, the concept of being perpendicular loses all meaning. Another problem is that it does not really make sense to think of time (which is a dimension, too) as perpendicular to anything.

A dimension is one of the domains of action permitted to or on an object. By domain I mean something like a field of influence or action. Verticality is not a thing that acts on an object, but is rather that which permits and influences movement in space, and which influences our description of the movement. For example, verticality is one particular aspect of abstract reality that determines the behaviour of an object. But the abstract is real! Take verticality away from three-dimensional space, and an object is permitted to move only in a way that we can analyse as a mix of horizontal and forward-backward motions. Take the horizontal away, and the object may move only along a straight line (one-dimensional space). "String" theories, which approach a "Grand Unification" of all of the physical forces, posit dimensions beyond the four of space-time. There is no theoretical limit to the number of dimensions, for external to space-time there is no concept of "container" or limit.
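The "take a dimension away" exercise can be toyed with computationally. The sketch below is purely illustrative (the function name and coordinate ordering are choices of the example, not anything from the text): removing a dimension is modelled as zeroing one coordinate, after which the object's motion is confined to the surviving domains of action.

```python
def remove_dimension(point, axis):
    """Model 'taking a dimension away' by zeroing that coordinate:
    the object's motion is then confined to the remaining domains of action."""
    return tuple(0.0 if i == axis else c for i, c in enumerate(point))

p = (2.0, 5.0, 3.0)                           # x (horizontal), y (vertical), z (forward-backward)
no_vertical = remove_dimension(p, 1)          # motion is now a mix of horizontal and forward-backward
line_only = remove_dimension(no_vertical, 0)  # only forward-backward motion remains (one dimension)

print(no_vertical)  # (2.0, 0.0, 3.0)
print(line_only)    # (0.0, 0.0, 3.0)
```

Each removal is a projection onto a lower-dimensional subspace, which matches the essay's point that a dimension constrains, rather than acts on, the object.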

Since all of the non-space-time dimensions, by definition, are not extended in space or time, we must conceive of them as represented by points. Since they act together on space-time, they must "intersect" or somehow communicate with the primal space-time point. For that reason (and because in the absence of space-time no point can be offset from another), we must imagine the dimensions as many points superimposed into one. Let's call it the SuperPoint. We may in fact imagine as many superimposed points (dimensions) as past and future experiments might require to explain the phenomena of creation.

The initial conditions of our space-time universe are defined in that one SuperPoint; the Big Bang represents the explosive expansion of four of those dimensions, space-time. The creation-energy (super force) responsible for that expansion is concentrated in and at the multidimensional SuperPoint. Yet we must also think of other changes at the SuperPoint, for as energy levels dissipate immediately after the Big Bang, the super force quickly "evolves" into the four physical forces conventionally known.

We have said that only the space-time dimensions are expanding, because the force dimensions ("contained" in the SuperPoint) are not spatial. By definition, we may not imagine non-space-time points as extended in space. However, all points in expanding space-time must still "communicate" with the force dimensions (and the symmetry dimensions, but we are neglecting them for the moment). All points in space-time must intersect the force dimensions.

It is as if the force dimensions too have been expanded to the size of space-time, for they are acting on each particle of energy/matter in the universe. One might imagine that one point has been stretched as a featureless elastic sheet, a continuum in which the point is everywhere the same.

However, quantum theory deals with these forces as discrete waves/particles. For example, the force of gravity is communicated by gravitons, the strong nuclear force by gluons, and the electromagnetic force by photons. If we conceive the stretched points of the dimensions as "sheets," the sheets must have waves in them. These "stretched sheets" constitute the field in which energy interacts with particles to sustain (and indeed, to continue the creation of) the universe. As I have expressed it in a poem, it is the field "where the forces play pinball / with gravitons and gluons / and modulate / the all."

Let us imagine again that space-time (four dimensions) is compressed toward a point. It is futile to ask what is outside that small pellet of space-time, for the concept of "outsideness" has no meaning except within space-time. As the pellet becomes smaller still, it shrinks toward nothingness, for a point is an abstract concept of zero dimensions, not extended in space or time, and thus it cannot "contain" anything. At that point, nothing exists except the thinker who is trying to imagine nothingness.

If we model thought as only an epiphenomenon of matter, emerging at a certain degree of complexity, then it has no fundamental reality of its own. In that case, our thought experiment to shrink the cosmos reaches a point at which thought is extinguished, and the experiment must stop, if it is to follow the "rules" that it is modelling. However, by accepting that thought might have a reality of its own, and by considering the problem from a whole-system perspective, we are able to continue the thought experiment to the point at which only the thought remains. The epiphenomenon idea is not an adequate model of reality, since we can indeed continue the experiment under the conditions outlined.

This "negative proof" is indirect, serving only to eliminate the epiphenomenon model. It does not prove that there is an independent and fundamental reality beyond space-time and matter; the experiments supporting Bell's Theorem do that. This line of thinking, however, does lead us to suggest that thought is a primary aspect of reality. It seems that the cosmos itself is saying with Descartes, "I think, therefore I am."

Because of this inescapable "relativistic" connection between cosmos and thought, I cannot imagine creation ex nihilo (from nothing), for the concept of nothing always collides with the existence of the one who is the thinker. Nothing has any meaning apart from something. The dimension of thinking is required to imagine a zero-dimensional space-time.

The epiphenomenon model posits that nothing is defined as the absence of matter. If that is so, thought is nothing; however, if it were nothing, I could not be thinking that thought, so thought must be of something. There can be no nothingness, for even if all that exists is reduced to nothingness, a dimension of reality remains. Reality requires at least one dimension in addition to space-time, and that reality seems inseparable from the dimension of thought.

What is missing from our existing scheme of dimensions is a description of that dimension that we could not eliminate by playing the videotape of creation in reverse: that reality at the SuperPoint from which the dimension of thought cannot be separated. That leads to a rather extravagant and intuitive proposal, following Anaxagoras: Thought is the missing particle, the missing dimension.

Quantum physics already acknowledges the importance of consciousness as "observer." Consciousness is the substrate of thought. Thought is consciousness dimensionally extended, whether in time or some other dimension. Thought is process. Any unification of the laws of physics must necessarily take into account the thought/consciousness dimension, and thus must unify physics with psyche as well.

In his book The Self-Aware Universe, Amit Goswami uses the term consciousness to mean transcendental consciousness, which forms (or is) the nonlocal reality. Other physicists seem to define the term more cautiously, and one often wonders whether a given text about observer effect is referring to ordinary individual awareness, or to some more general property of psyche.

It is useful to preserve the important distinction between consciousness and unconsciousness. Psychologically, ordinary human consciousness is the realm of ego and the cognitive functions called mind. Neurologically, it refers to a patient's observed state of awareness. The clinical unconscious is the realm of psyche, with both personal and collective aspects. Perhaps a better language will come along in time. Until then, let me suggest an interim language for discussing, and perhaps a framework for someday testing, the relationship between matter and psyche. Its proposal is that there is a unit of psyche, which I designate the nuon, from the Greek word nous, for mind. Nuons represent the dimensions of thought that exist in (at, as) the SuperPoint defining the initial conditions of the Big Bang. As the domain of the force dimensions, those Nuons must be imagined to expand as a field or continuum (the nuocontinuum) as the space-time continuum expands, a "stretched sheet" with "waves" which are also Nuons. The Nuons of the SuperPoint are extended in space-time in a way conceptually analogous to the action of the forces.

Yet Nuons must also be construed as the domain of the symmetries, such as the principle of conservation of energy, which are nonlocal. That is, they are everywhere in effect, without being constrained by the speed limit of light. The nuocontinuum thus represents a multidimensional bridge between forces, symmetries, and space-time. Nuons collectively contain all potentialities, but the collective (nuocontinuum) is the unit, itself the symmetry that unifies the forces and symmetries. The nuon is the "infinity particle" which solves the formulas.

Does the nuocontinuum represent a fractal (fractional dimensions) such as those that give the mathematical order to the "chaos" images? Does it provide the prime tone of which the symmetries and the forces are harmonics? Whether construed mathematically or poetically, the nuocontinuum contains the information necessary to create a universe, but a universe that is organically creating itself.

Human awareness, which occurs at a level of extraordinary complexity in the organization of space-time particles, would involve, not a "creation" of consciousness as an epiphenomenon, but a sensing of a quality that is already there, as the reality dimension of the cosmos. The observer effect at the quantum level (and the health of Schrödinger's cat) is then to be understood as an interaction, not with a particle of concrete matter, but with the reality substrate from which matter arises.

If we construe the whole nuocontinuum (rather than the experimenter) to be the "observer" of the quantum event in the box, we avoid much of the confusion and exasperation that Schrödinger's thought experiment evokes. Hawking wrote, "When I hear of Schrödinger's cat, I reach for my gun." Even Einstein was repelled by quantum uncertainty. De Broglie especially held out for an interpretation of quantum physics which supported concreteness. We rebel against the idea of a universe based on uncertainty, and we seek to assure ourselves that what we experience is a concrete reality.

However, if the nuocontinuum is the observer that resolves the quantum uncertainty, our own individual sense of uncertainty is also resolved. The collapse of the particle wave function (the coming into being of the particle at a particular point in space-time) would be a function of the nuocontinuum acting as a whole, rather than as a local observer. The nuocontinuum is the observer who actualized creation - the cosmic "event in the box" - prior to the development of human consciousness. It is that cosmic observer who unifies the quantum effects of the electronuclear forces and the cosmic effects of gravity.

The nuocontinuum, then, designates an unlimited, infinite connecting principle that binds all that is. Because it accounts for the material characteristics of the cosmos, it is "Creator." Because it presents itself through the agency of human consciousness, it may be sensed as Person and named Holy Spirit or Great Mystery. It is the source of that compelling "passion" of which Teilhard spoke, "to become one with the world that envelops us." Thus, though well beyond the scope of this article, the concept has implications for depth psychology and for theology. It has the potential to help humans globally recapture a sense of meaning in human life, and to understand the experiences of those whose terminologies differ. Unless we do so, or at least a critical mass of us does, we remain at great risk of destroying ourselves.

But its implications for the healing arts are also profound, for it makes us look at familiar concepts in quite a different light. In its affirmation of meaningful order in the cosmos as a whole, the nuocontinuum concept gives further definition and import to homeostasis as a healing, balancing principle that has more than physiological significance. When we invoke the term "placebo effect" we (usually unwittingly) are invoking a principle of connectedness between an intervention and an effect, which now can be named and conceptualized. "Spontaneous remissions" of disease would be seen as something less than miracles but clearly more than merely chemical. After all, if physics can reach a limit to its powers of description, so too must psychoneuroimmunology.

As practitioners become aware of the connectedness principle, we will become more aware that our own attitudes and approaches are significant to treatment outcomes and patient satisfaction. We will then realize that even though an experiment may be "double-blind" to some experimenters and to some persons being tested, there may be other influences outside the cause-effect "loop" - connections of which other persons may be conscious. Further, we will better understand that there are different levels of connectivity at work in every action, which require different levels of description to explain. And we might become more sensitive to patients' hopes and expectations, which are so often stated in religious terms.

At this point in our harvest of knowledge, this synthesis is quite intuitive and speculative. However, even highly abstract drawings are often helpful in organizing thought. I hope that through some such synthesis as this, couched in whatever language, we will be given that courage to which Dossey alludes, to enter the "doorway through which we may encounter a radically new understanding of the physical world and our place in it." And, one hopes, assure the continued development of our abilities, together, to offer help to all in need of healing.

We collectively glorify our ability to think as the distinguishing characteristic of humanity; we personally and mistakenly glorify our thoughts as the distinguishing pattern of who we are. From the inner voice of thought-as-words to the wordless images within our minds, thoughts create and limit our personal world. Through thinking we abstract and define reality, reason about it, react to it, recall past events and plan for the future. Yet thinking remains both woefully underdeveloped in most of us, and grossly overvalued. We can best gain some perspective on thinking in terms of energies.

"We are hanging in language. We are suspended in language in such a way that we cannot say what is up and what is down," Niels Bohr lamented in the 1920s when confronted with the paradoxes, absurdities, and seeming impossibilities encountered in the then newly discovered quantum domain. The problem, he insisted, was not the quantum wonderland itself, but our language, our ways of thinking and talking about it. His colleague, Werner Heisenberg, went a step further and proclaimed that events in the quantum wonderland are not only unspeakable, they are unimaginable.

The same situation confronts us today when we try to talk about consciousness and how it relates to matter-energy. Go fishing for consciousness using the net of language and it always, inevitably, slips through the holes in our net. The limits of language - and imagination - in talk about consciousness have recently been underlined, yet again, by the exchanges between philosopher Mark Woodhouse and physician Larry Dossey in the pages of Network.

Essentially, the two men take opposing positions regarding the appropriateness of "energy talk" as a way of describing or explaining consciousness or mental phenomena. Woodhouse defends the use of energy talk (and proposes what he seems to think is a novel solution); Dossey denies the appropriateness of talking about consciousness in terms of energy. For Woodhouse, consciousness is energy ("each is the other"); for Dossey, consciousness is not energy. As a philosopher passionately committed to exploring the relationship between consciousness and matter, between mind and body, and, specifically, the question "Can we have a science of consciousness?", I think the dialogue between Woodhouse and Dossey opens up a crucially important issue for philosophy of mind and for a science of consciousness. I believe the "energy question" is central to any significant advance we may make in understanding consciousness and how it relates to the physical world.

Woodhouse, nevertheless, describes this relationship from a double-aspect perspective: "Energy is the 'outside' of consciousness and consciousness is the 'inside' of energy throughout the universe." But in making that move I believe we fall into a fundamental philosophical error. However tempting it is to frame our encounter with consciousness in the narratives of "energy talk," a double-aspect account of this kind forgoes the most important point, and thereby fails to acknowledge what is philosophically true of consciousness by virtue of its own character.

A major challenge facing philosophers and scientists of consciousness (and anybody else who wishes to talk about it) is finding appropriate concepts, words and metaphors. So much of our language is derived from our two most dominant senses: vision and touch. Vision feeds language with spatial metaphors, while touch-or rather, kinesthetics-feeds language with muscular push-pull metaphors. The visuo-muscular senses dominate our perception and interaction with the world, and consequently metaphors derived from these senses dominate our ways of conceiving and talking about the world. It is no accident that spatial and mechanical descriptions and explanations predominate in physics-the paradigm science (and our culture's paradigm for all knowledge). Given our evolutionary heritage, with its selective bias toward vision and kinesthetics, we live predominantly in a spatial-push-pull world-the world of classical mechanics, a "billiard-ball" universe of moving, colliding, and recoiling massive bodies. Ours is a world of matter in motion, of things in space acted on by physical forces.

It should not be surprising, then, that when we come to talk about consciousness, our grooves of thinking channel us toward physics talk-expressed today as "energy talk." Forces are felt-experienced in the body-and we are tempted to think that the experience of force is identical to the energy exchanges between bodies described by physics. But this is to confuse the feeler's feeling (the subject) with what is felt (the object). More on this later.

As previously mentioned, the Woodhouse-Dossey debate highlights yet again the limits of language when we try to talk about consciousness. This problem is at least as old as Descartes' mind-body dualism (though, as we will see, it is not confined to Cartesian dualism-it is there, too, in forms of idealism known as the "Perennial Philosophy"). When Descartes made his famous distinction between mind and matter, he found himself "suspended in the language" of physics. He could find no better way to define mind than negatively, in the terminology of physics. He defined matter as that which occupies space-"res extensa," extended things. He defined the mental world as "res cogitans," thinking things-and thinking things differ from physical things in that they do not occupy space. The problem was: how could material, physical things interact with nonphysical things? What conceivably could be the nature of their point of contact-material or mental? Centuries later, Freud, too, resorted to physics-energy talk when trying to specify the "mechanisms" and dynamics of the psyche-e.g., his concept of the libido. Today, the same tendency to use energy talk, as Dossey points out, is rife in much new age talk about consciousness, soul, and spirit, exemplified in Woodhouse's article and his book Paradigm Wars.

Because of our reliance on the senses of vision and kinesthetics, we have an evolutionary predisposition, it seems, to talk in the language of physics or mechanics-and by that I mean "matter talk" or "energy talk." Yet all such talk seems to miss something essential when we come to speak of phenomena in the domain of the mind-for example, emotions, desires, beliefs, pains, and other felt qualities of consciousness. The inappropriate chunkiness of mechanistic metaphors borrowed from classical physics seems obvious enough. The mind just isn't at all like matter or machines, as Descartes was keenly aware. But then came Einstein's relativity, and the quantum revolution. First, Einstein's E = mc2 showed that matter was a form of energy; then, with the advent of quantum theory, the material world began to dissolve into unimaginable, paradoxical bundles of energy or action. Matter itself was now understood to be a ghostly swirl of energy, and began to take on qualities formerly associated with mind. A great physicist, Sir James Jeans, even declared that "the universe begins to look more like a great thought." Quantum events were so tiny, so undetermined, so un-mechanical in the classical sense, they seemed just the sort of thing that could respond to the influence of the mind.

The quantum-consciousness connection was boosted further by the need (at least in one interpretation of quantum theory) to include the observer (and his/her consciousness) in any complete description of the collapse of the quantum wave function. According to this view, the quantum system must include the consciousness of the observer. Ghostly energy fields from relativity and the quantum-consciousness connection triggered the imaginations of pop-science writers and dabblers in new age pseudo-science: Quantum theory, many believe, has finally opened the way for science to explore and talk about the mind. But the excitement was-and is-premature. It involves a linguistic and conceptual sleight-of-hand: whereas the clunky mechanical language of classical physics was obviously at best metaphorical when applied to consciousness, it now seemed more reasonable to use the language of energy literally-particularly if cloaked in the "spooky" garb of quantum physics. But this shift from "metaphorical matter" to "literal energy" was unwarranted, unfounded, and deceptive.

Dissolving matter into energy makes neither of them any less physical. And the mark of the physical, as Descartes pointed out, is that it is extended in space. Despite the insuperable problems with his dualism, Descartes' key insight remains valid: What distinguishes mind from matter is precisely that it does not occupy space. And this distinction holds just as fast between mind and energy-even so-called subtle energy (hypothetical "subtle energy" bodies are described as having extension, and other spatial attributes such as waves, vibrations, and frequencies). Energy, even in the form of infinitesimal quanta or "subtle vibrations," still occupies space. And any theory of energy as a field clearly makes it spatial. Notions of "quantum consciousness" or "field consciousness"-and Woodhouse's "vibrations," "ripples," or "waves" of consciousness-therefore, are no more than vacuous jargon, because they fail to address the very distinction that Descartes formulated nearly four hundred years ago.

But that's not even the most troublesome deficiency of energy talk. Suppose physicists were able to show that quanta of energy did not occupy space; suppose the behaviour of quanta was so bizarre that they could do all sorts of "non-physical" things-such as transcend space and time; suppose, that is, it could be shown that quanta were not "physical" in Descartes' sense . . . even supposing all of this, any proposed identity between energy and consciousness would still be invalid.

Energy talk fails to account for what is most fundamentally characteristic about consciousness, namely its subjectivity. No matter how fine-grained or "subtle" energy might become, as an objective phenomenon it could never account for the fact of subjectivity-the "what-it-feels-like-from-within" of experience. Ontologically, subjectivity cannot just emerge from a wholly objective reality. Unless energy, at its ontologically most fundamental level, already came with some form of proto-consciousness, proto-experience, or proto-subjectivity, consciousness, experience, or subjectivity would never emerge or evolve in the universe.

Which brings us to Woodhouse's "energy monism" model, and the notion that "consciousness is the 'inside' of energy throughout the universe." Despite Dossey's criticism of this position, I think Woodhouse is here proposing a version of the only ontology that can account for a universe where both matter-energy and consciousness are real. He briefly summarizes why dualism, idealism, and materialism cannot adequately account for a universe consisting of both matter-energy and consciousness. (He adds "epiphenomenalism" to these three as though it were a distinct ontology. It is not. Epiphenomenalism is a form of property dualism, which in turn is a form of materialism.) He then proceeds to outline a "fifth" alternative: "energy monism." And although I believe his fundamental insight is correct, his discussion of this model in terms of double-aspectism falls victim to a common error in metaphysics: he confuses epistemology with ontology.

Woodhouse proposes that the weaknesses of the other ontologies-dualism, idealism, and materialism-can be avoided by adopting a "double-aspect theory that does not attempt to reduce either energy or consciousness to the other." And he goes on to build his alternative ontology on a double-aspect foundation. Now, I happen to be highly sympathetic with double-aspectism: It is a coherent and comprehensive (even "holistic") epistemology. As a way of knowing the world, double-aspectism opens up the possibility of a complementarity of subjective and objective perspectives.

But a perspective on the world yields epistemology-it reveals something about how we know what we know about the world. It does not reveal the nature of the world, which is the aim of ontology. Woodhouse makes an illegitimate leap from epistemology to ontology when he says, "This [energy monism] is a dualism of perspective, not of fundamental stuff," and concludes that "each is the other." Given his epistemological double-aspectism, the best Woodhouse can do is claim to be an ontological agnostic (as, in fact, Dossey does). He can talk about viewing the world from two complementary perspectives, but he cannot talk about the nature of the world in itself. Certainly, he cannot legitimately conclude from talk about aspects or perspectives that the ultimate nature of the world is "energy monism" or that "consciousness is energy." Epistemology talk cannot yield ontology talk-as Kant, and later Bohr, were well aware. Kant said we cannot know the thing-in-itself; the best we can hope for is to know some details about the instrument of knowing. Bohr said that the task of quantum physics is not to describe reality as it is in itself, but to describe what we can say about reality.

The issue of whether energy talk is appropriate for consciousness is to be resolved ontologically, not epistemologically. At issue is whether consciousness is or is not a form of energy-not whether it can be known from different perspectives. If it is a form of energy, then energy talk is legitimate. If not, energy talk is illegitimate. But the nature of consciousness is not "determined by perspective," as Woodhouse states: "insides and outsides are determined by perspectives." If "insides" (or "outsides") were merely a matter of perspective, then any ontology would do, as long as we allowed for epistemological dualism or complementarity (though, of course, the meaning of "inside" and "outside" would differ according to each ontology). What Woodhouse doesn't do (and what he needs to do to make his epistemology grow ontological legs) is establish an ontology compatible with his epistemology of "inside" and "outside." In short, he needs to establish an ontological distinction between consciousness and energy. But this is precisely what Woodhouse aims to avoid with his model of energy monism. Dossey is right, I think, to describe energy talk about consciousness as a legacy of Newtonian physics (i.e., of visuo-kinesthetic mechanics). This applies equally to "classical energy talk," "quantum-energy talk," "subtle-energy talk," and Woodhouse's "dual-aspect energy talk." In his effort to defend energy talk about consciousness, Woodhouse substitutes epistemology for ontology, and leaves the crucial issue unresolved.

Unless Woodhouse is willing to ground his double-aspect epistemology in an ontological complementarity that distinguishes mind from matter, but does not separate them, he runs the risk of unwittingly committing "reductionism all over again"-despite his best intentions. In fact, Woodhouse comes very close to proposing just the kind of complementary ontology his model needs: "Consciousness isn't just a different level or wave form of vibrating energy; it is the 'inside' of energy-the pole of interiority perfectly understandable to every person who has had a subjective experience of any kind" (emphasis added). This is ontology talk, not epistemology talk. Woodhouse's error is to claim that the distinction between "inside" (consciousness) and "outside" (energy) is merely a matter of perspective.

In order to defend his thesis of "energy monism," Woodhouse seems to want it both ways. On the one hand, he talks of consciousness and energy being ontologically identical ("each is the other"); on the other, he makes a distinction between consciousness and energy: Energy is the "outside" of consciousness and consciousness is the "inside" of energy. He attempts to avoid the looming contradiction of consciousness and energy being both "identical yet distinct" by claiming that the identity is ontological while the distinction is epistemological. But the distinction cannot be merely epistemological-otherwise, as already pointed out, any ontology would do. And this is clearly not Woodhouse's position. Energy monism, as proposed by Woodhouse, is an ontological claim. Woodhouse admits as much when he calls energy monism "a fifth alternative" to the ontologies of dualism, idealism, and materialism (and epiphenomenalism [sic]) which he previously dismissed.

Furthermore, Woodhouse's "inside" and "outside" are not merely epistemological if he means them to be synonyms for "subjectivity" and "objectivity" respectively. Although subjectivity and objectivity are epistemological perspectives, they are not only that. Subjectivity and objectivity can have epistemological meaning only if they refer to a primary ontological distinction-between what Sartre (1956) called the "for-itself" and the "in-itself," between that which feels and that which is felt. Despite his claims to the contrary, Woodhouse's distinction between "inside" and "outside" is ontological-not merely epistemological. And given such an ontological distinction between consciousness and energy, it is illegitimate to conclude from his double-aspect epistemology the identity claim that "consciousness is energy." Woodhouse's consciousness-energy confusion, it seems to me, is a result of: (1) a failure to distinguish between non-identity and separation, and (2) a desire to avoid the pitfalls of Cartesian dualism. The first is a mistake, the second is not-but he conflates the two. He seems to think that if he allows for a non-identity between consciousness and energy, this is tantamount to their being ontologically separate (as in Cartesian dualism). But (1) does not entail (2): Ontological distinction does not entail separation. It is possible to distinguish two phenomena (such as the form and substance of a thing), yet recognize them as inseparable elements of a unity. Unity does not mean identity, and distinction does not mean separation. (I will return to this point shortly.) This muddle between epistemology and ontology is my major criticism of Woodhouse's position. It could be avoided if he had the courage or foresight to follow through on his epistemological convictions, and recognize that his position is compatible with (and would be grounded by) an ontological complementarity of consciousness and energy.

The ontological level of understanding (though explicitly denied) in Woodhouse's double-aspect model-where consciousness ("inside") and energy ("outside") are actual throughout the universe-is none other than panpsychism, or what has been variously called panexperientialism (Griffin, 1997) and radical materialism (de Quincey, 1997). It is the fourth alternative to the major ontologies of dualism, idealism, and materialism, and has a very long lineage in the Western philosophical tradition-going all the way back to Aristotle and beyond to the Presocratics. Woodhouse does not acknowledge any of this lineage, as if his double-aspect model were a novel contribution to the mind-matter debate. Besides Aristotle's hylomorphism, he could have referred to Leibniz's monads, Whitehead's "actual occasions," and de Chardin's "tangential energy" and the "within" as precursors to the distinction he makes between "inside" and "outside." This oversight weakens the presentation of his case. Of course, to have introduced any or all of these mind-body theories would have made Woodhouse's ontological omission all the more noticeable.

One other weakness in Woodhouse's article is his reference to the Perennial Philosophy and the Great Chain of Being as supportive of energy talk that unites spiritual and physical realities. "The non-dual Source of some spiritual traditions . . . is said to express itself energetically (outwardly) on different levels in the Great Chain of Being (matter being the densest form of energy) . . ." Woodhouse is here referring to the many variations of idealist emanationism, in which spirit is said to pour itself forth through a sequence of ontological levels and condense into matter. But just as I have argued that Woodhouse's energy monism unwittingly entails physicalist reductionism, my criticism of emanationism is that it, too, ultimately "physicalizes" spirit-which no idealist worth his or her salt would want to claim. Energy monism runs the same risk of "physicalizing" spirit as emanationism. So I see no support for Woodhouse's position as an alternative to dualism or materialism coming from the Perennial Philosophy. Both run the risk of covert dualism or covert materialism.

Dossey's critique of Woodhouse's energy monism and energy talk-particularly his caution not to assume that the "nonlocal" phenomena of quantum physics are related to the "nonlocal" phenomena of consciousness and distant healing by anything other than a commonality of terminology-is sound. The caution is wise. However, his critique of Woodhouse's "inside" and "outside" fails to address Woodhouse's confusion of epistemology and ontology. Had Dossey seen that Woodhouse's intent was to confine the "inside/outside" distinction to epistemology, he might not have couched his critique in ontological terms. Dossey says, "By emphasizing inside and outside, interior and exterior, we merely create new boundaries and interfaces that require their own explanations." The "boundaries and interfaces" Dossey is talking about are ontological, not epistemological. And to this extent, Dossey's critique misses the fact that Woodhouse is explicitly engaged in epistemology talk. On the other hand, Dossey is correct to assume that Woodhouse's epistemological distinction between "inside" and "outside" necessarily implies an ontological distinction-between "inside" (consciousness) and "outside" (energy).

Dossey's criticism of Woodhouse's energy monism thus rests on an ontological objection: Even if we do not yet have any idea of how to talk ontologically about consciousness, we at least know that (despite Woodhouse's contrary claim) consciousness and energy are not ontologically identical. There is an ontological distinction between "inside/consciousness" and "outside/energy." Thus, Dossey concludes, energy talk (which is ontological talk) is inappropriate for consciousness. On this, I agree with Dossey, and disagree with Woodhouse. However, Dossey goes on to take issue with Woodhouse's "inside/outside" distinction as a solution to the mind-body relation. If taken literally, Dossey's criticism is valid: "Instead of grappling with the nature of the connection between energy and consciousness, we are now obliged to clarify the nature of the boundary between 'inside' and 'outside' . . ." But I suspect that Woodhouse uses the spatial concepts "inside/outside" metaphorically because, like the rest of us, he finds our language short on nonphysical metaphors (though, as we will see, nonspatial metaphors are available).

It may be, of course, that Woodhouse has not carefully thought through the implications of this spatial metaphor, and how it leaves him open to just the sort of critique that Dossey levels. Dossey, I presume, is as much concerned with Woodhouse's claim that "consciousness is energy," meaning it is the "inside" of energy, as he is about the difficulties in taking the spatial metaphor of "inside/outside" literally. On the first point, I share Dossey's concern. I am less concerned about the second. As long as we remember that talk of "interiority" and "exteriority" are metaphors, I believe they can be very useful ways of pointing toward a crucial distinction between consciousness and energy.

The metaphor becomes a problem if we slip into thinking that it points to a literal distinction between two kinds of "stuff" (as Descartes did), or indeed to a distinction revealing two aspects of a single kind of "stuff." This latter slip seems to be precisely the mistake that Woodhouse makes with his energy monism. By claiming that consciousness is energy, Woodhouse in effect-despite his best intentions to the contrary-succeeds in equating (and this means "reducing") consciousness to physical "stuff." His mistake-and one that Dossey may be buying into-is to use "stuff-talk" for consciousness. It is a logical error to conclude from (1) there is only one kind of fundamental "stuff" (call it energy), and (2) this "stuff" has an interiority (call it consciousness), that (3) the interiority is also composed of that same "stuff"-i.e., that consciousness is energy. It could be that "interiority/consciousness" is not "stuff" but something ontologically distinct-for example, feeling or process-something which is intrinsic to, and therefore inseparable from, the "stuff." It could be that the world is made up of stuff that feels, where there is an ontological distinction between the feeling (subjectivity, experience, consciousness) and what is felt (objectivity, matter-energy).

Dossey's rejection of the "inside/outside" metaphor seems to presume (à la Woodhouse) that "inside" means the interior of some "stuff" and is that "stuff"-in this case, energy-stuff. But that is not the position of panpsychist and process philosophers from Leibniz down through Bergson, James, and Whitehead, to Hartshorne and Griffin. If we make the switch from a "stuff-oriented" to a process-oriented ontology, then the kind of distinction between consciousness and energy dimly implicit in Woodhouse's model avoids the kind of criticism that Dossey levels at the "inside/outside" metaphor. Process philosophers prefer to use "time-talk" over "space-talk." Instead of talking about consciousness in terms of "insides," they talk about "moments of experience" or "duration." Thus, if we view the relationship between consciousness and energy in terms of temporal processes rather than spatial stuff, we can arrive at an ontology similar to Whitehead's, in which the relationship between consciousness and energy is understood as temporal: the relationship between subjectivity and objectivity, where the subject is the present state of an experiential process, and the object is its prior state. Substitute "present" for "interior" and "past" or "prior" for "exterior" and we have a process ontology that avoids the "boundary" difficulties raised by Dossey. (There is no boundary between past and present-the one flows into the other; the present incorporates the past.) From the perspective of panpsychism or radical materialism, consciousness and energy, mind and matter, subject and object always go together. All matter-energy is intrinsically sentient and experiential. Sentience-consciousness and matter-energy are inseparable, but nevertheless distinct. On this view, consciousness is the process of matter-energy informing itself.

Although our language is biased toward physics-energy talk, full of mechanistic metaphors, this is clearly not the whole story. The vernacular of the marketplace, as well as the language of science itself, is also rich with non-mechanistic metaphors, metaphors that flow directly from experience itself. Ironically, not only do we apply these consciousness metaphors to the mind and mental events, but also to the world of matter in our attempts to understand its deeper complexities and dynamics. For example, systems theory and evolutionary biology-even at the reductionist level of molecular genetics-are replete with words such as "codes," "information," "meaning," "self-organizing," and the p-word: "purpose." So we are not limited to mechanistic metaphors when describing either the world of matter or the world of mind. But-and this is the important point-because of our bias toward visuo-muscular images, we tend to forget that metaphors of the mind are sui generis, and, because of our scientific and philosophical bias in favour of mechanism, we often attempt to reduce metaphors of the mind to metaphors of matter. My proposal for consciousness talk is this: Recognize the limitations of mechanistic metaphors, and the inappropriateness of literal energy talk, when discussing consciousness. Instead, acknowledge the richness and appropriateness of metaphors of meaning when talking about the mind. In short: Drop mechanistic metaphors (energy talk) and take up meaning metaphors (consciousness talk) when talking about consciousness.

One of the thorniest issues in "energy" and "consciousness" work is the tendency to confuse the two. Consciousness does not equal energy, yet the two are inseparable. Consciousness is the "witness" which experiences the flow of energy, but it is not the flow of energy. We might say consciousness is the felt interiority of energy/matter - but it is not energy.

If we say that consciousness is a form of energy, then we have two options: either it is a physical form of energy (even if a very subtle energy), or it is not a physical form of energy. If we say that consciousness is a form of energy that is physical, then we are reducing consciousness (and spirit) to physics. And few of us, unless we are materialists, want to do that. If we say that consciousness is a form of energy that is not physical, then we need to say in what way psychic energy differs from physical energy. If we cannot explain what we mean by "psychic energy" and how it is different from physical energy, then we should ask ourselves why we use the term "energy" at all. Our third alternative is to say that consciousness is not a form of energy (physical or nonphysical). This is not to imply that consciousness has nothing to do with energy. In fact, the position I emphasize in my graduate classes is that consciousness and energy always go together. They cannot ever be separated. But this is not to say they are not distinct. They are distinct-energy is energy, consciousness is consciousness-but they are inseparable (like two sides of a coin, or, better, like the shape and substance of a tennis ball: you can't separate the shape from the substance of the ball, but shape and substance are definitely distinct).

So, for example, if someone has a kundalini experience, they may feel a rush of energy up the chakra system . . . but to say that the energy flow is consciousness is to mistake the object (energy flow) for the subject, that which perceives the object (consciousness). Note the two importantly distinct words in the phrase "feel the rush of energy . . ." On the one hand there is the "feeling" (or the "feeler"); on the other, there is what is being felt or experienced (the energy). Even our way of talking about it reveals that we detect a distinction between feeling (consciousness) and what we feel (energy). Yes, the two go together, but they are not the same. Unity, or unification, or holism, does not equal identity. To say that one aspect of reality (say, consciousness) cannot be separated from another aspect of reality (say, matter-energy) is not to say both aspects of reality (consciousness and matter-energy) are identical.

Consciousness is neither identical to energy (monism), nor is it a separate substance or energy in addition to physical matter or energy (dualism)-it is the "interiority," the what-it-feels-like-from-within, the subjectivity that is intrinsic to the reality of all matter and energy (panpsychism or radical materialism). If you take a moment to pay attention to what's going on in your own body right now, you'll see-or feel-what I mean: The physical matter of your body, including the flow of whatever energies are pulsing through you, is the "stuff" of your organism. But there is also a part of you that is aware of, or feels, the pumping of your blood (and other energy streams). That aspect of you that feels the matter-energy in your body is your consciousness. We could express it this way: "Consciousness is the process of matter-energy informing itself." Consciousness is the ability that matter-energy has to feel, to know, and to direct itself. The universe could be (and probably is) full of energy flows, vortices, and vibrations, but without consciousness, all this activity would be completely unfelt and unknown. Only because there is consciousness can the flow of energy be felt, known, and purposefully directed.

Over the past three decades, philosophy of science has grown increasingly "local." Concerns have switched from general features of scientific practice to concepts, issues, and puzzles specific to particular disciplines. Philosophy of neuroscience is a natural result. This emerging area was also spurred by remarkable recent growth in the neurosciences. Cognitive and computational neuroscience continues to encroach upon issues traditionally addressed within the humanities, including the nature of consciousness, action, knowledge, and normativity. Empirical discoveries about brain structure and function suggest ways that "naturalistic" programs might develop in detail, beyond the abstract philosophical considerations in their favour.

The literature distinguishes "philosophy of neuroscience" from "neurophilosophy." The former concerns foundational issues within the neurosciences. The latter concerns the application of neuroscientific concepts to traditional philosophical questions. Exploring the various concepts of representation employed in neuroscientific theories is an example of the former. Examining the implications of neurological syndromes for the concept of a unified self is an example of the latter. In this entry, we will assume this distinction and discuss examples of both.

Contrary to some opinion, actual neuroscientific discoveries have exerted little influence on the details of materialist philosophies of mind. The "neuroscientific milieu" of the past four decades has made it harder for philosophers to adopt dualism. But even the "type-type" or "central state" identity theories that rose to brief prominence in the late 1950s drew upon few actual details of the emerging neuroscience. Recall the favourite early example of a psychoneural identity claim: pain is identical to C-fibre firing. The "C fibres" turned out to be related to only a single aspect of pain transmission. Early identity theorists did not emphasize specific psychoneural identity hypotheses, admitting that their "neuro" terms were placeholders for concepts from a future neuroscience. Their arguments and motivations were philosophical, even if the ultimate justification of the program was held to be empirical.

The apology for this lacuna by early identity theorists was that neuroscience at the time was too nascent to provide any plausible identities. But potential identities were afoot. David Hubel and Torsten Wiesel's (1962) electrophysiological demonstrations of the receptive field properties of visual neurons had been reported with great fanfare. Using their techniques, neurophysiologists began discovering neurons throughout visual cortex responsive to increasingly abstract features of visual stimuli: from edges to motion direction to colours to properties of faces and hands. More notably, Donald Hebb had published The Organization of Behaviour (1949) a decade earlier. Therein he offered detailed explanations of psychological phenomena in terms of known neural mechanisms and anatomical circuits. His psychological explananda included features of perception, learning, memory, and even emotional disorders. He offered these explanations as potential identities. One philosopher who did take note of some available neuroscientific detail was Barbara Von Eckardt-Klein (1975). She discussed the identity theory with respect to sensations of touch and pressure, and incorporated then-current hypotheses about neural coding of sensation modality, intensity, duration, and location as theorized by Mountcastle, Libet, and Jasper. Yet she was a glaring exception. By and large, the neuroscience available at the time was ignored by both philosophical friends and foes of the early identity theories.

Philosophical indifference to neuroscientific detail became "principled" with the rise to prominence of functionalism in the 1970s. The functionalists' favourite argument was based on multiple realizability: a given mental state or event can be realized in a wide variety of physical types (Putnam, 1967; Fodor, 1974). So a detailed understanding of one type of realizing physical system (e.g., brains) will not shed light on the fundamental nature of mind. A psychological state-type is autonomous from any single type of its possible realizing physical mechanisms. Instead of neuroscience, scientifically minded philosophers influenced by functionalism sought evidence and inspiration from cognitive psychology and "program-writing" artificial intelligence. These disciplines abstract away from underlying physical mechanisms and emphasize the "information-bearing" properties and capacities of representations (Haugeland, 1985). At this same time, neuroscience was delving directly into cognition, especially learning and memory. For example, Eric Kandel (1976) proposed presynaptic mechanisms governing transmitter release rates as a cell-biological explanation of simple forms of associative learning. With Robert Hawkins (1984) he demonstrated how cognitivist aspects of associative learning (e.g., blocking, second-order conditioning, overshadowing) could be explained cell-biologically by sequences and combinations of these basic forms implemented in higher neural anatomies. Working on the postsynaptic side, neuroscientists began unravelling the cellular mechanisms of long-term potentiation (LTP). Physiological psychologists quickly noted its explanatory potential for various forms of learning and memory. Yet few "materialist" philosophers paid any attention. Why should they? Most were convinced functionalists, who believed that the "engineering-level" details might be important to the clinician, but were irrelevant to the theorist of mind.

A major turning point in philosophers' interest in neuroscience came with the publication of Patricia Churchland's Neurophilosophy (1986). The Churchlands (Pat and husband Paul) were already notorious for advocating eliminative materialism. In her (1986) book, Churchland distilled eliminativist arguments of the past decade, unified the pieces of the philosophy of science underlying them, and sandwiched the philosophy between a five-chapter introduction to neuroscience and a 70-page chapter on three then-current theories of brain function. She was unapologetic about her intent. She was introducing philosophy of science to neuroscientists and neuroscience to philosophers. Nothing could be more obvious, she insisted, than the relevance of empirical facts about how the brain works to concerns in the philosophy of mind. Her term for this interdisciplinary method was "co-evolution" (borrowed from biology). This method seeks resources and ideas from anywhere on the theory hierarchy above or below the question at issue. Standing on the shoulders of philosophers like Quine and Sellars, Churchland insisted that specifying some point where neuroscience ends and philosophy of science begins is hopeless because the boundaries are poorly defined. Neurophilosophers would carefully choose resources from both disciplines as they saw fit.

Three themes predominate Churchland's philosophical discussion: developing an alternative to the logical empiricist theory of intertheoretic reduction, responding to property-dualistic arguments based on subjectivity and sensory qualia, and responding to anti-reductionist multiple realizability arguments. These projects have remained central to neurophilosophy over the past decade. John Bickle (1998) extends the principal insight of Clifford Hooker's (1981) post-empiricist theory of intertheoretic reduction. He quantifies key notions using a model-theoretic account of theory structure adapted from the structuralist program in philosophy of science. He also makes explicit the form of argument scientists employ to draw ontological conclusions (cross-theoretic identities, revisions, or eliminations) based on the nature of the intertheoretic reduction relations obtaining in specific cases. For example, physicists concluded that visible light, a theoretical posit of optics, is electromagnetic radiation within specified wavelengths, a theoretical posit of electromagnetism: a cross-theoretic ontological identity. In another case, however, chemists concluded that phlogiston did not exist: an elimination of a kind from our scientific ontology. Bickle explicates the nature of the reduction relation in a specific case using a semi-formal account of ‘intertheoretic approximation’ inspired by structuralist results. Paul Churchland (1996) has carried on the attack on property-dualistic arguments for the irreducibility of conscious experience and sensory qualia. He argues that acquiring some knowledge of existing sensory neuroscience increases one's ability to ‘imagine’ or ‘conceive of’ a comprehensive neurobiological explanation of consciousness. He defends this conclusion using a thought-experiment based on the history of optics and electromagnetism. Finally, the literature critical of the multiple realizability argument has begun to flourish.
Although the multiple realizability argument remains influential among nonreductive physicalists, it no longer commands the universal acceptance it once did. Replies to the multiple realizability argument based on neuroscientific details have appeared. For example, William Bechtel and Jennifer Mundale (1997, in press) argue that neuroscientists use psychological criteria in brain mapping studies. This fact undercuts the likelihood that psychological kinds are multiply realized.

Eliminative materialism (EM) is the conjunction of two claims. First, our common sense ‘belief-desire’ conception of mental events and processes, our ‘folk psychology,’ is a false and misleading account of the causes of human behaviour. Second, like other false conceptual frameworks from both folk theory and the history of science, it will be replaced by, rather than smoothly reduced or incorporated into, a future neuroscience. Folk psychology is the collection of common homilies about the causes of human behaviour. You ask me why Marica is not accompanying me this evening. I reply that her grant deadline is looming. You nod sympathetically. You understand my explanation because you share with me a generalization that relates beliefs about looming deadlines, desires about meeting professionally and financially significant ones, and ensuing free-time behaviour. It is the collection of these kinds of homilies that EM claims to be flawed beyond significant revision. Although this example involves only beliefs and desires, folk psychology contains an extensive repertoire of propositional attitudes in its explanatory nexus: hopes, intentions, fears, imaginings, and more. To the extent that scientific psychology (and neuroscience) retains folk concepts, EM applies to it as well.

EM is physicalist in the classical sense, postulating some future brain science as the ultimately correct account of (human) behaviour. It is eliminative in predicting the future removal of folk psychological kinds from our post-neuroscientific ontology. EM proponents often employ scientific analogies. Oxidative reactions as characterized within elemental chemistry bear no resemblance to phlogiston release. Even the "direction" of the two processes differs: oxygen is gained when an object burns (or rusts), while phlogiston was said to be lost. The result of this theoretical change was the elimination of phlogiston from our scientific ontology. There is no such thing. For the same reasons, according to EM, continuing development in neuroscience will reveal that there are no such things as beliefs and desires as characterized by common sense.

Here we focus only on the way that neuroscientific results have shaped the arguments for EM. Surprisingly, only one argument has been strongly influenced. (Most arguments for EM stress the failures of folk psychology as an explanatory theory of behaviour.) This argument is based on a development in cognitive and computational neuroscience that might provide a genuine alternative to the representations and computations implicit in folk psychological generalizations. Many eliminative materialists assume that folk psychology is committed to propositional representations and computations over their contents that mimic logical inferences. Even though discovering such an alternative has been an eliminativist goal for some time, neuroscience only began delivering on this goal over the past fifteen years. Points in, and trajectories through, vector spaces, as an interpretation of synaptic events and neural activity patterns in biological neural networks, are a key feature of this development. This argument for EM hinges on the differences between these notions of cognitive representation and the propositional attitudes of folk psychology (Churchland, 1987). However, this argument will be opaque to those with no background in contemporary cognitive and computational neuroscience, so we need to present a few scientific details. With these details in place, we will return to this argument for EM.

At one level of analysis the basic computational element of a neural network (biological or artificial) is the neuron. This analysis treats neurons as simple computational devices, transforming inputs into output. Both neuronal inputs and outputs reflect biological variables. For the remainder of this discussion, we will assume that neuronal inputs are frequencies of action potentials (neuronal "spikes") in the axons whose terminal branches synapse onto the neuron in question. Neuronal output is the frequency of action potentials in the axon of the neuron in question. A neuron computes its total input (usually treated mathematically as the sum of the products of the signal strength along each input line times the synaptic weight on that line). It then computes a new activation state based on its total input and current activation state, and a new output state based on its new activation value. The neuron's output state is transmitted as a signal strength to whatever neurons its axon synapses onto. The output state reflects systematically the neuron's new activation state.
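The computation described above can be sketched in a few lines of Python. This is only an illustrative toy (the function name and the choice of a sigmoid activation are assumptions of ours, not details given in the text): a neuron sums the products of input signal strengths and synaptic weights, then produces an output from that total.

```python
import math

# Toy model of a neuron as a simple computational device: total input is
# the sum of (input signal strength x synaptic weight) over each input
# line; output is a function of that total input. The sigmoid squashing
# function is an illustrative choice of activation, not one from the text.
def neuron_output(inputs, weights):
    total_input = sum(s * w for s, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total_input))

print(neuron_output([0.0, 0.0], [1.0, 1.0]))  # zero total input gives 0.5
```

Any monotonic squashing function would serve equally well here; what matters for the discussion is only the input-to-output transformation itself.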

Analysed at this level, both biological and artificial neural networks are interpreted naturally as vector-to-vector transformers. The input vector consists of values reflecting activity patterns in axons synapsing on the network's neurons from outside (e.g., from sensory transducers or other neural networks). The output vector consists of values reflecting the activity patterns generated in the network's neurons that project beyond the net (e.g., to motor effectors or other neural networks). Given that a neuron's activity depends partly upon its total input, and total input depends partly on synaptic weights (e.g., presynaptic neurotransmitter release rate, number and efficacy of postsynaptic receptors, availability of enzymes in the synaptic cleft), the capacity of biological networks to change their synaptic weights makes them plastic vector-to-vector transformers. In principle, a biological network with plastic synapses can come to implement any vector-to-vector transformation that its composition permits (number of input units, output units, processing layers, recurrence, cross-connections, etc.).

The anatomical organization of the cerebellum provides a clear example of a network amenable to this computational interpretation. The cerebellum is the bulbous convoluted structure dorsal to the brainstem. A variety of studies (behavioural, neuropsychological, single-cell electrophysiological) implicate this structure in motor integration and fine motor coordination. Mossy fibres (axons) from neurons outside the cerebellum synapse on cerebellar granule cells, which in turn project to parallel fibres. Activity patterns across the collection of mossy fibres (frequency of action potentials per time unit in each fibre projecting into the cerebellum) provide values for the input vector. Parallel fibres make multiple synapses on the dendritic trees and cell bodies of cerebellar Purkinje neurons. Each Purkinje neuron "sums" its post-synaptic potentials (PSPs) and emits a train of action potentials down its axon based (partly) on its total input and previous activation state. Purkinje axons project outside the cerebellum. The network's output vector is thus the ordered set of values representing the pattern of activity generated in each Purkinje axon. Changes to the efficacy of individual synapses between the parallel fibres and the Purkinje neurons alter the resulting Purkinje PSPs, generating different axonal spiking frequencies. Computationally, this amounts to a different output vector from the same input activity pattern (plasticity).

This interpretation puts the useful mathematical resources of dynamical systems into the hands of computational neuroscientists. Vector spaces are an example. Learning, for instance, can be characterized fruitfully in terms of changes in synaptic weights in the network and subsequent reduction of error in network output. (This approach goes back to Hebb, 1949, although without the vector-space interpretation that follows.) A useful representation of this account is on a synaptic weight-error space, where one dimension represents the global error in the network's output to a given task, and all other dimensions represent the weight values of individual synapses in the network. Points in this multidimensional state space represent the global performance error correlated with each possible collection of synaptic weights in the network. As the weights change with each performance (in accordance with a biologically-implemented learning algorithm), the global error of network performance continually decreases. Learning is represented as synaptic weight changes correlated with a descent along the error dimension in the space (Churchland and Sejnowski, 1992). Representations (concepts) can be portrayed as partitions in multidimensional vector spaces. An example is a neuron activation vector space. A graph of such a space contains one dimension for the activation value of each neuron in the network (or some subset). A point in this space represents one possible pattern of activity in all neurons in the network. Activity patterns generated by input vectors that the network has learned to group together will cluster around a (hyper-)point or subvolume in the activity vector space. Any input pattern sufficiently similar to this group will produce an activity pattern lying in geometrical proximity to this point or subvolume.
Paul Churchland (1989) has argued that this interpretation of network activity provides a quantitative, neurally-inspired basis for prototype theories of concepts developed recently in cognitive psychology.
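The weight-error-space picture of learning can be illustrated with a toy descent loop. Everything concrete here (the linear unit, the delta-rule update, the learning rate, the two training pairs) is an assumption of ours for illustration, not a detail from the text: each weight update moves the network to a new point in weight space, and global error on the task shrinks across performances, i.e., the network descends along the error dimension.

```python
# Toy descent along the error dimension of a synaptic weight-error space.
# The linear unit, delta-rule update, learning rate, and training pairs
# are illustrative assumptions, not details from the text.
def global_error(weights, samples):
    return sum(
        (sum(w * x for w, x in zip(weights, xs)) - target) ** 2
        for xs, target in samples
    )

def train_step(weights, samples, lr=0.5):
    for xs, target in samples:
        output = sum(w * x for w, x in zip(weights, xs))
        err = output - target
        weights = [w - lr * err * x for w, x in zip(weights, xs)]
    return weights

samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
w = [0.0, 0.0]                            # one point in the weight space
initial_error = global_error(w, samples)
for _ in range(10):
    w = train_step(w, samples)            # each step is a move in weight space
final_error = global_error(w, samples)    # lower than initial_error
```

Plotting `global_error` over all possible weight pairs would give exactly the multidimensional weight-error surface described above, with training tracing a downhill path across it.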

Using this theoretical development, Paul Churchland (1987) has offered a novel argument for EM. According to this approach, activity vectors are the central kind of representation and vector-to-vector transformations are the central kind of computation in the brain. This contrasts sharply with the propositional representations and logical/semantic computations postulated by folk psychology. Vectorial content is unfamiliar and alien to common sense. This cross-theoretic difference is at least as great as that between oxidative and phlogiston concepts, or kinetic-corpuscular and caloric fluid heat concepts. Phlogiston and caloric fluid are two "parade" examples of kinds eliminated from our scientific ontology due to the nature of the intertheoretic relation obtaining between the theories with which they are affiliated and the theories that replaced them. The structural and dynamic differences between the folk psychological and emerging cognitive neuroscientific kinds suggest that the theories affiliated with the latter will also correct significantly the theory affiliated with the former. This is the key premise of an eliminativist argument based on predicted intertheoretic relations. And these intertheoretic contrasts are no longer just an eliminativist's goal. Computational and cognitive neuroscience has begun to deliver an alternative kinematics for cognition, one that provides no structural analogue for the propositional attitudes.

Certainly the replacement of propositional contents by vectorial alternatives implies significant correction to folk psychology. But does it justify EM? Even though this central feature of folk-psychological posits finds no analogue in one hot theoretical development in recent cognitive and computational neuroscience, there might be other aspects of cognition that folk psychology gets right. Within neurophilosophy, concluding that a cross-theoretic identity claim is true (e.g., folk psychological state F is identical to neural state N) or that an eliminativist claim is true (there is no such thing as folk psychological state F) depends on the nature of the intertheoretic reduction obtaining between the theories affiliated with the posits in question. But the underlying account of intertheoretic reduction recognizes a spectrum of possible reductions, ranging from relatively "smooth" through "significantly revisionary" to "extremely bumpy." Might the reduction of folk psychology and a "vectorial" neurobiology occupy the middle ground between "smooth" and "bumpy" intertheoretic reductions, and hence suggest a "revisionary" conclusion? The reduction of classical equilibrium thermodynamics to statistical mechanics to microphysics provides a potential analogy. John Bickle argues on empirical grounds that such an outcome is likely. He specifies conditions on "revisionary" reductions from historical examples and suggests that these conditions are obtaining between folk psychology and cognitive neuroscience as the latter develops. In particular, folk psychology appears to have gotten right the grossly-specified functional profile of many cognitive states, especially those closely related to sensory input and behavioural output. It also appears to get right the "intentionality" of many cognitive states - the object that the state is of or about - even though cognitive neuroscience eschews its implicit linguistic explanation of this feature.
Revisionary physicalism predicts significant conceptual change to folk psychological concepts, but denies total elimination of the caloric fluid-phlogiston variety.

The philosophy of science is another area where vector space interpretations of neural network activity patterns have impacted philosophy. In the Introduction to his (1989) book, Paul Churchland asserts that it will soon be impossible to do serious work in the philosophy of science without drawing on empirical work in the brain and behavioural sciences. To justify this claim, he suggests neurocomputational reformulations of key concepts from this area. At the heart is a neurocomputational account of the structure of scientific theories. Problems with the orthodox "sets-of-sentences" view have been known for more than three decades. Churchland advocates replacing the orthodox view with one inspired by the "vectorial" interpretation of neural network activity. Representations implemented in neural networks (as discussed above) compose a system that corresponds to important distinctions in the external environment, is not explicitly represented as such within the input corpus, and allows the trained network to respond to inputs in a fashion that continually reduces error. These are exactly the functions of theories. Churchland is bold in his assertion: an individual's theory-of-the-world is a specific point in that individual's error-synaptic weight vector space. It is a configuration of synaptic weights that partitions the individual's activation vector space into subdivisions that reduce future error messages to both familiar and novel inputs.

This reformulation invites an objection, however. Churchland boasts that his theory of theories is preferable to existing alternatives to the orthodox "sets-of-sentences" account - for example, the semantic view (Suppe, 1974; van Fraassen, 1980) - because his is closer to the "buzzing brains" that use theories. But as Bickle notes, neurocomputational models based on the mathematical resources described above are still far into the realm of abstracta. Even now, they remain little more than novel (and suggestive) applications of the mathematics of quasi-linear dynamical systems to simplified schemata of brain circuitries. Neurophilosophers owe some account of identifications across ontological categories before the philosophy of science community will accept the claim that theories are points in high-dimensional state spaces implemented in biological neural networks. (There is an important methodological assumption lurking in this objection.)

Churchland's neurocomputational reformulations of scientific and epistemological concepts build on this account of theories. He sketches neurocomputational accounts of the theory-ladenness of perception, the nature of conceptual unification, the virtues of theoretical simplicity, the nature of Kuhnian paradigms, the kinematics of conceptual change, the character of abduction, the nature of explanation, and even moral knowledge and epistemological normativity. Conceptual redeployment, for example, is the activation of an already-existing prototype representation - a central point or region of a partition of a high-dimensional vector space in a trained neural network - by a novel type of input pattern. Obviously, we can't here do justice to Churchland's various attempts at reformulation. We urge the intrigued reader to examine his suggestions in their original form. But a word about philosophical methodology is in order. Churchland is not attempting "conceptual analysis" in anything resembling its traditional philosophical sense, and neither, typically, are neurophilosophers. (This is why a discussion of neurophilosophical reformulation fits with a discussion of EM.) There are philosophers who take the discipline's ideal to be a relatively simple set of necessary and sufficient conditions, expressed in non-technical natural language, governing the application of important concepts (like justice, knowledge, theory, or explanation). These analyses should square, to the extent possible, with pre-theoretical usage. Ideally, they should preserve synonymy. Other philosophers view this ideal as sterile, misguided, and perhaps deeply mistaken about the underlying structure of human knowledge. Neurophilosophers tend to reside in the latter camp. Those who dislike philosophical speculation about the promise and potential of nascent science in an effort to reformulate ("reform-ulate") traditional philosophical concepts have probably already discovered that neurophilosophy is not for them.
But the charge that neurocomputational reformulations of the sort Churchland attempts are "philosophically uninteresting" or "irrelevant" because they fail to provide "adequate analyses" of theory, explanation, and the like will carry little weight among many contemporary philosophers, as well as their cognitive-scientific and neuroscientific friends. Before we leave the neurophilosophical applications of this theoretical development from recent cognitive/computational neuroscience, one more point of scientific detail is in order. The popularity of treating the neuron as the basic computational unit among neural modelers, as opposed to cognitive modelers, is declining rapidly. Compartmental modelling enables computational neuroscientists to mimic activity in and interactions between patches of neuronal membrane. This allows modelers to control and manipulate a variety of subcellular factors that determine action potentials per time unit (including the topology of membrane structure in individual neurons, variations in ion channels across membrane patches, and field properties of post-synaptic potentials depending on the location of the synapse on the dendrite or soma). Modelers can "custom-build" the neurons in their target circuitry without sacrificing the ability to study circuit properties of networks. For these reasons, few serious computational neuroscientists continue to work at a level that treats neurons as unstructured computational devices. But the above interpretative points still stand. With compartmental modelling, not only are simulated neural networks interpretable as vector-to-vector transformers. The neurons composing them are, too.

The philosophy of science and scientific epistemology are not the only areas where philosophers have lately urged the relevance of neuroscientific discoveries. Kathleen Akins argues that a "traditional" view of the senses underlies the variety of sophisticated "naturalistic" programs about intentionality. Current neuroscientific understanding of the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are "veridical" in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure of the relevant relations between the external properties the receptors are sensitive to is preserved in the structure of the relations between the resulting sensory states. And (3) the sensory system reconstructs faithfully, without fictive additions or embellishments, the external events. Using recent neurobiological discoveries about response properties of thermal receptors in the skin as an illustration, Akins shows that sensory systems are "narcissistic" rather than "veridical." All three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, for example, our "philosophy of perception" or of "perceptual intentionality" will no longer focus on the search for correlations between states of sensory systems and "veridically detected" external properties. This traditional philosophical (and scientific) project rests upon a mistaken "veridical" view of the senses.
Neuroscientific knowledge of sensory receptor activity also shows that sensory experience does not serve the naturalist well as a "simple paradigm case" of an intentional relation between representation and world. Once again, available scientific detail shows the naivety of some traditional philosophical projects.

Focussing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urges a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers' favourite cases for analysis and theorizing about conscious experience generally. Nevertheless, every position about pain experiences has been defended recently: eliminativist views, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experience is the place to start an analysis or theory of consciousness? Hardcastle urges two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on the outcome of a single component of a multi-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories. But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposes a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argues that this dual system is consistent with recent neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors). Hardcastle concludes from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analysing the general type.

Developing and defending theories of content is a central topic in current philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. We'll here describe a few contributions neurophilosophers have made to this literature.

When one perceives or remembers that he is out of coffee, his brain state possesses intentionality or "aboutness." The percept or memory is about one's being out of coffee, and it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something: to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the Functional Role approach and the Informational approach.

The core claim of functional role semantics holds that a representation has its content in virtue of relations it bears to other representations. Its paradigm application is to concepts of truth-functional logic, like the conjunctive ‘and’ or disjunctive ‘or.’ A physical event instantiates the ‘and’ function just in case it maps two true inputs onto a single true output. Thus an expression bears the relations to others that give it the semantic content of ‘and.’ Proponents of functional role semantics propose similar analyses for the content of all representations (Block, 1986). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics assigns content to a state depending upon the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information. Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is, by definition, always veridical while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of ‘function.’ A brain state represents X by virtue of having the function of carrying information about being caused by X (Dretske, 1988). These two approaches do not exhaust the popular options for a psychosemantics, but are the ones to which neurophilosophers have contributed.
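The ‘and’ example can be made concrete with a toy sketch (ours, not from the text): a state instantiates the conjunction function just in case its input-output profile maps the pair (true, true) to true and every other pair to false; on functional role semantics, bearing that relational profile is what gives the state the content ‘and.’

```python
# A physical event instantiates the 'and' function just in case it maps
# two true inputs onto a true output and everything else onto false.
def candidate_state(p, q):
    return p and q

# The state's full functional role: its mapping over all truth-value pairs.
role = {(p, q): candidate_state(p, q) for p in (True, False) for q in (True, False)}

# Only the (True, True) input pair yields True; matching that relational
# profile is, on functional role semantics, what fixes the content 'and'.
instantiates_and = all(
    role[(p, q)] == (p and q) for p in (True, False) for q in (True, False)
)
```

Any physical system realizing this same input-output mapping, whatever its material makeup, would count as having the content ‘and’ on this view.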

Paul Churchland's allegiance to functional role semantics goes back to his earliest views about the semantics of terms in a language. In his (1979) book, he insists that the semantic identity (content) of a term derives from its place in the network of sentences of the entire language. The functional economies envisioned by early functional role semanticists were networks with nodes corresponding to the objects and properties denoted by expressions in a language. Thus one node, appropriately connected, might represent birds, another feathers, and another beaks. Activation of one of these would tend to spread to the others. As ‘connectionist’ network modelling developed, alternatives arose to this one-representation-per-node ‘localist’ approach. By the time Churchland provided a neuroscientific elaboration of functional role semantics for cognitive representations generally, he too had abandoned the ‘localist’ interpretation. Instead, he offered a ‘state-space semantics’.

We saw in the section just above how (vector) state spaces provide a natural interpretation for activity patterns in neural networks (biological and artificial). A state-space semantics for cognitive representations is a species of functional role semantics because the individuation of a particular state depends upon the relations obtaining between it and other states. A representation is a point in an appropriate state space, and points (or subvolumes) in a space are individuated by their relations to other points (locations, geometrical proximity). Churchland illustrates a state-space semantics for neural states by appealing to sensory systems. One popular theory in sensory neuroscience of how the brain codes for sensory qualities (like colour) is the opponent process account. Churchland describes a three-dimensional activation vector state-space in which every colour perceivable by humans is represented as a point (or subvolume). Each dimension corresponds to activity rates in one of three classes of photoreceptors present in the human retina and their efferent paths: the red-green opponent pathway, the yellow-blue opponent pathway, and the black-white (contrast) opponent pathway. Photons striking the retina are transduced by the receptors, producing an activity rate in each of the segregated pathways. Each perceivable colour is thus characterized by a triplet of activation frequency rates. Each dimension in that three-dimensional space will represent average frequency of action potentials in the axons of one class of ganglion cells projecting out of the retina. Taken together, the colours perceivable by humans will form a region of that space. For example, an orange stimulus produces a relatively low level of activity in both the red-green and yellow-blue opponent pathways (x-axis and y-axis, respectively), and middle-range activity in the black-white (contrast) opponent pathway (z-axis).
Pink stimuli, on the other hand, produce low activity in the red-green opponent pathway, middle-range activity in the yellow-blue opponent pathway, and high activity in the black-white (contrast) opponent pathway. The location of each colour in the space generates a ‘colour solid.’ Location on the solid and geometrical proximity between regions reflect structural similarities between the perceived colours. Human gustatory representations are points in a four-dimensional state space, with each dimension coding for activity rates generated by gustatory stimuli in each type of taste receptor (sweet, salty, sour, bitter) and their segregated efferent pathways. When implemented in a neural network with structural and hence computational resources as vast as the human brain, the state-space approach to psychosemantics generates a theory of content for a huge number of cognitive states.
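Churchland's proposal can be sketched concretely. In the toy model below, each colour is a point in the three-dimensional opponent-activity space and perceived similarity is read off as geometrical proximity between points. The particular coordinates and the 0-100 activity scale are illustrative assumptions, not measured values.

```python
import math

# A minimal sketch of a three-dimensional colour state space. Each colour
# is a point: (red-green, yellow-blue, black-white) opponent activity on
# an arbitrary 0-100 scale. The coordinates are invented, chosen only to
# match the qualitative description in the text.
colour_space = {
    "orange": (20.0, 15.0, 50.0),   # low R-G, low Y-B, mid contrast
    "pink":   (15.0, 50.0, 85.0),   # low R-G, mid Y-B, high contrast
    "red":    (90.0, 40.0, 50.0),
}

def proximity(a, b):
    """Euclidean distance: geometrical proximity stands in for
    perceived similarity between colours."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# On a state-space semantics, orange counts as more similar to pink than
# to red just in case its point lies closer to pink's point in the space.
d_pink = proximity(colour_space["orange"], colour_space["pink"])
d_red = proximity(colour_space["orange"], colour_space["red"])
print(d_pink < d_red)  # prints True with these illustrative coordinates
```

Nothing hangs on the specific numbers; the point is that similarity relations fall out of the geometry once the dimensions are interpreted, which is exactly where Fodor and LePore press their objection below.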

Jerry Fodor and Ernest LePore raise an important challenge to Churchland's psychosemantics. Location in a state space alone seems insufficient to fix a state's representational content. Churchland never explains why a point in a three-dimensional state space represents a colour, as opposed to any other quality, object, or event that varies along three dimensions. Churchland's account achieves its explanatory power by the interpretation imposed on the dimensions. Fodor and LePore allege that Churchland never specifies how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the ‘external’ inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colours are that opponent processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to ‘external’ stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consonant with other neurobiological details?

The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that are (i) maximally responsive to a particular type of stimulus, and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus types for visual feature detectors include high-contrast edges, motion direction, and colours. A favourite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that these cells' activity functioned to detect flies rested upon knowledge of the frog's diet. Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists have recently discovered a host of neurons that are maximally responsive to a variety of stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel's (1962) Nobel Prize-winning work establishing the receptive fields of neurons in striate cortex is often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) have challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporates many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection.

Kathleen Akins (1996) offers a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section how Akins argues that the physiology of thermoreceptors violates three necessary conditions on ‘veridical’ representation. From this fact she draws doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological refinements. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure into the frog's feature-detection repertoire. Akins' critique casts doubt on whether details of sensory transduction will scale up to an adequately unified psychosemantics. It also raises new questions for human intentionality. How do we get from activity patterns in "narcissistic" sensory receptors, keyed not to "objective" environmental features but only to the effects of the stimuli on the patch of tissue innervated, to the human ontology replete with enduring objects bearing stable configurations of properties and relations, types and their tokens (as the "fly-thought" example above reveals), and the rest? And how did the development of so stable and rich an ontology confer survival advantages on human ancestors?

Consciousness has reemerged as a topic in philosophy of mind and the cognitive and brain sciences over the past three decades. Instead of ignoring it, many physicalists now seek to explain it (Dennett, 1991). Here we focus exclusively on the ways neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. Thomas Nagel argues that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invites us to ponder ‘what it is like to be a bat’ and urges the intuition that no amount of physical-scientific knowledge (including neuroscientific) supplies a complete answer. Nagel's intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993a) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel's question. She argues that many of the questions about bat subjectivity that we still consider open hinge on questions that remain unanswered about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.

More recently philosopher David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an ‘explanatory gap’ between the brain process and the properties of conscious experience. This is because no brain-process theory can answer the "hard" question: why should that particular brain process give rise to conscious experience? We can always imagine ("conceive of") a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that the hard question remains unanswered shows, Chalmers contends, that we will probably never get a complete explanation of consciousness at the level of neural mechanisms. Paul and Patricia Churchland have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience - and a literature is beginning to emerge (e.g., Gazzaniga, 1995) - the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. These are not just bare assertions. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly "diffusely projecting" nuclei like the intralaminar nuclei) and the cortex. Churchland argues that this thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid eye movement) sleep, and for other "core" features of conscious experience.
In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can't "imagine" or "conceive of" this activity occurring without these core features of conscious experience. (Other than just mouthing the words, "I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming . . . ")

A second focus of sceptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colours of visual sensations are a philosopher's favourite example. One famous puzzle about colour qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the colour that fire engines and tomatoes appear to have to one subject is the colour that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically informed philosophy has addressed this question. A related area where neurophilosophical considerations have emerged concerns the metaphysics of colours themselves (rather than colour experiences). A longstanding philosophical dispute is whether colours are objective properties existing external to perceivers or are rather identifiable with, or dependent upon, minds or nervous systems. Some recent work on this problem begins with characteristics of colour experiences: for example, that colour similarity judgments produce colour orderings that align on a circle. With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colours with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colours with activity in opponent processing neurons does. Such a tidbit is not decisive for the colour objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind.
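The structural point about the hue circle can be illustrated with a toy calculation. Below, hues are placed at conventional angles on the colour circle and mapped onto a two-dimensional (red-green, yellow-blue) opponent plane; the angles and wavelengths are rough illustrative figures, not psychophysical data. In the opponent plane the circle closes, so violet comes out as near to red as yellow is, whereas on the wavelength line red and violet sit at opposite ends.

```python
import math

# Illustrative hue angles on the colour circle (degrees) and rough
# dominant wavelengths (nm); invented round numbers used only to make
# the structural contrast vivid.
hues = {"red": 0, "yellow": 60, "green": 120, "blue": 240, "violet": 300}
wavelengths = {"red": 700, "yellow": 580, "green": 530,
               "blue": 470, "violet": 420}

def opponent(hue_deg):
    """Map a hue angle to (red-green, yellow-blue) opponent activity."""
    r = math.radians(hue_deg)
    return (math.cos(r), math.sin(r))

def dist(a, b):
    """Distance between two hues in the opponent plane."""
    return math.dist(opponent(hues[a]), opponent(hues[b]))

# In the opponent plane, violet is exactly as close to red as yellow is
# (both sit 60 degrees away on the circle) ...
print(round(dist("red", "violet"), 6) == round(dist("red", "yellow"), 6))
# ... but on the wavelength line, red and violet are the farthest pair.
print(abs(wavelengths["red"] - wavelengths["violet"]))
```

The mapping to cosine/sine coordinates is of course a stand-in for measured opponent-cell activity; the sketch only shows why a circular similarity structure survives the one mapping and not the other.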

We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He compares, and notes tensions between, neurophysiological discoveries and common-sense intuitions about pain experience. He suspects that the incommensurability between scientific and common-sense views is due to incoherence in the latter. His attitude is wait-and-see. But, foreshadowing the Churchlands' reply to Chalmers, Dennett favours scientific investigations over conceivability-based philosophical arguments.

Neurological deficits have attracted philosophical interest. For thirty years philosophers have found implications for the unity of the self in experiments with commissurotomy patients. In carefully controlled experiments, commissurotomy patients display two dissociable seats of consciousness. Patricia Churchland scouts philosophical implications of a variety of neurological deficits. One deficit is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1988) worries that many of these conflate distinct notions of consciousness. He labels these notions ‘phenomenal consciousness’ (‘P-consciousness’) and ‘access consciousness’ (‘A-consciousness’). The former is the ‘what it is like’-ness of experience. The latter is the availability of representational content to self-initiated action and speech. Block argues that P-consciousness is not always representational, whereas A-consciousness is. Dennett and Michael Tye are sceptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block's distinction.

Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of colour and pain have begun to attract neurophilosophical attention, as has self-consciousness. One of the first issues to arise in the ‘philosophy of neuroscience’ (before it was a recognized area) was the localization of cognitive functions to specific neural regions. Although the ‘localization’ approach had dubious origins in the phrenology of Gall and Spurzheim, and was challenged severely by Flourens throughout the early nineteenth century, it reemerged in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (where possible) of linguistic deficits in their aphasic patients, followed by brain autopsies postmortem. Broca's initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech production deficits. Although the anatomical coordinates Broca postulated for the ‘speech production centre’ do not correlate exactly with the damage that produces production deficits, both this area of frontal cortex and speech production deficits still bear his name (‘Broca's area’ and ‘Broca's aphasia’). Less than two decades later Carl Wernicke published evidence for a second language centre. This area is anatomically distinct from Broca's area, and damage to it produced a very different set of aphasic symptoms. The cortical area that still bears his name (‘Wernicke's area’) is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name (‘Wernicke's aphasia’) involves deficits in language comprehension.
Wernicke's method, like Broca's, was based on lesion studies: a careful evaluation of the behavioural deficits followed by postmortem examination to find the sites of tissue damage and atrophy. Lesion studies suggesting more precise localization of specific linguistic functions remain a cornerstone of aphasia research to this day.

Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioural measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and remeasure performance on the same behavioural tests. But since we lack an animal model for (human) language production and comprehension, this additional evidence isn't available to the neurologist or neurolinguist. This fact makes the study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Philosopher Barbara Von Eckardt (1978) attempts to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins' early analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities c1, c2, . . . , cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (constituent capacities c1, c2, . . . , cn). A functional-localization hypothesis has the form: brain structure S in an organism (type) O has constituent capacity ci, where ci is a function of some part of O.
For example: Broca's area (S) in humans (O) formulates motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.

Armed with these characterizations, Von Eckardt argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behaviour the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus the evidence about the site of brain damage. The structurally adequate functional analysis of the capacity connects the pathological behaviour to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behaviour P (e.g., the speech deficits characteristic of Broca's aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally adequate functional analysis minus the operation of the component performing ci (Broca's area) must result in pathological behaviour P. Fourth, there must not be a better available explanation for why the patient does P. Argument to a functional deficit hypothesis on the basis of pathological behaviour is thus an instance of argument to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.

Von Eckardt applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia. Her philosophical explication of this important neurological method reveals that most challenges to localization arguments dispute either the localization of a particular type of functional capacity or the generalization from localization of function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic. But they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower-level sciences investigating the underlying physical mechanisms. Arguments to the best available explanation are often hampered by a lack of theoretical imagination: the available explanations are often severely limited. We must seek theoretical inspiration from any level of theory and explanation. Hence making explicit the ‘logic’ of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt anticipated what came to be heralded as the ‘co-evolutionary research methodology,’ which remains a centerpiece of neurophilosophy to the present day.

Over the last two decades, evidence for localization of cognitive function has come increasingly from a new source: the development and refinement of neuroimaging techniques. The form of localization-of-function argument appears not to have changed from that employing lesion studies (as analysed by Von Eckardt). Instead, these imaging technologies resolve some of the methodological problems that plague lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques are prominent: positron emission tomography, or PET, and functional magnetic resonance imaging, or fMRI. Although these measure different biological markers of functional activity, both now have a resolution down to around 1mm. As these techniques increase the spatial and temporal resolution of functional markers and continue to be used with sophisticated behavioural methodologies, the possibility of localizing specific psychological functions to increasingly specific neural regions continues to grow.

What we now know about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. The same evaluation holds for all levels of explanation and theory about the mind/brain: maps, networks, systems, and behaviour. This is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and the theoretical frameworks within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture gets neglected: the relationships between the levels, the ‘glue’ that binds knowledge of neuron activity to subcellular and molecular mechanisms, network activity patterns to the activity of and connectivity between single neurons, and behaviour to network activity. This problem is especially glaring when we focus on the relationship between ‘cognitivist’ psychological theories, which postulate information-bearing representations and processes operating over their contents, and the activity patterns in networks of neurons. Co-evolution between explanatory levels still seems more like a distant dream than an operative methodology.

It is here that some neuroscientists appeal to ‘computational’ methods. If we examine the way that computational models function in more developed sciences (like physics), we find the resources of dynamical systems constantly employed. Global effects (such as large-scale meteorological patterns) are explained in terms of the interaction of ‘local’ lower-level physical phenomena, but only by dynamical, nonlinear, and often chaotic sequences and combinations. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher levels like experimental psychology, ‘program-writing’ and ‘connectionist’ artificial intelligence, and philosophy of science.

However, the use of computational methods in neuroscience is not new. Hodgkin, Huxley, and Katz incorporated values of voltage-dependent potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modelled conductance-versus-time data that reproduced the S-shaped (sigmoidal) function suggested by their experimental data. Using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This theory provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and has been incorporated into the GENESIS software for programming neurally realistic networks. More recently, David Sparks and his colleagues have shown that a vector-averaging model of activity in neurons of the superior colliculi correctly predicts experimental results about the amplitude and direction of saccadic eye movements. Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues have predicted the direction and amplitude of hand and arm movements based on the averaged activity of 224 cells in motor cortices. Their predictions have been borne out under a variety of experimental tests. We mention these particular studies only because we are familiar with them. We could easily multiply examples of the fruitful interaction of computational and experimental methods in neuroscience a hundredfold. Many of these extend back before ‘computational neuroscience’ was a recognized research endeavour.
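The vector-averaging idea behind the Sparks and Georgopoulos results can be sketched in a few lines. Each model cell has a preferred movement direction, and the predicted movement direction is the firing-rate-weighted vector average of those preferred directions. The tuning directions and firing rates below are invented for illustration; they are not recorded data from either study.

```python
import math

# A toy population of directionally tuned cells:
# (preferred direction in degrees, firing rate in Hz). Invented values.
cells = [
    (0, 55.0),
    (45, 40.0),
    (90, 12.0),
    (180, 5.0),
    (270, 8.0),
]

def population_vector(cells):
    """Firing-rate-weighted vector average of preferred directions.

    Each cell contributes a unit vector along its preferred direction,
    scaled by its firing rate; the predicted movement direction is the
    angle of the summed vector."""
    x = sum(rate * math.cos(math.radians(d)) for d, rate in cells)
    y = sum(rate * math.sin(math.radians(d)) for d, rate in cells)
    return math.degrees(math.atan2(y, x)) % 360

predicted = population_vector(cells)
# With most activity in the 0- and 45-degree cells, the predicted
# direction falls between them, pulled toward the more active cell.
print(round(predicted, 1))
```

The real models are richer (the Georgopoulos work, for instance, involves hundreds of recorded cells and amplitude as well as direction), but the core computation is this weighted average.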

We've already seen one example, the vector transformation account of neural representation and computation, under active development in cognitive neuroscience. Other approaches using ‘cognitivist’ resources are also being pursued. Many of these projects draw upon ‘cognitivist’ characterizations of the phenomena to be explained. Many exploit ‘cognitivist’ experimental techniques and methodologies. Some even attempt to derive ‘cognitivist’ explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn puts it, cognitive neuroscientists employ the ‘information processing’ view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavour calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines to interested nonspecialists: not just people willing to confer with those working at related levels, but researchers trained in the methods and factual details of a variety of levels. This is a daunting requirement, but it does offer some hope for philosophers wishing to contribute to future neuroscience. Thinkers trained in both the ‘synoptic vision’ afforded by philosophy and the factual and experimental basis of genuine graduate-level science would be ideally equipped for this task. Recognition of this potential niche has been slow among graduate programs in philosophy, but there is some hope that a few programs are taking steps to fill it.

In the final analysis there will be philosophers unprepared to accept the principle that, if a given cognitive capacity is psychologically real, then there must be an explanation of how it is possible for an individual in the course of human development to acquire that capacity, and hence unprepared to grant that principle a role in philosophical accounts of concepts and conceptual abilities. The most obvious basis for such a view would be a Fregean distrust of “psychology” that leads to a rigid division of labour between philosophy and psychology. The operative thought is that the task of a philosophical theory of concepts is to explain what a given concept is or what a given conceptual ability consists in. This, it is frequently maintained, is something that can be done in complete independence of explaining how such a concept or ability might be acquired. The underlying distinction is one between philosophical questions centring around concept possession and psychological questions centring around concept acquisition. However strictly one adheres to this distinction, though, it provides no support for rejecting the acquisition principle. The neo-Fregean distinction is directed against the view that facts about how concepts are acquired have a role to play in explaining and individuating concepts. But a supporter of the acquisition principle need not dispute this. All that the supporter is committed to is the principle that no satisfactory account of what a concept is should make it impossible to explain how that concept can be acquired. This principle has nothing to say about the further question of whether psychological explanation has a role to play in a constitutive explanation of the concept, and hence is not in conflict with the neo-Fregean distinction.

The modern world-view assumed that communion with the essences of physical reality and associated theories was possible, but it made no other provisions for the knowing mind. Modern theory thus contributes to a view of the universe as an unbroken, undissectible, and undivided dynamic whole: a complicated tissue of events, in which connections of different kinds alternate, overlay, or combine and in such a way determine the texture of the whole. Errol Harris noted, in thinking about the special character of wholeness in modern epistemology, that a unity without internal content is a blank or empty set and is not recognized as a whole. A collection of merely externally related parts does not constitute a whole, in that the parts will not be “mutually adaptive and complementary to one another.”

Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all the parts that constitute the whole, even as the whole is exemplified in its parts. This principle of order “is nothing real in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.”

In a genuine whole, the relationships between the constituent parts must be “internal or immanent” in the parts, as opposed to a more spurious whole in which the parts appear to disclose wholeness due to relationships that are external to them. The collections of parts that would allegedly constitute the whole in both subjective theory and physical reality are each examples of the spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all, so that they interlock and become mutually binding. This is also consistent with the manner in which we have begun to understand the relation between parts and whole in modern biology.

Much of the ambiguity in attempts to explain the character of wholes in both physical reality and biology derives from the assumption that order exists between or outside parts. But the complementary relationship between difference and sameness in any physical event is never external to that event: the connections are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic whole is not surprising. The relationship between part, as quantum events apparent in observation or measurement, and the undissectible whole, as revealed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physical reality.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts, one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.

But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.

A full account of the structure of consciousness will need to illustrate the higher, conceptual forms of consciousness, to which little attention has so far been given, and to explain how they might emerge. One promising point of departure is the thought that an explanation of everything distinctive about consciousness will emerge out of an account of what it is for a subject to be capable of thinking about himself. But there are no facts about linguistic mastery alone that will determine or explain what might be termed the cognitive dynamics of individual thought. What a theory of consciousness needs, it seems, is to chart the characteristic features individuating the various distinct conceptual forms of consciousness, in a way that will provide a taxonomy of consciousness and show how those forms are determined at the level of content. The most promising suggestion is that these higher forms of consciousness emerge from a rich foundation of non-conceptual representations of thought, and the conviction remains that such forms of conscious thought hold the key, not just to an eventual account of conceptual mastery, but to a proper understanding of the complexity of self-consciousness and of consciousness overall.

Analytic and linguistic philosophy is a 20th-century movement that has dominated both Britain and the United States since World War II, and that aims to clarify language and analyse the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and 'Oxford philosophy'. The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.

A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.

By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is, it is argued, the key to resolving many philosophical puzzles.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill, and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as "time is unreal," analyses that then aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical views based on this logical analysis of language, and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements 'John is good' and 'John is tall' have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property 'goodness' as if it were a characteristic of John in the same way that the property 'tallness' is a characteristic of John. Such failure results in philosophical confusion.
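Russell's point can be put in a brief formal sketch (the predicate symbols here are illustrative, not Russell's own notation): the two sentences share the surface form 'S is P', but only one of them yields a simple subject-predicate logical form.

```latex
\begin{align*}
&\text{`John is tall'} &&\Longrightarrow\; \mathrm{Tall}(j) \quad\text{(a simple monadic predication)}\\
&\text{`John is good'} &&\not\Longrightarrow\; \mathrm{Good}(j)\ \text{on the same model;}\\
& &&\text{on one common analysis `good' is attributive,}\\
& &&\text{e.g.\ } \mathrm{GoodAs}(j, K)\ \text{for some kind } K.
\end{align*}
```

Reading both sentences off their shared grammatical form, as if 'goodness' and 'tallness' were properties of the same logical kind, is precisely the confusion Russell warned against.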

Russell's work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, “Tractatus Logico-Philosophicus” (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that 'all philosophy is a critique of language' and that 'philosophy aims at the logical clarification of thoughts'. The results of Wittgenstein's analysis resembled Russell's logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts - the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition "two plus two equals four." The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer's “Language, Truth and Logic” in 1936.

The positivists' verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published “Philosophical Investigations” (1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein's influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate 'systematically misleading expressions' in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.

Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analysing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyse ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language, and to how language is used in everyday discourse, can aid in resolving philosophical problems. A logical calculus, also called a formal language or a logical system, is a system in which explicit rules are provided for determining (1) which expressions belong to the system, (2) which sequences of expressions count as well formed (well-formed formulae), and (3) which sequences of formulae count as proofs. A system may include axioms, from which proofs begin; the best-known examples are the propositional calculus and the predicate calculus.
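The three clauses above can be illustrated with a minimal propositional calculus. The two axiom schemes used here are one standard (Łukasiewicz-style) choice among several; a proof is a sequence of well-formed formulae, each an instance of an axiom or obtained from earlier lines by modus ponens:

```latex
\begin{align*}
&\textbf{A1:}\;\; \varphi \to (\psi \to \varphi)\\
&\textbf{A2:}\;\; (\varphi \to (\psi \to \chi)) \to ((\varphi \to \psi) \to (\varphi \to \chi))\\
&\textbf{MP:}\;\; \text{from } \varphi \text{ and } \varphi \to \psi, \text{ infer } \psi\\[6pt]
&\text{Proof of } p \to p:\\
&1.\;\; (p \to ((p \to p) \to p)) \to ((p \to (p \to p)) \to (p \to p)) && \text{A2}\\
&2.\;\; p \to ((p \to p) \to p) && \text{A1}\\
&3.\;\; (p \to (p \to p)) \to (p \to p) && \text{MP } 1,\,2\\
&4.\;\; p \to (p \to p) && \text{A1}\\
&5.\;\; p \to p && \text{MP } 3,\,4
\end{align*}
```

Each line is either an axiom instance (clauses 1 and 2 of the definition) or follows by the rule (clause 3), so the sequence as a whole counts as a proof in the system.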

The most immediate issues surrounding certainty are especially connected with those concerning 'scepticism'. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearances and reality, and it frequently cites the conflicting judgements that our methods deliver, so that questions of truth seem to become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus. The scepticism of Pyrrho and the New Academy was thus a system of argument opposing dogmatism, and particularly the philosophical system-building of the Stoics.

In the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoics' conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptics counselled epochê, or the suspension of belief, and then went on to celebrate a way of life whose object was ataraxia, the tranquillity resulting from suspension of belief.

Mitigated scepticism, by contrast, accepts everyday or commonsense belief, not as the deliverance of reason but as due more to custom and habit; it is, however, sceptical about the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase 'Cartesian scepticism' is sometimes used, Descartes himself was not a sceptic; in the 'method of doubt' he uses a sceptical scenario in order to begin the search for a certain mark of knowledge. Descartes trusted in categories of 'clear and distinct' ideas, not far removed from the phantasiá kataleptikê of the Stoics.

Many sceptics have traditionally held that knowledge requires certainty and, of course, they claim that certain knowledge is not possible. This rests in part on the principle that every effect is a consequence of an antecedent cause or causes; for causality to hold it is not necessary for an effect to be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. Nevertheless, in order to avoid scepticism, others have generally held that knowledge does not require certainty. Except for alleged cases of things that are evident simply by being true, it has often been thought that anything known must satisfy certain criteria of warrant as well as being true; whether by 'deduction' or 'induction', there will be criteria specifying when acceptance of a belief is warranted, to some degree, beyond the alleged cases of self-evident truths.

Besides, there is another view - the absolutely global view that we do not have any knowledge whatsoever. It is doubtful, however, that any philosopher seriously entertains absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to anything non-evident, had no such hesitancy about assenting to 'the evident', the non-evident being any belief that requires evidence for its warrant.

René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas. What he challenged was whether they 'corresponded' to anything beyond ideas.

Even so, the Pyrrhonian and Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic's mill. The Pyrrhonist will suggest that no non-evident, empirical belief is sufficiently warranted, whereas a Cartesian sceptic will agree that no empirical belief about anything other than one's own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. Thus, the essential difference between the two views concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.

William James (1842-1910), although with characteristic generosity he exaggerated his debt to Charles S. Peirce (1839-1914), charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's personal consciousness.

From his earliest writings, James understood cognitive processes in teleological terms. 'Thought', he held, assists us in the satisfaction of our interests. His 'will to believe' doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.

Such an approach, however, sets James's theory of meaning apart from verificationism and its dismissal of metaphysics. Unlike the verificationist, who takes cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and practical responses. Moreover, his pragmatic method was a standard for assessing the value of metaphysical claims, not a way of dismissing them as meaningless. It should also be noted that in his more circumspect moments James did not hold that even his broad set of consequences was exhaustive of a term's meaning. 'Theism', for example, he took to have antecedent, definitional meaning, in addition to its importantly pragmatic meaning.

James's theory of truth reflects his teleological conception of cognition: a true belief is one which is compatible with our existing system of beliefs and leads us to satisfactory interaction with the world.

However, Peirce's famous pragmatist principle is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid: if we believe this, we expect that if we were to dip litmus paper into it, the paper would turn red; we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a conceptual representation provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification by means of the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
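The flask example can be sketched as a set of subjunctive conditionals (the predicate names are illustrative placeholders, not Peirce's own notation):

```latex
\begin{align*}
\mathrm{Acid}(a) \;\Longrightarrow\;& \big(\mathrm{DipLitmus}(a) \to \mathrm{TurnsRed}(a)\big)\\
\;\wedge\;& \big(\mathrm{AddBase}(a) \to \mathrm{Neutralized}(a)\big)\\
\;\wedge\;& \;\ldots
\end{align*}
```

On the pragmatic principle, the complete, orderly list of such conditionals just is the clarification of the concept 'acid'.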

What is most important is the framing role of the pragmatic principle in Peirce's account of reality: when we take something to be real, we think it is 'fated to be agreed upon by all who investigate' the matter to which it stands. In other words, if I believe that it is really the case that 'p', then I expect that if anyone were to inquire into whether 'p', they would arrive at the belief that 'p'. It is not part of the theory that the experimental consequences of our actions should be specified in a privileged empiricist vocabulary - Peirce insisted that perceptual judgements are themselves laden with theory. Nor is it his view that the conditionals that clarify a concept are all analytic. In addition, in later writings he argued that the pragmatic principle could only be made plausible to someone who accepted its metaphysical realism: it requires that 'would-bes' are objective and, of course, real.

If realism itself can be given a fairly quick clarification, it is more difficult to chart the various forms of opposition to it, for they seem legion. Some opponents deny that the entities posited by the relevant discourse exist, or at least that they exist independently of the mind. The standard example is 'idealism', the view that reality is somehow mind-correlative or mind-coordinated: that the supposedly real objects of the 'external world' are nothing independently of knowing minds, but exist only as in some way correlative to mental operations. The doctrine of 'idealism' centres on the conception that reality as we understand it is meaningful only as reflecting the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the 'real' but even to the resulting character we attribute to it.

The term 'real' is most straightforwardly used when qualifying another term: a real 'x' may be contrasted with a fake 'x', a failed 'x', a near 'x', and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some doctrine or theory. The central error in thinking of reality as the totality of existence is to think of the 'unreal' as a separate domain of things, perhaps unjustly deprived of the benefits of existence.

A tempting but confused thought treats 'Nothing', the non-existence of all things, as a noun naming a special something, instead of as a quantifier. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as 'Nothing is all around us' talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate 'is all around us' has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between 'existentialism' and 'analytic philosophy' on this point is that whereas the former is afraid of Nothing, the latter thinks that there is nothing to be afraid of.
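The quantifier reading can be made explicit. Treated as a quantifier rather than a name, 'Nothing is all around us' denies an existential claim; the confused reading instead posits an object for the word to name (the predicate symbol is illustrative):

```latex
\begin{align*}
&\text{Correct (quantifier) reading:} && \neg\exists x\,\mathrm{AllAround}(x)\\
&\text{Confused (name) reading:} && \mathrm{AllAround}(n),\ \text{where } n \text{ purports to name `Nothing'}
\end{align*}
```

On the first reading the sentence attributes nothing to any object; it simply says the predicate has no instances.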

A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other substantive problems arise over conceptualizing empty space and time.

Realism turns on the standard opposition between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (b. 1925) and borrowed from the 'intuitionistic' critique of classical mathematics, is that the unrestricted use of the 'principle of bivalence' is the trademark of 'realism'. However, this has to overcome counter-examples both ways: although Aquinas was a moral 'realist', he held that moral reality was not sufficiently structured to make true or false every moral claim, while Kant believed that he could use the law of bivalence happily in mathematics precisely because mathematics deals only with our own constructions. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things - surrounding objects really exist and are independent of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who is impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
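Dummett's suggestion can be stated compactly. Bivalence says that every statement is determinately true or false; classical logic asserts the corresponding law of excluded middle without restriction, while intuitionistic logic refuses it in general:

```latex
\begin{align*}
&\text{Bivalence:} && \text{every statement } p \text{ is determinately true or false}\\
&\text{Classical logic:} && \vdash\, p \lor \neg p \quad\text{(excluded middle, unrestricted)}\\
&\text{Intuitionistic logic:} && \nvdash\, p \lor \neg p \quad\text{(not valid in general)}
\end{align*}
```

On this diagnosis, the realist about a discourse is the one prepared to apply bivalence to it without restriction, whether or not we could ever decide which truth value a given statement has.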

The modern treatment of existence in the theory of 'quantification' is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, a property of properties. It is fitting to say that in this it is like number: when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like 'This exists', where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. 'This exists' is unlike 'Tame tigers exist', where a property is said to have an instance, for the word 'this' does not pick out a property but is a device of individual reference.
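Frege's treatment can be sketched in modern notation (the predicate symbol is illustrative): an existence claim attributes instantiation to a concept, and his dictum equates affirming existence with denying that the number of instances is nought.

```latex
\begin{align*}
&\text{`Tame tigers exist'} \;\Longrightarrow\; \exists x\,\mathrm{TameTiger}(x)\\[4pt]
&\text{Frege's dictum:}\quad \exists x\,F(x) \;\leftrightarrow\; \neg\big(\#x\,F(x) = 0\big)
\end{align*}
```

Note that nothing here applies 'exists' to an individual, which is why 'This exists' resists the analysis: 'this' contributes an object, not a concept for the quantifier to operate on.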

Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.

Everything real, and nothing unreal, belongs to the domain of Being; but beyond that there is little that can be said with security from the philosopher's study, and it is not apparent that Being can be a subject of study in its own right. Nevertheless, the concept has had a central place in philosophy from Parmenides to Heidegger. The essential question, 'why is there something and not nothing?', prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.

Ever since Plato, this ground has been conceived as a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or with God, but whose relation to the everyday world remains obscure. The celebrated ontological argument for the existence of God was first proposed by Anselm in his Proslogion. The argument defines God as 'something than which nothing greater can be conceived'. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we could conceive of something greater than that than which nothing greater can be conceived, which is a contradiction; therefore, God cannot exist only in the understanding, but exists in reality.

An influential argument (or family of arguments) for the existence of God takes as its premise that all natural things are dependent for their existence on something else. The totality of dependent beings must then itself depend upon a non-dependent, or necessarily existent, being, which is God. Like the argument from design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.

Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again; the 'God' who ends the regress must exist necessarily, and must not be an entity of which the same kinds of question can be raised. The other problem with the argument is that it does nothing to attribute concern and care to the deity, nor to connect the necessarily existent being it derives with human values and aspirations.

The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving, not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence: its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.

In the 20th century, modal versions of the ontological argument have been propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every 'possible world'. It then allows that it is at least possible that an unsurpassably great being exists, which means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in a world entails that it exists and is perfect in every world), so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from 'possibly necessarily p' we can derive 'necessarily p'. A symmetrical proof starting from the assumption that it is possible that such a being does not exist would derive that it is impossible that it exists.
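The modal step in this argument can be made explicit. In the system S5, where whatever is possibly necessary is necessary, the derivation runs roughly as follows (a sketch only, with p abbreviating 'an unsurpassably great being exists'):

```latex
% By definition, such a being exists necessarily if it exists at all:
p \rightarrow \Box p
% The contested concession: it is at least possible that p.
\Diamond p
% From the two lines above (possibility distributes over a theorem):
\Diamond \Box p
% S5 theorem: \Diamond\Box p \rightarrow \Box p.  Hence:
\Box p
% Symmetrically, from \Diamond\neg p together with p \rightarrow \Box p,
% S5 yields \Box\neg p: the being is impossible.
```

This is why the concession of mere possibility carries the whole weight of the proof, and why the symmetrical argument from possible non-existence is equally available.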

The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that, as a result of the omission, the same result occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine, not a murderer. Critics reply that omissions can be as deliberate and immoral as actions: if I am responsible for your food and fail to feed you, my omission is surely a killing. 'Doing nothing' can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which may be permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears the general moral weight placed upon it.

The doctrine of double effect is a principle attempting to define when an action that has both good and bad results is morally permissible. In one formulation, such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequences are not intended, (3) the good is not itself a result of the bad consequences, and (4) the two kinds of consequence are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing the nearby civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St. Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).

The form is therefore, in some sense, available to reactivate a new body. It is thus not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether and to flirt with the coherence theory of truth; it is now widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable 'myth of the given'.

The special way that we each have of knowing our own thoughts, intentions, and sensations has been challenged by the many philosophical behaviourist and functionalist tendencies that have found it important to deny that there is such a special way, arguing that I know of my own mind in much the same way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that the behaviour of reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology. The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegelian usage, however, it came to mean universal or world history. The Enlightenment confidence in science, reason, and understanding gave history a progressive moral thread, and under the influence of Johann Gottfried Herder (1744-1803), the German philosopher influential in spreading Romanticism, and of Immanuel Kant, this idea was taken further, so that the philosophy of history became the detecting of a grand system: the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is readily intelligible once the world of nature and the world of thought become identified.
The work of Herder, Kant, Fichte and Schelling is synthesized by Hegel: history has a plot, and it is the moral development of man, equated with freedom. This will be the development of thought, a logical development in which the various necessary moments in the life of the concept are successively achieved and improved upon. Hegel's method is at its most successful when the object is the history of ideas, where the evolution of thinking may be seen to march in step with logical oppositions and their resolutions as encountered by various systems of thought.

Within revolutionary communism, in Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel's progressive structure but laying the achievement of the goal of history in a future in which the political conditions for freedom come to exist, so that economic and political forces, rather than 'reason', are in the engine room. Although speculative philosophy of history of this kind continued to be written, attention increasingly turned to the nature of historical understanding itself, and in particular to a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic, and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, yet nonetheless in some way different from the enquiry of the scientist. Since the subject-matter is the past thought and actions of human beings, what is needed is the ability to re-live that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose "The Idea of History" (1946) contains an extensive defence of the Verstehen approach: understanding others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living their situation, and thereby understanding what they experienced and thought.
The immediate question of the form of historical explanation, and the fact that general laws have either no place or only a minor place in the human sciences, is also prominent in thought about the distinctiveness of historical understanding: we regain past actions not by subsuming them under laws, but by re-living the situation and thereby understanding what the agents experienced and thought.

The theory-theory is the view that everyday attributions of intention, belief, and meaning to other persons proceed via tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory could be couched, since the child learns simultaneously the minds of others and the meaning of terms in its native language.

On the rival view, our understanding of others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation 'in their moccasins', or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the 'Verstehen' tradition associated with Dilthey, Weber, and Collingwood.

The form is thus, in some sense, available to reactivate a new body; again, it is not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's abstractive account, once the descriptions of a supreme Being are in place — existence, necessity, fate, creation, sin, justice, mercy, redemption, God — there remains the problem of providing any reason for supposing that anything answering to this description exists. Nor do we enjoy any privileged self-understanding: we understand ourselves, just as we do everything else, through sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. In the theory of knowledge, Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is to be known: a human's corporeal nature therefore requires that knowledge start with sense perception. The same limitations do not apply further up the hierarchy of being, as with the celestial heavens and the angels.

In the domain of theology Aquinas deploys the distinction emphasized by Eriugena between the existence of God and His nature, and offers five proofs of God's existence: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, something that has a necessary existence; (4) the graduations of value in things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end to which all things are directed, and the existence of this end demands a being that ordained it. All the arguments are physico-theological; in laying out these proofs, Aquinas marks the boundary between reason and faith.

He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God's essence is identified with his existence, as pure actuality; God is simple, containing no potentiality. We cannot obtain knowledge of what God is (his quiddity) — a constraint perhaps doing the same work as the principle of charity, suggesting that we regulate our procedures of interpretation by maximizing the extent to which we see the subject as humanly reasonable, rather than the extent to which we see the subject as right about things — and must remain content with descriptions that apply to him partly by way of analogy: God reveals Himself, but not his essence. An immediate problem for ethics is posed by the English philosopher Philippa Foot in her 'The Problem of Abortion and the Doctrine of the Double Effect' (1967). A runaway trolley approaches a fork in the track. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and you, as a bystander, can intervene, altering the points so that it veers onto the other branch. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of the one person? After all, whom have you wronged if you leave it to go its own way? The situation is typical of others in which utilitarian reasoning seems to lead to one course of action, but a person's integrity or principles may oppose it.

Describing events that merely happen does not of itself involve the categories of deliberation, plan, thought, or design. It is when we conceive of events as actions that we may apply the categories of rationality and intention. We think of ourselves not only passively, but as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the 'will' and 'free will'. Other problems in the theory of action include drawing the distinction between an action and its consequences, and describing the structure involved when we do one thing 'by' doing another. Even the placing and dating of an action can be problematic: someone shoots someone on one day and in one place, and the victim then dies on another day and in another place. Where and when did the murderous act take place?

As for causation, it is not clear that only events are causally related. Kant cites the example of a cannonball at rest upon a cushion, causing the cushion to have the shape that it has, to suggest that states of affairs, or objects, or facts may also be causally related. The central problem is to understand the element of necessitation, or determinacy, of the future. Events were thought by Hume to be in themselves 'loose and separate': how then are we to conceive of their connection? The relationship seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, rather than any acquaintance with the connections determining those patterns. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the 'must' of causal necessitation. Particular puzzles about causation, quite apart from the general problem of forming any conception of what it is, include: How are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?

The problem of free will is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event 'C', there will be some antecedent state of nature 'N' and a law of nature 'L' such that, given 'L', 'N' will be followed by 'C'. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state 'N' and the laws. Since determinism is universal, these in turn are fixed, and so on backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
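The definition above can be stated formally. The following is a sketch using the letters the passage itself introduces:

```latex
% Determinism: for any event C there is some antecedent state of
% nature N and law of nature L such that N, together with L,
% guarantees C:
\forall C\; \exists N\, \exists L\;\bigl( (L \land N) \rightarrow C \bigr)
% Iterating backwards: the antecedent state N is itself an event,
% fixed by some earlier state N' and law L', and so on to states
% before the agent's birth -- the source of the worry about
% responsibility.
```

Stated this way, the threat to freedom is visible in the universal quantifier: no event, including a choosing, escapes the schema.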

The dilemma for determinism is often put as follows: if an action is the end of a causal chain that stretches back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.

The dilemma then adds that if an action is not the end of such a chain, it occurs at random, lacking any definite plan, purpose, or pattern: no antecedent events brought it about, and in that case nobody is responsible for its occurrence either. So, whether or not determinism is true, responsibility is shown to be illusory.

Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will bad.

A volition is a mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. The theory that there are such acts is problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now needs explanation. In Kantian terms, to act in accordance with the law of autonomy, or freedom, is to act in accordance with universal moral law and regardless of selfish advantage.

The categorical imperative of Kantian ethics contrasts with a hypothetical imperative, which embeds a command that is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction or advice lapses. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclinations. It could be expressed as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always marked by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
