May 26, 2010


The death-of-epistemology movement has many sources: in the pragmatists, particularly James and Dewey, and in the writings of Wittgenstein, Quine, Sellars and Austin. But the project of theoretical diagnosis must be distinguished from the ‘therapeutic’ approach to philosophical problems that some names on this list might suggest. The theoretical diagnostician does not claim that the problems he analyses are ‘pseudo-problems’ rooted in ‘conceptual confusion’: rather, he claims that, while genuine, they are wholly internal to a particular intellectual project whose generally unacknowledged theoretical commitments he aims to isolate and criticize.


Turning to details, the task of epistemology, as these radical critics conceive it, is to determine the nature, scope and limits, indeed the very possibility, of human knowledge. Since epistemology determines the extent to which knowledge is possible, it cannot itself take for granted the results of any particular form of empirical inquiry. Thus epistemology purports to be a non-empirical discipline, the function of which is to sit in judgement on all particular discursive practices with a view to determining their cognitive status. The epistemologist (or, in the era of epistemology-centred philosophy, we might as well say ‘the philosopher’) is someone professionally equipped to determine what forms of judgement are ‘scientific’, ‘rational’, ‘merely expressive’, and so forth. Epistemology is therefore fundamentally concerned with sceptical questions. Determining the scope and limits of human knowledge is a matter of showing where and when knowledge is possible. But there is a project called ‘showing that knowledge is possible’ only because there are powerful arguments for the view that knowledge is impossible. The scepticism in question is first and foremost radical scepticism: the thesis that, with respect to this or that area of putative knowledge, we are never so much as justified in believing one thing rather than another. The task of epistemology is thus to determine the extent to which it is possible to respond to the challenges posed by radical sceptical arguments, by determining where we can and cannot have justification for our beliefs. If it turns out that the prospects are more hopeful for some sorts of beliefs than for others, we will have uncovered a difference in epistemological status. The ‘scope and limits’ question and the problem of radical scepticism are two sides of one coin.

This emphasis on scepticism as the fundamental problem of epistemology may strike some philosophers as misguided. Much recent work on the concept of knowledge, particularly that inspired by Gettier’s demonstration of the insufficiency of the standard ‘justified true belief’ analysis, has been carried on independently of any immediate concern with scepticism. It must be admitted that philosophers who envisage the death of epistemology tend to assume a somewhat dismissive attitude to work of this kind. In part, this is because they tend to be sceptical of the search for precise necessary and sufficient conditions for the application of any concept. But the determining factor is their thought that only the centrality of the problem of radical scepticism can explain the importance that, at least in the modern period, epistemology has taken on for philosophy. Since radical scepticism concerns the very possibility of justification, for philosophers who put this problem first, questions about what special sorts of justification yield knowledge, or about whether knowledge might be explained in non-justificational terms, are of secondary importance. Whatever importance they have will derive, in the end, from their connections, if any, with sceptical problems.

In light of this, the fundamental question for death-of-epistemology theorists becomes: ‘What are the essential theoretical presuppositions of arguments for radical scepticism?’ Different theorists suggest different answers. Rorty traces scepticism to the ‘representationalist’ conception of belief and its close ally, the correspondence theory of truth. According to Rorty, if we think of beliefs as ‘representations’ that aim to correspond with mind-independent ‘reality’ (mind as the mirror of nature), we will always face insuperable problems when we try to assure ourselves that the proper alignment has been achieved. In Rorty’s view, by switching to a more ‘pragmatic’ or ‘behaviouristic’ conception of beliefs as devices for coping with particular, concrete problems, we can put scepticism, and hence the philosophical discipline that revolves around it, behind us once and for all.

Other theorists stress epistemological foundationalism as the essential background to traditional sceptical problems. There are reasons for preferring this approach. Arguments for epistemological conclusions require at least one epistemological premiss. It is, therefore, not easy to see how metaphysical or semantic doctrines of the sort emphasized by Rorty could, by themselves, generate epistemological problems such as radical scepticism. On the other hand, the case for scepticism’s essential dependence on foundationalist preconceptions is by no means easy to make. It has even been argued that this approach ‘gets things almost entirely upside down’. The thought here is that foundationalism is an attempt to save knowledge from the sceptic, and is therefore a reaction to, rather than a presupposition of, the deepest and most intuitive arguments for scepticism. Challenges like this certainly need to be met by death-of-epistemology theorists, who have sometimes been too ready to take as obvious scepticism’s dependence on foundationalist or other theoretical ideas. This reflects, perhaps, the dangers of taking one’s cue from historical accounts of the development of sceptical problems. It may be that, in the heyday of foundationalism, sceptical arguments were typically presented within a foundationalist context. But the crucial question is not whether some sceptical arguments take foundationalism for granted, but whether there are any that do not. This issue - indeed, the general issue of whether scepticism is a truly intuitive problem - can only be resolved by detailed analysis of the possibilities and resources of sceptical argumentation.

Another question concerns why anti-foundationalism should lead to the death of epistemology rather than to a non-foundational, hence ‘coherentist’, approach to knowledge and justification. It is true that death-of-epistemology theorists often characterize justification in terms of ‘coherence’, but typically this is to make a negative point. According to foundationalism, our beliefs fall naturally into categories that reflect objective, context-independent relations of epistemic priority. Thus, for example, experiential beliefs are thought to be naturally or intrinsically prior to beliefs about the external world: this relation of epistemic priority is, so to say, just a fact. Foundationalism is therefore committed to a strong form of ‘realism’ about epistemological facts and relations - call it ‘epistemological realism’. For some anti-foundationalists, talk of coherence is just a way of rejecting this picture in favour of the view that justification is a matter of accommodating new beliefs to relevant background beliefs in contextually appropriate ways, there being no context-independent, purely epistemological restrictions on what sorts of beliefs can confer evidence on what others. If this is all that is meant, talk of coherence does not point to a theory of justification so much as to the deflationary view that justification is not the sort of thing we should expect to have theories about. There is, however, a stronger sense of ‘coherence’ that does point to a genuine theory: the radically holistic account of justification, according to which inference depends on assessing our entire belief-system, or ‘total view’, in the light of abstract criteria of ‘coherence’. But it is questionable whether this view, which seems to demand privileged knowledge of what we believe, is an alternative to foundationalism or just a variant form of it. Accordingly, it is possible that a truly uncompromising anti-foundationalism will prove as hostile to traditional coherence theories as to standard foundationalist positions, reinforcing the connection between the rejection of foundationalism and the death of epistemology.

The death-of-epistemology movement has some affinities with the call for a ‘naturalized’ approach to knowledge. Quine argues that the time has come for us to abandon such traditional projects as refuting the sceptic by showing how empirical knowledge can be rationally reconstructed on a sensory basis, and hence justifying empirical knowledge at large. We should concentrate instead on the more tractable problem of explaining how we ‘project our physics from our data’, i.e., how retinal stimulations cause us to respond with increasingly complex sentences about events in our environment. Epistemology should be transformed into a branch of natural science, specifically experimental psychology. But though Quine presents this as a suggestion about how to continue doing epistemology, to philosophers who think that the traditional questions still lack satisfactory answers it looks more like abandoning epistemology in favour of another pursuit entirely. It is significant, therefore, that in subsequent writings Quine has been less dismissive of sceptical concerns. If this is how ‘naturalized’ epistemology develops, then, for the death-of-epistemology theorist, its claims will open up a new field for theoretical diagnosis.

Even so, the sceptical hypothesis is designed to impugn our knowledge of empirical propositions by showing that our experience is not a reliable source of beliefs. Thus one form of traditional scepticism, developed by the Pyrrhonists - namely, that reason is incapable of producing knowledge - is ignored by contemporary scepticism. The sceptical hypothesis can be employed in two distinct ways: it can be used to show that our beliefs fall short of being certain, and it can be used to show that they are not even justified. In fact, as will emerge, the first use depends upon the second.

Letting ‘p’ stand for any ordinary belief (e.g., there is a table before me), the first type of argument employing the sceptical hypothesis can be stated as follows:

1. If ‘S’ knows that ‘p’, then ‘p’ is certain.

2. The sceptical hypothesis shows that ‘p’ is not certain.

Therefore, ‘S’ does not know that ‘p’.
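
Schematically (a standard regimentation of the above, not wording from the original), writing Kp for ‘S knows that p’ and Cp for ‘p is certain for S’, the argument is a simple modus tollens:

```latex
% Premiss 1, premiss 2, conclusion of the first sceptical argument
\[
  Kp \rightarrow Cp, \qquad \neg Cp \;\;\therefore\;\; \neg Kp
\]
```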

No argument for the first premiss is needed, because this first form of the argument is concerned only with cases in which certainty is thought to be a necessary condition of knowledge. Issues surrounding certainty are inextricably connected with those concerning scepticism, for many sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. In part in order to avoid scepticism, anti-sceptics have generally held that knowledge does not require certainty. Such fallibilism is characteristic of pragmatism, according to which the meaning of a concept is to be sought in the experimental or practical consequences of its application; the epistemology of pragmatism is typically anti-Cartesian, fallibilistic and naturalistic (in some versions it is also realistic, in others not). Wittgenstein, for his part, claims, roughly, that propositions which are known are always subject to challenge, whereas, when we say that ‘p’ is certain, we are foreclosing challenges to ‘p’: as he puts it, knowledge and certainty ‘belong to different categories’ (Wittgenstein, 1969). The second use of the sceptical hypothesis is more direct: if justification is a necessary condition of knowledge, the sceptic can argue that our beliefs are not even justified, explicitly employing the premiss that ‘S’ is not justified in denying the sceptical hypothesis. The first premiss of that argument employs a version of the so-called ‘transmissibility principle’, discussed below. The background here is the standard analysis of propositional knowledge, suggested by Plato and Kant among others and made famous by Gettier’s attack on its sufficiency: the ‘tripartite definition’, on which knowledge is justified true belief, justification, truth and belief being three individually necessary and jointly sufficient conditions. The belief condition requires that anyone who knows that ‘p’ believe that ‘p’; the truth condition requires that any known proposition be true; and the justification condition requires that any known proposition be adequately justified, warranted or evidentially supported.

The second premiss of the argument relies on a Cartesian notion of doubt: roughly, a proposition ‘p’ is doubtful for ‘S’ if there is a proposition that (1) ‘S’ is not justified in denying and (2), if added to S’s beliefs, would lower the warrant of ‘p’. It seems clear that certainty is a property that can be ascribed either to a person or to a belief. On a Cartesian characterization of absolute certainty, a proposition ‘p’ is certain for ‘S’ just in case ‘S’ is warranted in believing that ‘p’ and there are absolutely no grounds whatsoever for doubting it. One could characterize those grounds in a variety of ways (Firth, 1976; Miller, 1978; Klein, 1981, 1990). For example, a ground ‘g’ for making ‘p’ doubtful for ‘S’ could be such that (a) ‘S’ is not warranted in denying ‘g’ and:



(B1) if ‘g’ is added to S’s beliefs, the negation of ‘p’ is warranted; or

(B2) if ‘g’ is added to S’s beliefs, ‘p’ is no longer warranted; or

(B3) if ‘g’ is added to S’s beliefs, ‘p’ becomes less warranted (even if only slightly so).
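
One compact way to regiment these alternatives (my notation, not that of the authors cited above): write w(p | Γ) for the degree of warrant ‘p’ has for ‘S’ relative to a belief set Γ, and B for S’s current beliefs. Then:

```latex
\begin{align*}
(B1)\quad & w(\neg p \mid B \cup \{g\}) \text{ suffices for } \neg p \text{ to be warranted};\\
(B2)\quad & w(p \mid B \cup \{g\}) \text{ falls below the threshold for warrant};\\
(B3)\quad & w(p \mid B \cup \{g\}) < w(p \mid B).
\end{align*}
```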



Warrant might also be increased rather than just ‘passed on’: the coherence of probable propositions with other probable propositions might, defensibly, make them all the more evident (Firth, 1964).

Nonetheless, since we can believe a proposition without believing all of the propositions entailed by it, and belief is a necessary condition of knowledge, it is clear that the principle, stated baldly, is false. Similarly, the principle fails for other uninteresting reasons. For example, if the entailment is a very complex one, ‘S’ may not be justified in believing what is entailed because ‘S’ does not recognize the entailment. In addition, ‘S’ may recognize the entailment but believe the entailed proposition for silly reasons. The interesting question is this: if ‘S’ is justified in believing (or knows) that ‘p’, and ‘p’ obviously (to ‘S’) entails ‘q’, and ‘S’ believes ‘q’ on the basis of believing ‘p’, is ‘S’ justified in believing (or in a position to know) that ‘q’?
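
Stated schematically (again my regimentation, with J_S for ‘justified for S’), the interesting transmissibility question asks whether:

```latex
\[
  \big( J_S(p) \;\wedge\; p \text{ obviously entails } q \;\wedge\;
        S \text{ believes } q \text{ on the basis of } p \big)
  \;\Rightarrow\; J_S(q)
\]
```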

Even so, Quine argued that the classical foundationalist project was a failure, both in its details and in its conception. On the classical view, an epistemological theory would tell us how we ought to arrive at our beliefs; only by developing such a theory and then applying it could we reasonably come to believe anything about the world around us. Thus, on this classical view, an epistemological theory must be developed independently of, and prior to, any scientific theorizing: proper scientific theorizing could only occur after such a theory was developed and deployed. This was Descartes’ view of how an epistemological theory ought to proceed; it is what he called ‘First Philosophy’. And it is this approach to epistemological issues that motivated not only foundationalism, but virtually all epistemological theorizing for the next 300 years.

Quine urged a rejection of this approach to epistemological questions. Epistemology, on Quine’s view, is a branch of natural science. It studies the relationship between human beings and their environment; in particular, it asks how it is that human beings can arrive at beliefs about the world around them on the basis of sensory stimulation, the only source of belief there is. Thus Quine commented that the relation between the meagre input [sensory stimulation] and the torrential output [our total science] ‘is a relation we are prompted to study for somewhat the same reasons that always prompted epistemology: namely, in order to see how evidence relates to theory, and in what ways one’s theory of nature transcends any available evidence’ (Quine, 1969). Quine spoke of this project of study as ‘epistemology naturalized’.

One important difference between this approach and more traditional ones becomes plain when the two are applied to sceptical questions. On the classical view, if we are to explain how knowledge is possible, it is illegitimate to make use of the resources of science: this would simply beg the question against the sceptic by making use of the very knowledge which he calls into question. Thus, Descartes’ attempt to answer the sceptic begins by rejecting all those beliefs about which any doubt is possible; Descartes must respond to the sceptic from a starting place which includes no beliefs at all. Naturalistic epistemologists, however, understand the demand to explain the possibility of knowledge differently. As Quine argues, sceptical questions arise from within science. It is precisely our success in understanding the world, and thus in seeing that appearance and reality may differ, that raises the sceptical question in the first place. We may thus legitimately use the resources of science to answer the question which science itself has raised. The question of how knowledge is possible should thus be construed as an empirical question: it is a question about how creatures such as we are (given what our best current scientific theories tell us we are like) may come to have accurate beliefs about the world (given what our best current scientific theories tell us the world is like). Quine suggests that the Darwinian account of the origin of species gives a very general explanation of why it is that we should be well adapted to getting true beliefs about our environment; and, although Quine himself does not suggest it, investigations in the sociology of knowledge are obviously relevant as well (Stich, 1990).

This approach to sceptical questions clearly makes them quite tractable, and its proponents see this, understandably, as an important advantage of the naturalistic approach. It is in part for this reason that current work in psychology and sociology is under such close scrutiny by many epistemologists. By the same token, detractors of the naturalistic approach argue that this way of dealing with sceptical questions simply bypasses the very question with which philosophers have long dealt. Far from answering the traditional sceptical question, it is argued, the naturalistic approach merely changes the topic (e.g., Stroud, 1981). Debates between naturalistic epistemologists and their critics thus frequently focus on whether this new way of doing epistemology adequately answers, transforms or simply ignores the questions which others see as central to epistemological inquiry. Some see the naturalistic approach as an attempt to abandon the philosophical study of knowledge altogether.

Precisely what the Quinean project amounts to is also a subject of some controversy. Both those who see themselves as opponents of naturalistic epistemology and those who are eager to sign on to the project frequently disagree about what the project is. The essay of Quine’s which prompted this controversy (Quine, 1969) leaves a great deal of room for interpretation.

At the centre of this controversy is the issue of the normative dimension of epistemological inquiry. Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalistic, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulation - a belief is justified, or constitutes knowledge, much as engineering principles articulate standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual statements.

Perhaps the central role which epistemological theories have traditionally played is normative: such theories were meant not merely to describe the various processes of belief acquisition and retention, but to tell us which of these processes we ought to be using. By describing his preferred epistemological approach as a ‘chapter of psychology and hence of natural science’ (Quine, 1969), Quine encouraged many to interpret his view as a rejection of the normative dimension of epistemological theorizing (Goldman, 1986; Kim, 1988). Quine has, however, since repudiated this reading: ‘naturalization of epistemology does not jettison the normative and settle for the indiscriminate description of ongoing procedures’ (Quine, 1986, 1999).

Unfortunately, matters are not quite as simple as this quotation makes them seem. Quine goes on to say: ‘For me, normative epistemology is a branch of engineering. It is the technology of truth-seeking . . . There is no question here of ultimate value, as in morals; it is a matter of efficacy for an ulterior end, truth or prediction. The normative here, as elsewhere in engineering, becomes descriptive when the terminal parameter is expressed’ (Quine, 1986). But this suggestion, brief as it is, is compatible with a number of different approaches.

On one approach, that of Alvin Goldman (Goldman, 1986), knowledge is just true belief which is produced by a reliable process, that is, a process which tends to produce true beliefs. Reliabilism, generally, is the view that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth, and variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief is knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the case that ‘p’. D.M. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth: a non-inferential belief qualifies as knowledge, Armstrong said, if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth according to the laws of nature.
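
The reliabilist schema can be given a toy computational rendering. The sketch below is purely illustrative (the threshold, the sampling procedure and all function names are my own assumptions, not Goldman’s or Ramsey’s apparatus): a belief counts as knowledge just in case it is true and the process that produced it has a sufficiently high truth ratio.

```python
import random

# Illustrative cutoff for a process that "tends to produce true beliefs";
# reliabilists themselves leave this parameter vague.
RELIABILITY_THRESHOLD = 0.9

def truth_ratio(process, trials=10_000):
    """Estimate how often a belief-forming process yields a true belief."""
    rng = random.Random(0)
    return sum(process(rng) for _ in range(trials)) / trials

def counts_as_knowledge(belief_is_true, process):
    """Toy reliabilism: knowledge = true belief + reliable producing process."""
    return belief_is_true and truth_ratio(process) >= RELIABILITY_THRESHOLD

perception = lambda rng: rng.random() < 0.95        # mostly truth-conducive
wishful_thinking = lambda rng: rng.random() < 0.30  # mostly not

print(counts_as_knowledge(True, perception))        # True
print(counts_as_knowledge(True, wishful_thinking))  # False: true but unreliably produced
```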

Yet the ‘technological’ question arises in asking which processes tend to produce true beliefs. Questions of this sort are clearly part of natural science. But there is also the account of knowledge itself: on Goldman’s view, the claim that knowledge is reliably produced true belief is arrived at independently of, and prior to, scientific investigation - it is a product of conceptual analysis. Given Quine’s rejection of appeals to meaning, the analytic-synthetic distinction, and thus the very enterprise of conceptual analysis, this position is not open to him. Nevertheless, it is for many an attractive way of allowing scientific theorizing to play a larger role in epistemology than it traditionally has, and thus one important approach which might reasonably be thought of as a naturalistic epistemology.

Those who eschew conceptual analysis will need another way of explaining how the normative dimension of epistemology arises within the context of empirical inquiry. Quine says that the normative is not mysterious once we recognize that it ‘becomes descriptive when the terminal parameter is expressed’. But why is it conduciveness to truth, rather than something else such as survival, that is at issue here? Why is it that truth counts as the goal at which we should aim? Is this merely a sociological point, that people do seem to have this goal? Or is conduciveness to truth itself instrumental to other goals in some way that makes it of special pragmatic importance? It is not that Quine has no way to answer these questions within the confines of the naturalistic position he defines; rather, there seem to be many different options open, and they need further exploration and elaboration.

A number of attempts to fill in the naturalistic account draw a close connection between how people actually reason and how they ought to reason, thereby attempting to illuminate the relation between the normative and the descriptive. One view has it that the two are identical (Kornblith, 1985; Sober, 1978). This is a form of ‘psychologism’: with respect to a given subject-matter, psychologism is the theory that the subject-matter in question can be reduced to, or explained in terms of, psychological phenomena - mental acts, events, states, dispositions and the like. But different criteria of legitimacy are normally considered appropriate for the different types of reasoning, or roles for the faculty of reason, commonly recognized in Western culture.

Modern science gave new impetus to affirmative theorizing about rationality. It was probably, at least in part, because of the important part played by mathematics in the new mechanics of Kepler, Galileo and Newton that some philosophers thought it plausible to suppose that rationality was just as much the touchstone of scientific truth as of mathematical truth. At any rate, that supposition seems to underlie the epistemologies of Descartes and Spinoza, for example, in which observation and experiment are assigned relatively little importance compared with the role of reason. Correspondingly, it was widely held that knowledge of right and wrong is knowledge of necessary truths that are to be discovered by rational intuition, in much the same way as the fundamental principles of arithmetic and geometry were believed to be discovered. For example, Richard Price argued that a ‘rational agent void of all moral judgement . . . is not possible to be imagined’ (1787).

But in modern philosophy the most influential sceptical challenge to everyday beliefs about rationality originated with Hume. Hume argued the impossibility of reasoning from the past to the future, or from knowledge about some instances of a particular kind of situation to knowledge about all instances of that kind. There would be nothing contradictory, he claimed, in supposing both that the sun had always risen in the past and that it would not rise tomorrow. In effect, therefore, Hume assumed that the only valid standards of cognitive rationality were these: the rationality that consists in conformity with the laws of deductive logic; the rationality exhibited by correct mathematical calculation; and the rationality of reasoning that depends for its correctness solely on the meanings of words belonging neither to logical nor to mathematical vocabulary - thus it would be rational to infer that, if two people are first cousins of one another, they share at least one grandparent. A further form of rationality is exhibited by ampliative induction that conforms to appropriate criteria, as in an inference from experimental data to a general theory that explains them. For example, a hypothesis about the cause of a phenomenon needs to be tested in a relevant variety of controlled conditions in order to eliminate other possible explanations of the phenomenon, and it would be irrational to judge the hypothesis to be well supported unless it had survived a suitable set of such tests.

Induction was not a rational procedure, on Hume’s view, because it could not be reduced to the exercise of reason in one or another of the former roles.

Hume’s argument about induction is often criticized for begging the question, on the grounds that induction should be held to be a valid process in its own right, with its own criteria of good and bad reasoning. But this response to Hume seems just to beg the question in the opposite direction. What is needed instead, perhaps, is to demonstrate a continuity between inductive and deductive reasoning, with the latter exhibited as a limiting case of the former (Cohen, 1989). Even so, Hume’s is not the only challenge that defenders of inductive rationality need to rebuff. Popper has also denied the possibility of inductive reasoning, and much-discussed paradoxes about inductive reasoning have been proposed by Goodman and Hempel.

Hempel’s paradox in the study of confirmation (1945) raises fundamental questions about what counts as confirming evidence for a universal hypothesis. To generate the paradox, three intuitive principles are invoked:

1. Nicod’s Principle (after Jean Nicod, 1930): instances of A’s that are B’s provide confirming evidence for the universal hypothesis that all A’s are B’s, while instances of A’s that are non-B’s provide disconfirming evidence. For example, instances of ravens that are black constitute confirming evidence for the hypothesis ‘All ravens are black’, while instances of non-black ravens are disconfirming.

2. Equivalence Principle: if ℯ is confirming evidence for hypothesis ‘h1’, and ‘h1’ is logically equivalent to hypothesis ‘h2’, then ℯ is confirming evidence for ‘h2’. For example, if instances of ravens that are black are confirming evidence that all ravens are black, they are also confirming evidence that all non-black things are non-ravens, since the latter hypothesis is logically equivalent to the former.

3. A Principle of Deductive Logic: a sentence of the form ‘All A’s are B’s’ is logically equivalent to one of the form ‘All non-B’s are non-A’s’.

Using these principles, the paradox is generated by supposing that all the non-black things so far observed have been non-ravens; these might include white shoes, green leaves and red apples. By Nicod’s principle, this is confirming evidence for the hypothesis ‘All non-black things are non-ravens’. (In the schematic version of Nicod’s principle, let the A’s be non-black things and the B’s be non-ravens.) But by principle (3) of deductive logic, the hypothesis ‘All non-black things are non-ravens’ is logically equivalent to ‘All ravens are black’. Therefore, by the equivalence principle (2), the fact that all the non-black things so far observed have been non-ravens is confirming evidence for the hypothesis that all ravens are black. That is, instances of white shoes, green leaves and red apples count as evidence for this hypothesis - which seems absurd. This is Hempel’s ravens paradox.
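
Principle (3), the equivalence on which the paradox turns, can be checked mechanically. The following sketch (an illustration of my own, using a small finite domain) verifies by brute force that ‘All A’s are B’s’ and ‘All non-B’s are non-A’s’ are true in exactly the same possible worlds:

```python
from itertools import product

def all_As_are_Bs(world):
    """'All A's are B's': every object that is A is also B."""
    return all(is_B for (is_A, is_B) in world if is_A)

def all_nonBs_are_nonAs(world):
    """The contrapositive form: every object that is not B is not A."""
    return all(not is_A for (is_A, is_B) in world if not is_B)

# A 'world' assigns each of three objects a pair of truth values:
# (is it an A?, is it a B?). Enumerate every such world.
kinds = list(product([True, False], repeat=2))
assert all(all_As_are_Bs(w) == all_nonBs_are_nonAs(w)
           for w in product(kinds, repeat=3))
print("The two hypotheses hold in exactly the same worlds.")
```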

Hume also argued, against philosophers like Richard Price (1787), that it was impossible for any reasoning to demonstrate the moral rightness or wrongness of a particular action: there would be nothing self-contradictory in preferring the destruction of the whole world to the scratching of one’s little finger. The only role for reason in decision-making was to determine the means to desired ends. Nonetheless, Price’s kind of ethical rationalism has been revived in more recent times by W.D. Ross (1930) and others. Perhaps Hume’s argument was based on question-begging assumptions, and it may be more cogent to point out that ethical rationalism implies a unity of moral standards that is not found to exist in the real world.

Probabilistic reasoning is another area in which the possibility of attaining fully rational results has sometimes been queried, as in the lottery paradox. And serious doubts have also been raised (Sen, 1982) about the concept of a rational agent that is required by classical models of economic behaviour. No doubt a successful piece of embezzlement may in certain circumstances further the purposes of an accountant, and need not be an irrational action. But is it entitled to the accolade of rationality? How should its immorality be weighed against its utility in the scales of practical reasoning? Or is honesty always the rationally preferable policy?

These philosophical challenges to rationality have been directed against the very possibility of there existing valid standards of reasoning for this or that area of enquiry. They have thus been concerned with the integrity of the concept of rationality, rather than with the extent to which that concept is in fact instantiated in the actual thoughts, procedures and actions of human beings. The latter issue seems at first sight to be a matter for psychological, rather than philosophical, research. Some of this research will no doubt be concerned with the circumstances under which people fail to perform in accordance with valid principles that they have nevertheless developed or adopted, as when they make occasional mistakes in their arithmetical calculations. But there is also room for research into the principles that various categories of the population have actually developed or adopted. Some of this would be research into the success with which the relevant principles have been taught, as when students are educated in formal logic or statistical theory. Some would be research into the extent to which those who have not had any relevant education are, or are not, prone to systematic patterns of error in their reasoning. And it is this last type of research that has claimed results with ‘bleak implications for human rationality’ (Nisbett and Borgida, 1975).

One robust result is the following (Wason, 1966). Logically untutored subjects are presented with four cards showing, respectively, ‘A’, ‘D’, ‘4’ and ‘7’, and they know that every card has a letter on one side and a number on the other. They are then given the rule, ‘If a card has a vowel on one side, it has an even number on the other’, and told that their task is to say which of the cards they need to turn over in order to find out whether the rule is true or false. The most frequent answers are ‘A and 4’ and ‘only A’, which are both wrong, while the right answer, ‘A and 7’, is given spontaneously by very few subjects. Wason interpreted this result as demonstrating that most subjects have a systematic bias towards seeking verification rather than falsification in testing the rule, and he regarded this bias as a fallacy of the same kind as Popper claimed to have discerned in the belief that induction could be a valid form of reasoning.
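
The logic of the task can be made explicit in a few lines. In this sketch (my own illustration), a card needs to be turned just in case its visible face could be half of a counterexample to the rule ‘vowel on one side implies even number on the other’:

```python
def is_vowel(face):
    return isinstance(face, str) and face in "AEIOU"

def is_odd(face):
    return isinstance(face, int) and face % 2 == 1

visible_faces = ["A", "D", 4, 7]

# A counterexample is a card with a vowel on one side and an odd number
# on the other. Only a visible vowel (hidden side might be odd) or a
# visible odd number (hidden side might be a vowel) can reveal one.
must_turn = [f for f in visible_faces if is_vowel(f) or is_odd(f)]
print(must_turn)  # ['A', 7] -- turning '4' can never falsify the rule
```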

Some of these results concern probabilistic reasoning. For example, in an experiment (Kahneman and Tversky, 1972) on statistically untutored students, the subjects are told that in a certain town blue and green cabs operate in a ratio of 85 to 15, respectively. A witness identifies the cab involved in an accident as green, and the court is told that, in the relevant circumstances, he says that a cab is blue when it is blue, or that a cab is green when it is green, in 80 per cent of cases. When asked the probability that the cab involved in the accident was blue, subjects tend to say 20 per cent. The experimenters have claimed that this robust result shows the prevalence of a systematic fallacy in ordinary people’s probabilistic reasoning: a failure to pay attention to prior probabilities. And it has been argued (Saks and Kidd, 1980) that the existence of several such results demonstrates the inherent unsoundness of mandating lay juries to decide issues of fact in a court of law.
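
The base-rate point can be made concrete by applying Bayes’ theorem to the numbers as given above (85 per cent blue cabs, 15 per cent green, a witness right 80 per cent of the time for either colour). On that calculation the posterior probability that the cab was blue is about 0.59, not the 0.20 subjects report; a short worked sketch:

```python
# Prior probabilities from the fleet ratio.
p_blue, p_green = 0.85, 0.15

# Witness accuracy: says "green" for a green cab, "blue" for a blue cab,
# in 80 per cent of cases; hence misreports colour in 20 per cent.
p_says_green_given_green = 0.80
p_says_green_given_blue = 0.20

# Total probability of the testimony "green".
p_says_green = (p_says_green_given_green * p_green +
                p_says_green_given_blue * p_blue)

# Posterior probability the cab was in fact blue, given the testimony.
p_blue_given_says_green = p_says_green_given_blue * p_blue / p_says_green
print(round(p_blue_given_says_green, 3))  # 0.586 -- not the 0.20 subjects give
```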

However, it is by no means clear that these psychological experimenters have interpreted their data correctly, or that the implications for human rationality are as bleak as they suppose (Cohen, 1981, 1982). For example, it might be argued that Wason’s experiment merely shows the difficulty that people have in applying the familiar rule of contraposition to artificial conditional relationships that lack any basis in causality or in any explanatory system. And as for the cabs, it might well be disputed whether the size of the fleet to which a cab belongs should be accepted as determining a prior probability that can count against a posterior probability founded on the causal relation between a witness’s mental powers and his courtroom testimony. To count against such a posterior probability, one would need a prior one that was also rooted in causality, such as the ratio in which cabs from the blue fleet and cabs from the green fleet (which may have different policies about vehicle maintenance and driver training) are involved in accidents of the kind in question. In other words, the subjects may interpret the question as concerning causally grounded probabilities, not probabilities conceived as relative frequencies that may be accidental. It is always necessary to consider whether the dominant responses given by subjects in such experiments should be taken, on the assumption that they are correct, as indicating how the task is generally understood - instead of as indicating, on the assumption that the task is understood exactly in the way intended, what errors are being made.

Finally, there is an obvious paradox in supposing that untutored human intuitions may be systematically erroneous over a wide range of issues in human reasoning. On what non-circular basis, other than such intuitions, can philosophers ultimately found their theories about the correct norms of deductive or probabilistic reasoning? No doubt an occasional intuition may have to be sacrificed in order to construct an adequately comprehensive system of norms. But empirical data seem in principle incapable of showing that the untutored human mind is deficient in rationality, since we need to assume the existence of this rationality - in most situations - in order to provide a basis for those normative theories in terms of which we feel confident in criticizing occasional errors of performance, in arithmetical calculations and elsewhere.

There has been a steady stream of two-way traffic between epistemology and psychology. Philosophers and psychologists have relied on epistemological doctrines and arguments to support psychological views; more recently, epistemologists have been drawn to psychology in an attempt to solve their own problems.

Many epistemological disagreements within psychology pertain in some way or other to disputes about behaviourism. The epistemological argument most widely used by behaviourists turns on the alleged unobservability of mental events or states. If cognitions are unobservable in principle, the argument runs, we have no warrant for believing that they exist and, hence, no warrant for accepting cognitive explanations. The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: each of us, through ‘introspection’, can observe at least some mental states, namely our own (at least those of which we are conscious). To this point behaviourists have made several replies. Some (e.g., Zuriff, 1985) argue that introspection is too unreliable for introspective reports to qualify as firm scientific evidence. Others have replied that introspection is private, and that this fact alone renders introspective data unsuitable as evidence in a science of behaviour. A more radical reply, advanced by certain philosophers, is that introspection is not a form of observation at all, but rather a kind of theorizing: when we report, on the basis of introspection, that we have a painful sensation, a thought or a mental image, we are theorizing about what is present. On this view, the fact that we introspect does not show that any mental states are observable.

In perception, it is only after long experience that one learns visually to identify things in one’s familiar surroundings without going through any conscious process of inference. The perceptual knowledge of the expert remains dependent, of course, but the expert has developed identificatory skills that no longer require the sort of conscious inferential processes that characterize a beginner’s efforts; and that much of our perceptual knowledge - even (sometimes) the most indirect and derived forms of it - can become second nature in this way does not mean that learning was not required to know in this way. According to the representationalist, however, sensory facts are, so to speak, right up against the mind’s eye, and one cannot be mistaken about them, for these facts are, in reality, facts about the way things appear to be. Normal perception of external conditions then turns out to be (always) a type of indirect perception: one sees that the appearances (of the tomato) are so-and-so and infers (the inference is typically said to be automatic and unconscious), on the basis of certain background assumptions (e.g., that there typically is a tomato in front of one when one has experiences of this sort), that there is a tomato in front of one. All knowledge of an objective reality, then, even what common sense regards as the most direct perceptual knowledge, is based on a still more direct knowledge of the appearances.

For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is ‘loaded’ with the theory that there is some regular, uniform correlation between the way things appear (known in a perceptually direct way) and the way things actually are (known, if known at all, in a perceptually indirect way).

Another view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though the direct realist is willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge and belief: the justification needed for the knowledge is right there in the experience itself.

To understand how this is supposed to work, consider an ordinary example in which ‘S’ identifies a banana (learns that it is a banana) by noting its shape and colour - perhaps even tasting and smelling it (to make sure it is not wax). In this case the perceptual knowledge that it is a banana is (the direct realist admits) indirect, dependent on S’s perceptual knowledge of its shape, colour, smell and taste: ‘S’ learns that it is a banana by seeing that it is yellow, banana-shaped, and so on. Nonetheless, S’s perception of the banana’s colour and shape is not indirect. ‘S’ does not see that the object is yellow by seeing (knowing, believing) anything more basic, either about the banana or about anything else, e.g., his own sensations of the banana. ‘S’ has learned to identify such features, but what ‘S’ learned to do is not to make an inference, even an unconscious inference, from other things he believes. What ‘S’ acquired was a cognitive skill, a disposition to believe of yellow objects he saw that they were yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S’s identificatory success will depend on his operating in certain special conditions, of course: ‘S’ will not, perhaps, be able visually to identify yellow objects in drastically reduced lighting, at funny viewing angles, or when afflicted with certain nervous disorders. But these facts about when ‘S’ can see that something is yellow do not show that his perceptual knowledge (that ‘a’ is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. They merely show that direct perceptual knowledge is the exercise of a skill, an identificatory skill that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane; he needs normal conditions to do what he has learned to do. So also with individuals who have developed perceptual (cognitive) skills: they need normal conditions to see, for example, that something is yellow. But they do not, any more than the basketball player, have to know they are in these conditions in order to do what being in these conditions enables them to do.

This means, of course, that for the direct realist direct perceptual knowledge is fallible and corrigible. Whether ‘S’ sees that ‘a’ is ‘F’ depends on his being caused to believe that ‘a’ is ‘F’ in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then ‘S’ sees (hence, knows) that ‘a’ is ‘F’; if they are not, he does not. Whether or not ‘S’ knows depends, then, not on what else (if anything) ‘S’ believes, but on the circumstances in which ‘S’ comes to believe. This being so, this type of direct realism is a form of externalism.

Epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication. The distinction has been applied mainly to theories of epistemic justification; it has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content. On one way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously, too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them). The most prominent recent externalist views, however, have been versions of ‘reliabilism’, whose main requirement for justification is, roughly, that the belief be produced by a cognitive process that makes it objectively likely that the belief is true (Goldman, 1986).

What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a compelling account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

The general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible, non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. For example, Goldman (1986) offers a one-page solution, in a footnote, of the problem of induction. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously the cogency of this argument depends on the plausibility of its two assumptions. An internalist can reply that it is not obvious that internalist epistemology is doomed to failure: the explanation for the present lack of success may simply be the extreme difficulty of the problems in question. And it can be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends essentially on the intuitive conviction that we do have reasons in our grasp for thinking that the various beliefs questioned by the sceptic are true - a conviction that the proponent of this argument must of course reject.

The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to putative intuitive counterexamples to externalism. One sort of counterexample challenges the necessity of the externalist conditions for epistemic justification by appealing to beliefs which intuitively seem justified but for which the standard externalist conditions fail. The standard examples are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. Cases of this general sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be a result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism (Swain, 1981; Alston, 1989), holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped by, or cognitively accessible to, the believer. In effect, of the two premisses needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other can be (and normally will be) purely external. At this point the internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premiss, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is a result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children and unsophisticated adults possess knowledge, despite the much weaker conviction (if such a conviction exists at all) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counterexamples of the sort discussed above, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, it may not have any serious bearing on traditional epistemological problems or on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than with knowledge.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person depends on facts about the environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by the experts in his social group, and so on - not just on what is going on internally in his mind or brain (Putnam, 1975; Burge, 1979).

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts from the inside, simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those external factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed (by way of justification) for such knowledge has been reduced. Background knowledge - in particular, the knowledge that the experience does indeed suffice for knowing - is not needed.

This means that the foundations of knowledge are fallible. Nonetheless, though fallible, they are in no way derived; that is what makes them foundations. Even if they are brittle, as foundations sometimes are, everything else rests upon them.

As it is, direct realism assumes that the objects of perception exist independently of any mind that might perceive them, and so it rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a ‘direct’ realism rules out those views defended under the rubric of ‘critical realism’ or ‘representative realism’, in which there is some non-physical intermediary - usually called a ‘sense-datum’ or a ‘sense-impression’ - that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is ‘immediately’ perceived rather than ‘mediately’ perceived. These terms are Berkeley’s, who claims (1713) that one might be said to hear a coach rattling down the street, but that this is mediate perception, as opposed to what is ‘in truth and strictness’ the immediate perception of a sound. Since the senses ‘make no inferences’, the perceiver is said to infer the existence of the coach, or to have it suggested to him, by means of hearing the sound. Thus, for Berkeley, the distinction between mediate and immediate perception is explained in terms of whether or not inference or suggestion is present in the perception itself.

Berkeley went on to claim that the objects of immediate perception - sounds, colours, tastes, smells, sizes and shapes - were all ‘ideas of the mind’. Yet he held that there was no further reality to be inferred from them, so that the objects of mediate perception - what we would call ‘physical objects’ - are reduced to being simply collections of ideas. Thus, Berkeley uses the immediate-mediate distinction to defend idealism. A direct realist, however, can also make use of Berkeley’s distinction to define his own position. D.M. Armstrong does this by claiming that the objects of immediate perception are all occurrences of sensible qualities, such as colours, shapes and sounds, and that these are all physical existents, not ideas or any sort of mental intermediary at all (Armstrong, 1961). Physical objects, all mediately perceived, are the bearers of these immediately perceived properties.

Berkeley’s and Armstrong’s way of drawing the distinction between mediate and immediate perception - by reference to inference or the lack of it - faces major difficulties. There are cases in which it is plausible to assert that someone perceived a physical object - say, a tree - even when that person was unaware of perceiving it. (We can infer from his behaviour in carefully walking around it that he did see it, even though he does not remember seeing it.) Armstrong would have to say that in such cases inference was present, because seeing a tree would be a case of mediate perception; but it would have to be an unconscious inference, and this seems baseless: there is no empirical evidence that any sort of inference was made at all.

It seems, moreover, that whether a person infers the existence of something from what he perceives is more a question of talent and training than of what the nature of the object inferred really is. For instance, given three different colour samples, a trained artist might not have to infer their differences; he might see their differences immediately. Someone with less colour sense, however, might see patches ‘A’ and ‘B’ as being the same in colour, while seeing that ‘A’ is darker than ‘C’. On this basis, he might then infer that ‘B’, too, is darker than ‘C’, so that inference can be present in determining a difference of colour - yet colour was supposed to be an object of immediate perception. On the other hand, a gamekeeper at the Metro Zoo in Toronto who sees a black panther might not have to infer that it is one; he sees it to be such straightaway. Someone unfamiliar with such animals, however, might have to infer this from the creature’s markings in identifying it. Hence, inference need not be present in cases of perceiving physical objects - yet the perceiving of physical objects was supposed to be mediate perception.
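
Set out schematically, with the lettering of the example above, the imputed inference is:

From: ‘A’ and ‘B’ are the same in colour, and ‘A’ is darker than ‘C’,
infer: ‘B’ is darker than ‘C’.

The conclusion concerns a relation of colour, yet it is reached by inference rather than read off immediately.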

A more straightforward way to distinguish between different objects of perception was advanced by Aristotle in De Anima, where he spoke of objects directly or essentially perceived as opposed to objects incidentally perceived. The former comprise perceptual properties, either those discerned by only one sense (the ‘proper sensibles’), such as colour, sound, taste, smell and tactile qualities, or else those discerned by more than one sense, such as size, shape and motion (the ‘common sensibles’). The objects incidentally perceived are the concrete individuals which possess the perceptual properties, that is, particular physical objects.

According to Aristotle’s direct realism, we perceive physical objects incidentally: that is, only by means of the direct or essential perception of certain properties that belong to such objects. In other words, by perceiving the real properties of things, and only in this way, can we be said to perceive the things themselves. These perceptual properties, though not existing independently of the objects that have them, are yet held to exist independently of the perceiving subject; and the perception of them is direct in that no mental intermediaries have to be perceived or sensed in order to perceive these real properties.

Aristotle’s way of defining his position seems superior to the psychological account offered by Armstrong, since it is unencumbered with the extra baggage of inference or suggestion. Yet a common interpretation of the Aristotelian view leads to grave difficulties. That interpretation identifies the property perceived with a property taken on by the sense organ, and is based on Aristotle’s saying that in perception the soul takes in the form of the object perceived without its matter. On this interpretation, it is easy to think of direct realism as being committed to the view that ‘colour as seen’ or ‘sound as heard’ exist independently as properties of physical objects. Such a view has been rightly disparaged by its critics and labelled ‘naive realism’: for it holds that the way things look or seem is the way things are, even in the absence of perceivers to whom they appear that way.

Similar reductions could be made with regard to the other sensible properties that seem to be perceiver-dependent: sound could be reduced to sound waves, tastes and smells to the particular shapes of the molecules that lie on the tongue or enter the nose, and tactual qualities such as roughness and smoothness to structural properties of the objects felt. All of these properties would be taken to be distinct from the perceptual experiences they typically produce when they cause changes in the perceiver’s sense organs. When critics complain that such a reduction leaves out the greenness of green and the yellowness of yellow (Campbell, 1976), the direct realist can answer that it is by identifying different colours with distinct light waves that we can best explain how perceivers in the same environment, with similar physical constitutions, can have similar colour experiences of green or of yellow.

A direct realist could claim that one directly perceives what is real only when there is no difference between the property proximately impinging on the sense organ and the property of the object which gives rise to the sense organ’s being affected. For colour, this would mean that the light waves reflected from the surface of the object must match those entering the eye; for sound, that the sound waves emitted from the object must match those entering the ear. A difference between the property at the object and that at the sense organ would result in illusion, not veridical perception. Perhaps this is simply a modern version of Aristotle’s idea that in genuine perception the soul (now the sense organ) takes in the form of the perceived object.

If it is protested that illusion might also result from an abnormal condition of the perceiver, this too can be accepted: if one’s colour experience deviated too far from the normal, even when the physical properties at the object and at the sense organ were the same, then misperception or illusion would result. But such illusion could only be noted against a backdrop of veridical perception of real properties. Thus, the chance of illusion due to subjective factors need not lead to the view that colours, sounds, tastes and smells exist merely ‘by convention’. The direct realist can insist that there must be a real basis in veridical perception for any such agreement to take place at all; and veridical perception is best explained in terms of the direct perception of the properties of physical objects. It is explained, in other words, when our perceptual experience is caused in the appropriate way.

This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any real properties at all, we have no knowledge of any such properties. But no view of perception alone is sufficient to refute such global scepticism (Pitcher, 1971). For such a refutation we must go beyond a theory of how best to explain our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.

In its best-known form, the adverbial theory of experience proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example:



(1) Rod is experiencing a pink square



is rewritten as:



Rod is experiencing (pink square)-ly



This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (1) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception), coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience which is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier and, in particular, as a modifier of a verb. If this is so, it is perhaps appropriate to regard it as a special kind of adverb at the semantic level.

Nevertheless, to be acquainted with ‘experience’ is to meet with it directly (as through participation or observation); it is an intricate and intimate affair, a knowledge of something based on personal exposure. However familiar, experience cannot be defined in an illuminating way; we nonetheless know what experiences are through acquaintance with some of our own, e.g., a visual experience of a given after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface - rough or smooth - or which might be part of a dream, or the product of a vivid sensory imagination).

Another core feature of the sorts of experience with which we are concerned is that they have representational content. The most obvious cases of experience with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but more commonly we do so with verbs combined with noun phrases specifying their contents, as in ‘Macbeth perceived a dagger visually’ and ‘Macbeth had a visual experience of a dagger’ (the latter being the reading with which we are concerned).

As in the case of other mental states and events with content, it is important to distinguish between the properties which an experience represents and the properties which it possesses. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property which it possesses, and it may even do so in virtue of possessing that property, as in the case of a rapidly changing (complex) experience representing something as changing rapidly; but this is the exception and not the rule.

Which properties can be (directly) represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., (apparent) colour and shape in the case of visual experience, and (apparent) surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes for pure data in experience to serve as logically certain foundations for knowledge. The successors to the empiricists’ concept of ideas of sense are the sense-data, a term introduced by Moore and Russell for the immediate objects of perceptual awareness, such as colour patches and shapes, usually supposed distinct from the surfaces of physical objects. The qualities of sense-data were supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual fields when conditions of perception change, while the physical objects remain constant.

Others do not think that this wish can be satisfied, and are more impressed with the role of experience in providing animals with ecologically significant information about the world around them. They claim that sense experiences represent properties, characteristics and kinds which are much richer and more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire. There is no space here to examine the factors relevant to a choice between these alternatives.

Given the modality and content of a sense experience, most of us will be aware of its character even though we cannot describe that character directly. This suggests that character and content are not really distinct, and that there is a close connection between them. For one thing, the relative complexity of the character of a sense experience places limitations on its possible content, e.g., a tactile experience of something touching one’s left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Furthermore, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences, e.g., the sort of gustatory experience which we have when eating chocolate would not represent chocolate unless it were normally caused by chocolate. Granting a connection between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.

Character and content are, nonetheless, irreducibly different, for the following reasons. (1) There are experiences which completely lack content, e.g., certain bodily pleasures. (2) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (3) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. And (4) the content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may come to have the content ‘singing bird’ only after the subject has learned something about birds.

According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one phenomenological and the other semantic.

In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us - be it an individual thing, an event, or a state of affairs.

The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following: (1) simple attributions of experience (e.g., ‘Rod is experiencing a pink square’) seem to be relational; (2) we appear to refer to objects of experience and to attribute properties to them (e.g., ‘The after-image which John experienced was green’); and (3) we appear to quantify over objects of experience (e.g., ‘Macbeth saw something which his wife did not see’).

The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data - private mental entities which actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property (e.g., redness) without representing it as having any subordinate determinate property (e.g., any specific shade of red), a sense-datum would have to have a determinable property without having any determinate property subordinate to it. Even more disturbing, experiences may apparently represent contradictory properties, so the sense-data theorist must either deny that there are such experiences or admit contradictory objects.

Yet experiences seem not to present us with bare properties (however complex), but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience and objects of perception in the case of experiences which constitute perception. According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theories may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experience, as in the work of G.E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. For representative realism, objects of perception (of which we are ‘indirectly aware’) are distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, may simply treat objects of perception as existing objects of experience. But most philosophers will feel that the Meinongian’s acceptance of impossible objects is too high a price to pay for these benefits.

A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-data theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

In view of these problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but is, nonetheless, answerable. The seemingly relational structure of attributions of experience is a challenge dealt with by the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves, and quantification over experiences tacitly typed according to content. (Thus, ‘The after-image which John experienced was green’ becomes ‘John’s after-image experience was an experience of green’, and ‘Macbeth saw something which his wife did not see’ becomes ‘Macbeth had a visual experience which his wife did not have’.)

Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions, e.g., Susy’s experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand or, if she does not acquire this belief, with a disposition to acquire it which has somehow been blocked.

This position has attractions, as it does full justice to the cognitive content of experience and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is undermined by its failure to accommodate the fact that experiences have a felt character which cannot be reduced to their content. The adverbial theory, for its part, is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience which does not require objects of experience. Unfortunately, the oddities of explicit adverbialization of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound basic intuitions, and there is reason to believe that an effective development of the theory is possible.

The relevant intuitions are (1) that when we say that someone is experiencing ‘an A’, or has an experience ‘of an A’, we are using this content-expression to specify the type of thing which the experience is especially apt to fit, (2) that doing this is a matter of saying something about the experience itself (and maybe also about the normal causes of such experiences), and (3) that there is no good reason to suppose that it involves the description of an object which the experience is ‘of’. Thus, the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.

Perhaps the most important criticism of the adverbial theory is the ‘many-property problem’, according to which the theory does not have the resources to distinguish between, e.g.,



(1) Frank has an experience of a brown triangle

And:

(2) Frank has an experience of brown and an experience of a triangle.



where (2) is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience which is both brown and triangular, while that of (2) allows for the possibility of two objects of experience, one brown and the other triangular. Note, however, that (1) is equivalent to:



(1*) Frank has an experience of something’s being both brown and triangular.



And (2) is equivalent to:



(2*) Frank has an experience of something’s being brown and an experience of something’s being triangular.



And the difference between these can be explained quite simply in terms of logical scope, without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase ‘a brown triangle’ in (1) does exactly the same work as the clause ‘something’s being both brown and triangular’ in (1*). This is perfectly compatible with the view that it also has the ‘adverbial’ function of modifying the verb ‘has an experience of’, for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there is something both brown and triangular before Frank).
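
Set out in logical notation (the predicate letters are introduced here only for illustration), the two readings are:

(1*) (∃x)(Brown(x) ∧ Triangular(x))

(2*) (∃x)Brown(x) ∧ (∃x)Triangular(x)

(1*) entails (2*), but not conversely, since the two quantifiers in (2*) may be satisfied by different things: this is just the difference of logical scope appealed to above.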

And yet a position which should be mentioned is the state theory, according to which a sense experience of an ‘A’ is an occurrent, non-relational state of the kind which the subject would be in when perceiving an ‘A’. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Perhaps it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuitions.

That is to say, most generally, one has intuitive knowledge that ‘p’ when:



1: One knows that ‘p’.

2: One’s knowledge that ‘p’ is immediate, and

3: One’s knowledge that ‘p’ is not an instance of the operations of any of the five senses (so knowledge of the nature of one’s own experience is not intuitive).



On this account neither mediate nor sensory knowledge is intuitive knowledge. Some philosophers, however, want to allow sensory knowledge to count as intuitive; to do this, they omit clause (3) above.

The two principal families of examples of mediate (i.e., not immediate) knowledge that have interested philosophers are knowledge through representation and knowledge by inference. Knowledge through representation occurs when the thing known is not what one appeals to as a basis for claiming to know it, as when one appeals to sensory phenomena as a basis for knowledge of the world (and the world is not taken to be a sense-phenomenal construct), or as when one appeals to words as a source of knowledge of the world (as when one claims that a proposition is true of the world solely by virtue of the meaning of the words expressing it).

(There are other idioms that are used to mark out the difference between intuitional and non-intuitional ways of knowing, such as knowing directly and knowing indirectly, or knowing by virtue of the presence of the thing known and knowing in the absence of the thing known. It is sometimes useful to speak of the object of knowledge being intuitively given, meaning that we can know things about it without mediation. The justification of a claim to knowledge by appeal to its object being intuitively given is surely as good as could be. What could be a better basis for a claim to knowledge than the object of knowledge itself, given just as it is?)

One of the fundamental problems of philosophy, overlapping epistemology and the philosophy of logic, is that of giving criteria for when a deductive inference is valid - criteria for when an inference does or can transmit knowledge or truth. There are in fact two very different proposals for solutions to this problem: one that slowly came into fashion during the early part of this century, and another that has been much out of fashion but is gaining in admirers. The former, which develops out of the tradition of Aristotelian syllogistic, holds that all valid deductive inferences can be analysed and paraphrased as follows:



The sentences occurring in the deduction are aptly paraphrased by sentences with an explicit, interpreted logical syntax, which in the main consists of expressions for logical operations, e.g., predication, negation, conjunction, disjunction, quantification, abstraction . . ., and

The validity of the inferences made from sentences in that syntax to sentences in that syntax is entirely a function of the meaning of the signs for the logical operations expressed in the syntax.



In particular, it is principally the meaning of the signs for logical operations that justifies taking considered rules of inference as valid (Koslow, 1991). Such a justification is given, for example, by Gottlob Frege (1848-1925), one of the great developers of this view of the proper criteria for valid deductive inference, who in the late nineteenth century gave us an interpreted logical syntax (and so a formal deductive logic) far greater and more powerful than had been available through the tradition of Aristotelian syllogistic:



A ➞ B is meant to be a proposition that is false when ‘A’ is true and ‘B’ is false; otherwise, it is true (Frege, 1964, paraphrased; variables restricted to the True and the False).



The following is a valid rule of inference: from ‘A’ and A ➞ B, infer ‘B’. For if ‘B’ were false, then since ‘A’ is true, A ➞ B would be false; but it is supposed to be true (Frege, 1964, paraphrased).
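
Frege’s justification can be displayed in a small truth-table - a modern reconstruction, not Frege’s own notation:

A     B     A ➞ B
T     T       T
T     F       F
F     T       T
F     F       T

The only row in which ‘A’ and A ➞ B are both true is the first, and there ‘B’ is true as well; the rule can never lead from truth to falsehood.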



Frege believed that the principal virtue of such formal-syntactical reconstructions of inferences - as valid solely on the basis of the meaning of the signs for the logical operations - was that it eliminated the dependence on intuition and let one see exactly what our inferences depend on, e.g.:



We divide all truths that require justification into two kinds, those for which the proof can be carried out purely by means of logic and those for which it must be supported by facts of experience.

. . . Now, when I came to consider the question to which of these two kinds the judgments of arithmetic belong, I first had to ascertain how far one could proceed in arithmetic by means of inference alone, with the sole support of those laws of thought that transcend all particulars. . . . To prevent anything intuitive (Anschauliches) from penetrating here unnoticed, I had to bend every effort to keep the chain of inference free from gaps (Frege, 1975).



In the literature most ready to hand, the alternative view was supported by Descartes and elaborated by John Locke, who maintained that inferences move best and most soundly when based on intuition (their word):



Syllogism serves our Reason, in that it shows the connexion of the Proofs, i.e., the connexion between premises and conclusion, in any one instance and no more; but in this it is of no great use, since the Mind can perceive such connexion, where it really is, as easily, nay, perhaps better, without Syllogism.

If we observe the Actings of our own Minds, we shall find, that we reason best and clearest, when we only observe the connexion of the Ideas, without reducing our Thoughts to any Rule of Syllogism (Locke, 1975, p. 670).



What is it that one is intuiting? Ideas, or meanings, and the relationships among them; ideas or meanings are directly given. The difference being marked by Locke is between (1) inferring ‘Socrates is mortal’ from the premises ‘All men are mortal’ and ‘Socrates is a man’ by appealing to the formal-logical rule ‘from: All A are B, and: C is an A, infer: C is a B’ - which is supposed to be done without any appeal to the intuitive meanings of ‘all’ and ‘is’ - and (2) seeing that ‘Socrates is mortal’ follows from ‘All men are mortal’ and ‘Socrates is a man’ by virtue of understanding (the meaning of) those informal sentences, without any appeal to the formal-logical rule. Locke is also making the point that inferences made on the basis of such an understanding of meanings are better, and more fundamental, than inferences made on the basis of an appeal to a formal-logical schema. Indeed, Locke would certainly maintain that such informal, intuitive inferences, made on the basis of understanding the meaning of sentences, serve better as a check on the correctness of formal-logical inferences than formal-logical inferences serve as a check on intuitive inferences.
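
For comparison, route (1) can be set out in modern quantifier notation (a reconstruction, not Locke’s or Frege’s own symbolism):

∀x (Man(x) ➞ Mortal(x)), Man(Socrates) ⊢ Mortal(Socrates)

The universal premise is instantiated at Socrates, and the rule already cited - from ‘A’ and A ➞ B, infer ‘B’ - yields the conclusion, at no point appealing to what ‘man’, ‘mortal’ or ‘Socrates’ mean.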

Such distrust of formal-logical inference, or greater trust in intuitive inference, has been promoted in recent times by Henri Poincaré and L.E.J. Brouwer (Detlefsen, 1991).

We might say that for Frege, too, logical inferences move by virtue of intuition of meaning - the meaning of the signs for logical operations - for we have seen how Frege appealed to such meanings in order to justify formal-logical rules of inference. Of course, once the formal-logical rules are justified, Frege is quite content to appeal to them in the construction of deductions, not returning each time to the intuited meanings of the logical signs. What is new in Frege is the conviction that inferences that proceed wholly on the basis of the logical signs - signs for logical operations - are complete with respect to logical implication: that if ‘B’ logically follows from ‘A’, then ‘B’ can be derived from ‘A’ by rules which mention only logical operations and not, e.g., the concrete meanings of predicate-expressions in the relevant propositions. Here lies a deep issue, one destined to become the principal issue in the philosophy and epistemology of logical theory: to what extent, in what measure, does intuition of the non-logical contents of propositions, i.e., content other than the meanings of the signs for logical operations, rightly sustain inference?

But one does not really need to reach for such an example: virtually all inferences set out in actual mathematical proofs proceed on the basis of intuitively given meaning content rather than by appeal to formal-logical rules, and it is easy to find examples of such proofs that clearly depend not on the meaning of the signs for logical operations, but rather on the non-logical content of the mathematical propositions. There is a good example in Hilbert (1971, p. 6, paraphrased).


Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things which may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth is convention. Thus A.J. Ayer, for example, argued that the only necessary truths are analytic statements, and that the latter rest entirely upon our commitment to use words in certain ways.

The general project of the positivistic theory of knowledge is to exhibit the structure, content and basis of human knowledge in accordance with empiricist principles. Since science is regarded as the repository of all genuine human knowledge, this becomes the task of exhibiting the structure, or, as it was called, ‘the logic’, of science. The theory of knowledge thus has three major tasks: (1) to analyse the meaning of the statements of science exclusively in terms of observations or experiences in principle available to human beings; (2) to show how certain observations or experiences serve to confirm a given statement in the sense of making it more warranted or reasonable; and (3) to show how non-empirical or a priori knowledge of the necessary truths of logic and mathematics is possible even though every matter of fact which can be intelligibly thought or known is empirically verifiable or falsifiable.

1. The slogan ‘the meaning of a statement is its method of verification’ expresses the empirical verification theory of meaning. It is more than the general criterion of meaningfulness according to which a sentence is cognitively meaningful only if it is empirically verifiable. It says, in addition, what the meaning of each sentence is: it is all those observations which would confirm or disconfirm the sentence. Sentences which would be verified or falsified by all the same observations are empirically equivalent or have the same meaning.

A sentence recording the result of a single observation is an observation or ‘protocol’ sentence; it can be conclusively verified or falsified on a single occasion. Every other empirical sentence implies an indefinitely large number of observation sentences which together exhaust its meaning, but at no time will all of them have been verified or falsified. To give an ‘analysis’ of the statements of science is to show how the content of each scientific statement can be reduced in this way to nothing more than a complex combination of directly verifiable ‘protocol’ sentences.

Verificationism is any view according to which the conditions of a sentence’s or a thought’s being meaningful or intelligible are equated with the conditions of its being verifiable or falsifiable. An explicit defence of the position would be a defence of the verifiability principle of meaningfulness. The exclusiveness of a scientific world view was to be secured by showing that everything beyond the reach of science is strictly or ‘cognitively’ meaningless, in the sense of being incapable of truth or falsity, and so not a possible object of knowledge. The criterion of the meaningful was found in the idea of empirical verification, and anything which does not fulfil this criterion is declared literally meaningless: there is no question of its truth or falsity, and it is not an appropriate object of enquiry. Moral, aesthetic and other evaluative judgments are neither confirmable nor disconfirmable on empirical grounds, and so are cognitively meaningless; they are at best expressions of feeling or preference which are neither true nor false. Whatever is cognitively meaningful and therefore factual is value-free. The positivists claimed that many of the sentences of traditional philosophy, especially those in what they called ‘metaphysics’, also lack cognitive meaning and say nothing that could be true or false. But they did not spend much time trying to show this in detail about the philosophy of the past; they were more concerned with developing a theory of meaning and of knowledge adequate to the understanding, and perhaps even the improvement, of science.

Implicit verificationism is often present in positions or arguments which do not defend that principle in general, but which reject suggestions to the effect that a certain sort of claim is unknowable or unconfirmable, on the sole ground that it would therefore be meaningless or unintelligible. Only if meaningfulness or intelligibility is indeed a guarantee of knowability or confirmability is such a position sound. If it is, nothing we understand could be unknowable or unconfirmable by us.

2. The observations recorded in particular ‘protocol’ sentences are said to confirm those ‘hypotheses’ of which they are instances. The task of confirmation theory is therefore to define the notion of a confirming instance of a hypothesis and to show how the occurrence of more and more such instances adds credibility or warrant to the hypothesis in question. A complete answer would involve a solution to the problem of induction: to explain how any past or present experience makes it reasonable to believe in something that has not yet been experienced. All inferences from past or present experience to an unobserved matter of fact ‘proceed upon’ the principle that the future will resemble the past. But no assurance can be given to that principle from reason alone: it is not impossible, in the sense of implying a contradiction, for the future to be different from the past. Whether the future will resemble the past is a contingent matter of fact. Experience is therefore needed to assure us of that principle. But experience cannot do so alone, since past experience can tell us only how things have been in the past; something more than past experience is needed.

But reason, even when combined with past experience, cannot be what leads us to believe that the future will resemble the past. If it did, it would be by means of an inference from past experience to the principle that the future will resemble the past. And, as before, any such inference would have to ‘proceed upon the supposition’ that the future will resemble the past - which would be ‘evidently going in a circle, and taking that for granted, which is the very point in question’.

3. Logical and mathematical propositions, and other necessary truths, do not predict the course of future sense experience; they cannot be empirically confirmed or disconfirmed, but they are essential to science and so must be accounted for. They are one and all ‘analytic’ in something like the Kantian sense: true solely in virtue of the meanings of their constituent terms. They serve only to make explicit the contents of, and the logical relations among, the terms or concepts which make up the conceptual framework through which we interpret and predict experience. Our knowledge of such truths is simply knowledge of what is and what is not contained in the concepts we use.

Nonetheless, the Lockean/Kantian distinction is based on a narrow notion of concept, on which concepts are senses of expressions in the language. The broad Fregean/Carnapian distinction is based on a broad notion of concept, on which concepts are conceptions - often scientific ones - about the nature of the referents of expressions (Katz, 1971 and Putnam, 1981). The conflation of these two notions of concept produces the illusion of a single concept with the content of philosophical, logical and mathematical conceptions, but with the status of linguistic concepts. All that is necessary is to keep the original, narrow distinction from being broadened. This ensures that propositions expressing the content of broad concepts cannot receive the easier justification appropriate to narrow ones, and the narrow notion allows us to pose the problem of how knowledge of necessary truths is possible - logical and mathematical knowledge included. Quine, then, did not undercut the foundations of rationalism, and hence a serious reappraisal of the new empiricism and naturalized epistemology is, to say the least, very much in order (Katz, 1990).

Experience can perhaps show that a given concept has no instances, or that it is not a useful concept for us to employ, but that would not show that what we understand to be included in that concept is not really included in it, or that it is not the concept we take it to be. Our knowledge of the constituents of, and the relations among, our concepts is therefore not dependent on experience: it is a priori, it is knowledge of what holds necessarily, and all necessary truths are ‘analytic’; there is no synthetic a priori knowledge. One who characterizes a priori knowledge in terms of justification which is independent of experience is faced with the task of articulating the relevant sense of ‘experience’. Proponents of the a priori often cite ‘intuition’ or ‘intuitive apprehension’ as the source of a priori justification. Recent attacks on the existence of a priori knowledge fall into three general camps. Some, such as Putnam (1979) and Kitcher (1983), begin by providing an analysis of the concept of a priori knowledge and then argue that alleged examples of a priori knowledge fail to satisfy the conditions specified in the analysis. Attacks of the second kind proceed independently of any particular analysis of the concept, and focus on the alleged source of such knowledge: Benacerraf (1973), for example, argues that intuition, which proponents of the a priori take to be the source of mathematical knowledge, cannot fulfil that role. A third form of attack is to consider prominent examples of propositions alleged to be knowable only a priori and to show that they can be justified by experiential evidence. The Kantian position that has received most attention is the claim that some a priori knowledge is of synthetic a priori propositions. Initially, attacks were concerned exclusively with some of Kant’s particular examples of alleged synthetic a priori knowledge, above all the claim that the truths of arithmetic are synthetic.

Kantian strategies hold that mathematical knowledge is a necessary condition of empirical knowledge. Kant argued that the laws of mathematics are actually constraints on our perception of space and time. In knowing mathematics, then, we know only the laws of our own perception. Physical space in itself, for all we know, may not obey the laws of Euclidean geometry and arithmetic, but the world as perceived by us must; and mathematics is objective - or ‘intersubjective’ - in the sense that it holds good for all portions of the whole human race, past, present and future. For this reason, there is no problem with the applicability of mathematics in empirical science - or so the Kantian claims.

The distinction between truths of reason and truths of fact is associated with Leibniz, who declares that there are only two kinds of truths. Truths of reason are either explicit identities, i.e., of the form ‘A is A’, ‘AB is B’, and so forth, or they are reducible to this form by successively substituting equivalent terms. Leibniz also says that truths of reason ‘rest on the principle of contradiction, or identity’ and that they are necessary propositions, true of all possible worlds. Some examples are ‘All unmarried men are unmarried’ and ‘All bachelors are unmarried’: the first is already of the form ‘AB is B’, and the latter can be reduced to this form by substituting ‘unmarried man’ for ‘bachelor’. Other examples, or so Leibniz believes, are ‘God exists’ and the truths of logic, arithmetic and geometry.

Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is a posteriori, by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise, and they hold of the actual world but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.

Necessary truths are ones which must be true, or whose opposite is impossible. Contingent truths are those that are not necessary, and whose opposite is therefore possible. (1)-(3) below are necessary, (4)-(6) contingent:



(1) It is not the case that it is raining and not raining.

(2) 2 + 2 = 4.

(3) All bachelors are unmarried.

(4) It seldom rains in the Sahara.

(5) There are more than four states in the US of A.

(6) Some bachelors drive Maseratis.



Plantinga (1974) characterizes the sense of necessity illustrated in (1)-(3) as ‘broadly logical’, for it includes not only truths of logic, but those of mathematics, set theory, and other quasi-logical truths. Yet it is not so broad as to include matters of causal or natural necessity, such as:



(7) Nothing travels faster than the speed of light.



One would like an account of our distinction and a criterion by which to apply it. Some suppose that necessary truths are those we know a priori. But we lack a criterion for a priori truths, and there are necessary truths we don’t know at all (e.g., undiscovered mathematical ones). It won’t help to say that necessary truths are ones which it is possible, in the broadly logical sense, to know a priori, for this is circular. Finally, Kripke (1972) and Plantinga (1974) argue that some contingent truths are knowable a priori. Similar problems face the suggestion that necessary truths are the ones we know with certainty: we lack a criterion for certainty; there are necessary truths we don’t know; and, barring dubious arguments for scepticism, it is reasonable to suppose that we know some contingent truths with certainty.

Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity (i.e., of the form ‘A is A’, ‘AB is B’) or is reducible to an identity by successively substituting equivalent terms. (Thus (3) above might be so reduced by substituting ‘unmarried man’ for ‘bachelor’.) This has several advantages over the ideas of the previous paragraph. First, it explicates the notions of necessity and possibility and seems to provide a criterion we can apply. Second, because explicit identities are self-evident a priori propositions, the theory implies that all necessary truths are knowable a priori; but it does not entail that we actually know all of them, nor does it define ‘knowable’ in a circular way. Third, it implies that necessary truths are knowable with certainty, but does not preclude our having certain knowledge of contingent truths by means other than a reduction.
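
Schematically, with ‘A’ for ‘man’ and ‘B’ for ‘unmarried’, the reduction of (3) runs:

All bachelors are unmarried
= All unmarried men are unmarried (substituting ‘unmarried man’ for ‘bachelor’)

and the result is of the explicit-identity form ‘AB is B’.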

Nevertheless, this view is also problematic. Leibniz’s examples of reduction are too sparse to prove a claim about all necessary truths, and some of his reductions are, moreover, deficient: Frege pointed out, for example, that Leibniz’s proof of ‘2 + 2 = 4’ presupposes the principle of association and so does not depend only on the principle of identity. More generally, it has been shown that arithmetic cannot be reduced to logic, but requires the resources of set theory as well. Finally, there are other necessary propositions (e.g., ‘Nothing can be red and green all over’) which do not seem to be reducible to identities and which Leibniz does not show how to reduce.
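
The proof in question - reconstructed here from the definitions 2 = 1 + 1, 3 = 2 + 1 and 4 = 3 + 1 - makes the suppressed premise visible:

4 = 3 + 1 = (2 + 1) + 1 = 2 + (1 + 1) = 2 + 2

The third equality silently regroups the brackets; that is, it assumes the principle of association, (a + b) + c = a + (b + c), which the definitions and the principle of identity alone do not supply.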

Leibniz’s account of truth and knowledge rests on three fundamental theses:



1. Truth is fundamentally a matter of the containment of the concept of the predicate of a proposition in the concept of its subject.

2. The distinction between necessary truth and contingent truth is absolute, and in no way relative to a corresponding distinction between divine and human sources of knowledge.

3. A proposition is known a priori by a finite mind only if that proposition is a necessary truth (Parkinson and Morris, 1973).



Hence, although Leibniz commenced with an account of truth that one might expect to lead to the conclusion that all knowledge is ultimately a priori knowledge, he set out to avoid that conclusion.


Leibniz’s account of our knowledge of contingent truths is remarkably similar to what we would expect to find in an empiricist’s epistemology. Leibniz claimed that our knowledge of particular contingent truths has its basis in sense perception. He argued that our knowledge of universal contingent truths cannot be based entirely on simple enumerative inductions, but must be supplemented by what he called ‘the conjectural method a priori’, which he described as follows:



The conjectural method a priori proceeds by hypotheses, assuming certain causes, perhaps without proof, and showing that the things that happen would follow from those assumptions. A hypothesis of this kind is like the key to a cryptograph, and the simpler it is, and the greater the number of events that can be explained by it, the more probable it is (Loemker, 1969).



Leibniz’s conception of the conjectural method a priori is a precursor of the hypothetico-deductive method. He placed emphasis on the need for a formal theory of probability, in order to formulate an adequate theory of our knowledge of contingent truths.

Leibniz sided with his rationalist colleagues, e.g., Descartes, in maintaining, contrary to the empiricists, that, since thought is an essential property of the mind, there is no time at which a mind exists without a thought, a perception. But Leibniz insisted on a distinction between having a perception and being aware of it. He argued forcefully, on both empirical and conceptual grounds, that finite minds have numerous perceptions of which they are not aware at the time at which they have them (Remnant and Bennett, 1981).

Leibniz’s rationalism in epistemology is most evident in his account of our a priori knowledge, that is, according to (3), our knowledge of necessary truths. One of Leibniz’s persistent criticisms of Locke’s empiricism is the thesis that Locke’s theory of knowledge provides no explanation of how we know of certain propositions that they are not only true, but necessarily true. Leibniz argued that Locke offered no adequate account of how we know propositions to be true whose justification does not depend upon experience - hence, that Locke had no acceptable account of our a priori knowledge. Leibniz’s diagnosis of Locke’s failing was straightforward: Locke lacked an adequate account of our a priori knowledge because, on Locke’s theory, all the materials for the justification of beliefs must come from experience, thus overlooking what Leibniz took to be the source of our a priori knowledge, namely, what is innate to the mind. Leibniz summarized his dispute with Locke thus:



Our differences are on matters of some importance. It is a matter of knowing if the soul in itself is entirely empty like a writing tablet on which nothing has as yet been written . . . And if everything inscribed there comes solely from the senses and experience, or if the soul contains originally the sources of various concepts and doctrines that external objects merely reveal on occasion . . . (Remnant and Bennett, 1981).



Leibniz argued for the second alternative, the theory of innate doctrines and concepts. The thesis that some concepts are innate to the mind is crucial to Leibniz’s philosophy. He held that the most basic metaphysical concepts, e.g., the concepts of substance and causation, are innate. Hence, he was unmoved by the inability of empiricists to reconstruct full-blown versions of those concepts from the materials of sense experience.

Innate ideas have been variously defined by philosophers, either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense).

Understood in either way, they were invoked to account for truths that do not admit of experiential verification, such as those of mathematics, or to justify moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.

One difficulty with the doctrine is that it is sometimes formulated as a doctrine about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: Our innate idea of God, for example, is taken as a source for the meaning of the word ‘God’. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties, the doctrine of innate ideas had a long and influential history up until the eighteenth century, and the concept is employed in Noam Chomsky’s influential account of the mind’s linguistic capacities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize that some propositions are certainly true, where that recognition cannot be justified solely on the basis of an appeal to sense experience. Thus, Plato argued that recognition of mathematical truths, for example, could only be explained on the assumption of some form of recollection. ‘Recollection’, or anamnesis, has several roles in Plato’s epistemology. In the Meno, it is invoked to explain how an uneducated boy answers a geometrical problem he has never been taught; at the same time, it is used to solve a paradox about inquiry and learning. In the Phaedo, it is said to explain our possession of concepts, construed as knowledge of Forms, which we supposedly could not have gained from experience. Recollection also appears in the Phaedrus, but is notably absent from important presentations of Plato’s epistemological views in the Republic and other works. Since there was no plausible post-natal source, the recollection must refer back to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were important truths innate in human beings, and that it was the senses which hindered their proper apprehension.

The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages, and the doctrine featured powerfully in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, for example, and our coming to recognize that God must necessarily exist, are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists, such as Henry More and Ralph Cudworth, lent the doctrine considerable support.

Locke’s rejection of innate ideas and his alternative empiricist account were powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant’s refinement of the classification of propositions with the fourfold distinction, analytic/synthetic and a priori/a posteriori, did nothing to encourage a return to the innate ideas doctrine, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and providing a basis for regarding some propositions as necessarily true.

Chomsky’s revival of the term in connection with his account of human speech acquisition has once more made the issue topical. He claims that the principles of language and ‘natural logic’ are known unconsciously and are a precondition for language acquisition. But for his purposes innate ideas must be taken in a strongly dispositional sense - so strong that it is far from clear that Chomsky’s claims are in conflict with empiricist accounts, as some (including Chomsky) have supposed. Quine, for example, sees no clash with his own version of empiricist behaviourism, in which old talk of ideas is eschewed in favour of dispositions to observable behaviour.

Until very recently it could have been said that most approaches to the philosophy of science were ‘cognitive’. This includes ‘logical positivism’, as nearly all of those who wrote about the nature of science were in agreement that science ought to be ‘value-free’. This had been a particular emphasis on the part of the first positivists, as it would be for their twentieth-century successors. Science, so it is said, deals with ‘facts’, and facts and values are irreducibly distinct. Facts are objective: They are what we seek in our knowledge of the world. Values are subjective: They bear the mark of human interest, they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, and fact ought not to be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation between fact and value rather differently. But the weight of three centuries of largely empiricist reflection lay with the ‘new’ sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose distinction belongs to the history of physics and astronomy rather than to natural philosophy.

The philosophical importance of Galileo’s science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelean science, (2) his proofs that mathematics is applicable to the real world, (3) his conceptually powerful use of experiments, both actual and imaginary, (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes, and (5) his unwavering confidence in the new style of theorizing that would come to be known as ‘mechanical explanation’.

A century later, the maxim that scientific knowledge is ‘value-laden’ seems almost as entrenched as its opposite was earlier. It is supposed that the gulf between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and value may be closely intertwined after all. What has happened to bring about such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato’s time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind - which has been quite intensive - has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of ‘causation’. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the ‘causes’ of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of some other.

Modern discussion of causation starts with the Scottish philosopher, historian, and essayist David Hume (1711-76), who argued that causation is simply a matter of constant conjunction. Hume denies that we have innate ideas, that the causal relation is observably anything other than ‘constant conjunction’, that there are observable necessary connections anywhere, and that there is either an empirical or a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause. He likewise denies that the dispute between advocates of free-will and determinism is irresolvable, that extreme scepticism can be sustained in practice, and that we can find any experiential source for our ideas of self, substance, or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves a number of questions open. Firstly, there is the problem of distinguishing genuine ‘causal laws’ from ‘accidental regularities’. Not all regularities are sufficiently law-like to underpin causal relationships. Being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that those screws are made of copper because they are in my desk. Secondly, the idea of constant conjunction does not give a ‘direction’ to causation. Causes need to be distinguished from effects. But knowing that A-type events are constantly conjoined with B-type events does not tell us which of ‘A’ and ‘B’ is the cause and which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there is a problem about ‘probabilistic causation’. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effects probable?
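The regularity account, and the symmetry problem it faces, can be made explicit in a schematic rendering - one way of regimenting the formulation above, not Hume’s own notation:

\[
c \text{ causes } e \;\iff\; C(c) \,\wedge\, E(e) \,\wedge\, \forall x\,\big(C(x) \rightarrow \exists y\,(E(y) \wedge \mathrm{Conj}(x,y))\big)
\]

Here ‘C’ and ‘E’ are the types to which the cause and the effect belong, and ‘Conj(x, y)’ says only that x and y occur conjointly. Since conjoint occurrence is a symmetric relation, nothing on the right-hand side determines which of ‘C’ and ‘E’ is the cause - which is the second difficulty restated.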

Many philosophers of science during the past century have preferred to talk about ‘explanation’ rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises which include one or more laws. As applied to the explanation of particular events, this implies that one particular event can be explained if it is linked by a law to some other particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume’s constant conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume’s: (1) In appealing to deduction from ‘laws’, it needs to explain the difference between genuine laws and accidentally true regularities; (2) it allows the ‘explanation’ of causes by effects, as well as of effects by causes - after all, it is as easy to deduce the height of a flag-pole from the length of its shadow and the laws of optics as the other way round; (3) are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely, in explaining why some particular person develops cancer?
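The covering-law (or deductive-nomological) model is standardly displayed as an argument schema, with laws and particular conditions above the line and the event to be explained below it:

\[
\frac{L_1, \ldots, L_n \qquad C_1, \ldots, C_m}{E}
\]

where L_1, . . ., L_n are laws, C_1, . . ., C_m are statements of particular conditions, and E describes the event to be explained. Difficulty (2) is then visible at a glance: The schema is equally well satisfied when ‘E’ reports the flag-pole’s height and the conditions include the shadow’s length.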

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. By introducing ‘teleological considerations’, one such account views beliefs as states with a biological purpose and analyses their truth conditions specifically as those conditions that they are biologically supposed to covary with.

A teleological theory of representation needs to be supplemented with a philosophical account of biological purpose, generally a selectionist account, according to which item ‘F’ has purpose ‘G’ if and only if it is now present as a result of past selection by some process which favoured items with ‘G’. So, a given belief type will have the purpose of covarying with ‘P’, say, if and only if some mechanism has selected it because it has covaried with ‘P’ in the past.
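The two steps of this selectionist account can be set out schematically - a regimentation of the paragraph above, not a formula drawn from any particular theorist:

\[
\mathrm{Purpose}(F, G) \;\iff\; F \text{ is now present because past items with } F \text{ were selected for doing } G
\]
\[
\mathrm{Content}(b) = P \;\iff\; b \text{ was selected because past tokenings of } b \text{ covaried with } P
\]

The second biconditional simply applies the first to the special case in which the item is a belief type and its purpose is covariation.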

Along the same lines, a teleological theory holds that ‘r’ represents ‘x’ if it is r’s function to indicate (i.e., covary with) ‘x’. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical theories of functions and a-historical theories. Historical theories individuate functional states (hence, contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was ‘learned’, or the way it evolved. A historical theory might hold that the function of ‘r’ is to indicate ‘x’ only if the capacity to token ‘r’ was developed (selected, learned) because it indicates ‘x’. Thus, a state physically indistinguishable from ‘r’ (physical states being a-historical) but lacking r’s historical origins would not represent ‘x’ according to historical theories.

The American philosopher of mind Jerry Alan Fodor (1935-) is known for a resolute ‘realism’ about the nature of mental functioning, taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’ such as the American philosopher Donald Herbert Davidson (1917-2003), or ‘instrumentalists’ about mental ascription, such as the British philosopher of logic and language Michael Anthony Eardley Dummett (1925-). In recent years he has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, the teleological suggestion is queried on points concerning ‘causation’ and ‘content’, and ultimately a fundamental problem must be considered. Suppose that there is a causal path from A’s to ‘A’s and a causal path from B’s to ‘A’s, and that our problem is to find some difference between B-caused ‘A’s and A-caused ‘A’s in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, although A’s and B’s both cause ‘A’s as a matter of fact, perhaps we can assume that only A’s would cause ‘A’s in - as one might say - ‘optimal circumstances’. We could then hold that a symbol expresses its ‘optimal property’, viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the tokens that eventuate are ipso facto wild.

Suppose, now, that this story about ‘optimal circumstances’ is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that it be possible to specify the optimal circumstances for tokening a mental representation in terms that are not themselves either semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the sort of semantical notions that the theory is supposed to naturalize.) The suggestion - to put it in a nutshell - is that appeals to ‘optimality’ should be buttressed by appeals to ‘teleology’: Optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning ‘as they are supposed to’. In the case of mental representations, these would be paradigmatically circumstances where the mechanisms of belief fixation are functioning as they are supposed to.

So, then: The teleology of the cognitive mechanisms determines the optimal conditions for belief fixation, and the optimal conditions for belief fixation determine the content of beliefs. So the story goes.

The objection, in brief, is this: The teleology story perhaps strikes one as plausible in that it understands one normative notion - truth - in terms of another normative notion - optimality. But the appearance is spurious: There is no guarantee that the kind of optimality that teleology reconstructs has much to do with the kind of optimality that the explication of ‘truth’ requires. When mechanisms of repression are working ‘optimally’ - when they’re working ‘as they’re supposed to’ - what they deliver are likely to be ‘falsehoods’.

Once again, there is no obvious reason why conditions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal conditions for fixing beliefs about very large objects are different from the optimal conditions for fixing beliefs about very small ones, and different again from the optimal conditions for fixing beliefs about sights. But this raises the possibility that if we are to say which conditions are optimal for the fixation of a belief, we will have to know what the content of the belief is - what it is a belief about. Our explication of content would then require a notion of optimality, whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Functional role theories, by contrast, hold that r’s representing ‘x’ is grounded in the functional role ‘r’ has in the representing system, i.e., in the relations imposed by specified cognitive processes between ‘r’ and other representations in the system’s repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That being said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science has from its very beginning been ‘interdisciplinary’ in character, and is in effect the joint property of psychology, linguistics, philosophy, computer science and anthropology. There is, therefore, a great variety of different research projects within cognitive science, but its central assumption is that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: Human beings do information processing. Computers are designed precisely to do information processing. Therefore, one way to study human cognition - perhaps the best way to study it - is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind: Others think that the mind is literally a computer program. But it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

The Essay Concerning Human Understanding is the first modern systematic presentation of empiricist epistemology, and as such had important implications for the natural sciences and for philosophy of science generally. Like his predecessor Descartes, the English philosopher John Locke (1632-1704) began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind’s scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas which furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge. In this he was consciously opposing Descartes, who had argued that it is possible to come to knowledge of fundamental truths about the natural world through reason alone. Descartes (1596-1650) had argued that we can come to know the essential nature of both ‘mind’ and ‘matter’ by pure reason. Locke accepted Descartes’s criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience. Information that came in via the five senses (ideas of sensation) and ideas engendered by inner experience (ideas of reflection) supplied the building blocks of the understanding.

Locke combined his commitment to ‘the new way of ideas’ with an espousal of the ‘corpuscular philosophy’ of the Irish scientist Robert Boyle (1627-92). This, in essence, was an acceptance of a revised, more sophisticated account of matter and its properties that had been advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to justify some kind of corpuscular account of matter and its properties. He called the latter qualities, which he distinguished as primary and secondary. (The distinction between primary and secondary qualities may be reached by two rather different routes: Either from the nature or essence of matter, or from the nature and essence of experience, though in practice these have tended to run together. The former considerations make the distinction seem an a priori, or necessary, truth about the nature of matter, while the latter make it appear an empirical hypothesis.) Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of the secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and empiricism - Descartes, too, had argued strongly for it, and by Locke’s time it had gained nearly universal acceptance among natural philosophers - and Locke embraced it within his more comprehensive empirical philosophy. But Locke’s empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, such as our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination - a parallelogram, for example. But simple ideas can never be created by us: We either have them or not, and characteristically they are caused, for example, by the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, and they are determined by our experience, our knowledge is in a very strict and uncompromising way limited. Besides, our experiences are always of the particular, never of the general. It is this particular simple idea or that particular complex idea that we apprehend. We never, in that sense, apprehend a universal truth about the natural world, but only particular instances. It follows from this that all claims to generality about that world - for example, all claims to identify what were then beginning to be called the laws of nature - must to that extent go beyond our experience and thus be less than certain.

The Scottish philosopher, historian, and essayist David Hume (1711-76) gave the classic discussion of this problem; it appears in both his major philosophical works, the ‘Treatise’ (1739) and the ‘Enquiry’ (1777). The discussion is couched in terms of the concept of causality; where we are accustomed to talk of laws, Hume contends, three ideas are involved:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Tenets (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions unproblematically related to the ideas of regular concomitance and of contiguity. But the third requirement is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the observed correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another which is logically independent of it. Nor is this necessity logical, since, as Hume observes, one can jointly assert the existence of the cause and deny the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query, Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: Our expectation that an event similar to those we have already observed to be correlated with the cause-type events will occur in this case too. Where does that impression come from? It is created, as a kind of mental habit, by the repeated experience of regular concomitance between events of the type of the effect and events of the type of the cause - that is, by the very experience that corresponds to the idea of regular concomitance. The law of nature, then, asserts nothing but the existence of the regular concomitance.

At this point in our narrative, the question at once arises as to whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Thus, its development is guided in two ways: One is the demand for coherent self-consistency, and the other is the elucidation of things observed. Given our direct observations, how are we to conduct such comparisons? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: Its methodology rules out the possibility of such a finding. On this point, the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature and no creativity in nature; it finds mere rules of succession. These negations are true of natural science; they are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science deals with only half the evidence provided by human experience. It divides the seamless coat - or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely, the primacy of experience; it neglects half of the evidence. Working within Descartes’ dualistic frame of reference, with matter and mind as separate and incommensurate, science limits itself to the study of objectified phenomena, neglecting the subject and the mental events that make up his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect ‘blindness’, but there is no need to rely on suspicions. This blindness is clearly evident. Scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton’s law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity - gravity. According to this law, the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still, why does space have three dimensions? Why is time one-dimensional? Whitehead notes, ‘None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail’.

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is in fact made up of discrete units, and these units have the fundamental character of being ‘pulsing throbs of experience’, then science may be in a position to discover the discreteness: But it has no access to the subjective side of nature since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we ‘exclude the subject of cognizance from the domain of nature that we endeavour to understand’. It follows that in order to find ‘the elucidation of things observed’ in relation to the experiential or aliveness aspect, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this [i.e., Descartes’] stark division between mentality and nature has no ground in our fundamental observation: We find ourselves living within nature. Secondly, we find that we should conceive mental operations as among the factors which make up the constitution of nature, and thirdly, that we should reject the notion of idle wheels in the process of nature: Every factor makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead proceeds to analyse our experiences in general, and our observations of nature in particular, and ends up with ‘mutual immanence’ as a central theme. This mutual immanence is obvious in the case of an experience: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: ‘I am in the room, and the room is an item in my present experience. But my present experience is what I am now’. A generalization of this relationship to the case of any actual occasion yields the conclusion that ‘the world is included within the occasion in one sense, and the occasion is included in the world in another sense’. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as a distinct unit is, therefore, only one part of the story. The other, complementary part is this: The very nature of each and every actual entity is one of interdependence with all the other actual entities in the universe. Each and every actual entity is a process of prehending, or appropriating, all the other actual entities and creating one new entity out of them all, namely, itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume’s idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening together with certain others, and then seeks to explain why some such patterns - the ‘laws’ - matter more than others - the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than circumstantial co-occurrence, and instead postulates a relationship of ‘necessitation’, a kind of ‘cement’ which links events that are connected by law, but not those events (like having a screw in my desk and being made of copper) that are only accidentally conjoined.

There are a number of versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and later revived by the American philosopher David Lewis (1941-2002), holds that laws are those true generalizations that can be fitted into an ideal system of knowledge. The thought is that the laws are those patterns that are explicable in terms of basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus, ‘All water at standard pressure boils at 100°C’ is a consequence of the laws governing molecular bonding: But the fact that ‘All the screws in my desk are copper’ is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are ‘consequences of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system’.

Advocates of the alternative non-Humean strategy object that the difference between laws and accidents is not a ‘linguistic’ matter of deductive systematization, but rather a ‘metaphysical’ contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being ‘in my desk’ and being ‘made of copper’, and that this has nothing to do with how the description of this link may fit into theories. According to the Australian philosopher D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural ‘necessitation’, while accidents only report that two types of events happen to occur together.

Armstrong’s view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: If two events are related by necessitation, then it follows that they are constantly conjoined, but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relationship than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong’s critics argue that a satisfactory account of laws ought to cast more light than this on the nature of laws.

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems possible in principle that some causes and effects could be simultaneous. What is more, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation - and one of the most popular explanations is that the idea of ‘movement’ from earlier to later depends on the direction of causation. If we wish to explain ‘earlier’ as the direction in which causes lie, and ‘later’ as the direction of effects, then we will clearly need to find some account of the direction of causation which does not itself assume the direction of time.

A number of such accounts have been proposed. David Lewis (1979) has argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events - consider a person who dies after simultaneously being shot and struck by lightning - is a very rare occurrence. By contrast, the multiple ‘over-determination’ of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects include not only the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button’s click on tape, the emission of light waves bearing the image of his action through the window, the warming of the wire from the passage of the signal current, and so on, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies the effect would have been absent too, since (apart from freak cases like the lightning-shooting one) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones. Yet the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
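The paragraph’s own examples can be put in elementary probabilistic notation - a schematic rendering of the point, not a formula from Reichenbach’s text:

\[
P(\text{excitement} \mid \text{obesity}) \;=\; P(\text{excitement}) \quad \text{(independent causes of heart attacks)}
\]
\[
P(\text{lung cancer} \mid \text{stained fingers}) \;>\; P(\text{lung cancer}) \quad \text{(correlated effects of smoking)}
\]

The asymmetry thus appears as a probabilistic dependence among the joint effects of a common cause which has no counterpart among the joint causes of a common effect.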

However, there is another course of thought in the philosophy of science: The tradition of ‘negative’ or ‘eliminative’ induction. From the English statesman and philosopher Francis Bacon (1561-1626) and, in modern times, the philosopher of science Karl Raimund Popper (1902-1994), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper’s solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not. Falsifiability lay behind many people’s objections to such ideologies as psychoanalysis and Marxism.

Hume was interested in the processes by which we acquire knowledge: The processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people at secondhand, thirdhand or worse; moreover, our perceptions and judgements can be distorted by many factors - by what we are studying, as well as by the very act of study itself. The main reason, however, behind his emphasis on ‘probabilities and those other measures of evidence on which life and action entirely depend’ is this: It is apparent that, by and large, all reasonings concerning ‘matters of fact’ are founded on the relation of cause and effect, and that we can never infer the existence of one object from another unless they are connected together, either mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what exactly do we observe? And in the much commoner cases, when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of ‘must’ or necessity is a peculiar feature of causal relations, inferences and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can properly be labelled the ‘necessary connection’ between a given cause and its effect: Events simply are, they merely occur, and there is no ‘must’ or ‘ought’ about them. However, repeated experience of pairs of events sets up the habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference on to the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inferences, but it exists only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence which we want to divide into ‘cause’ and ‘effect’. Our understanding of the causal relation is thus intimately linked with the role of causal inference, because only causal inferences entitle us to ‘go beyond what is immediately present to the senses’. But now two very important assumptions emerge behind the causal inference: The assumption that like causes, in like circumstances, will always produce like effects, and the assumption that ‘the course of nature will continue uniformly the same’ - or, briefly, that the future will resemble the past. Unfortunately, this last assumption lacks either empirical or a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume frequently endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on ‘experience’, understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked earlier this century, the real problem that Hume raises is whether future futures will resemble future pasts, in the way that past futures really did resemble past pasts. Hume declares that if ‘the past may be no rule for the future, all experience becomes useless and can give rise to no inference or conclusion’. And yet, he held, the supposition cannot stem from innate ideas, since there are no innate ideas on his view; nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies; for another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: Dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the ‘necessity’ and ‘certainty’ associated with mathematics. His position is therefore clear: Because ‘every effect is a distinct event from its cause’, only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from deduction, but, although they lack such force, they should not be discarded. In the context of causation, inductive inferences are inescapable and invaluable. What, then, makes ‘past experience’ the standard of our future judgement? The answer is ‘custom’: It is a brute psychological fact, without which even animal life of a simple kind would be more or less impossible. ‘We are determined by custom to suppose the future conformable to the past’ (Hume, 1978). Nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation must occur under some aspect, and that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional causation, or mental causation, but the causal theory of reference cannot concede that ultimately reference is achieved by some mental device, since the whole approach behind the causal theory was to try to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though at present by far the most influential theory of reference, will prove to be a failure for these reasons.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. But that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identities do not depend on our understanding concepts of mental states or the meanings of mental terms. For ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have exactly the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle at work here is the indiscernibility of identicals: If ‘A’ is identical with ‘B’, then every property that ‘A’ has, ‘B’ has, and vice versa. This is sometimes known as Leibniz’s Law.
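The indiscernibility of identicals can be stated compactly in second-order notation - a standard rendering, included here only to fix the principle the paragraph invokes:

\[
a = b \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fb)
\]

Note that the principle concerns the properties of the thing named, not the meanings of the names: ‘a’ and ‘b’ may differ in sense while naming one and the same thing, which is why Smart can grant that mental and neural terms differ in meaning while insisting that their referents are identical.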

But a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same as a neural firing, we identify that state in two different ways: As a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.

There are two broad categories of mental property. Mental states such as thoughts and desires, often called ‘propositional attitudes’, have ‘content’ that can be described by ‘that’ clauses. For example, one can have a thought, or desire, that it will rain. These states are said to have intentional properties, or ‘intentionality’. Sensations, such as pains and sense impressions, lack intentional content and have instead qualitative properties of various sorts.

The problem about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. And if mental states do actually have non-physical properties, the identity of mental states with physical states would not sustain a thoroughgoing mind-body materialism.

The Cartesian doctrine that the mental is in some way non-physical is so pervasive that even advocates of the identity theory have sometimes accepted it. The idea that the mental is non-physical underlies, for example, the insistence by some identity theorists that mental properties are really neutral as between being mental or physical. To be neutral in this way, a property would have to be neutral as to whether it is mental at all. Only if one thought that being mental meant being non-physical would one hold that defending materialism required showing that the ostensibly mental properties are neutral as regards whether or not they are mental.

But holding that mental properties are non-physical has a cost that is usually not noticed. A phenomenon is mental only if it has some distinctively mental property. So, strictly speaking, a materialist who holds that mental properties are non-physical must deny that any mental phenomena exist. This is the eliminative-materialist position advanced by the American philosopher and critic Richard Rorty (1979).

According to Rorty (1931-), ‘mental’ and ‘physical’ are incompatible terms: Nothing can be both mental and physical, so mental states cannot be identical with bodily states. Rorty traces this incompatibility to our views about incorrigibility: We take reports of one’s own mental states to be incorrigible, but not reports of physical occurrences. But he also argues that we can imagine a people who describe themselves and each other using terms just like our mental vocabulary, except that those people do not take the reports made with that vocabulary to be incorrigible. Since Rorty takes a state to be a mental state only if one’s reports about it are taken to be incorrigible, his imaginary people do not ascribe mental states to themselves or each other. Nonetheless, the only difference between their language and ours is that we take as incorrigible certain reports which they do not. So their language has no less descriptive or explanatory power than ours. Rorty concludes that our mental vocabulary is idle, and that there are no distinctively mental phenomena.

This argument hinges on building incorrigibility into the meaning of the term ‘mental’. If we do not, the way is open to interpret Rorty’s imaginary people as simply having a different theory of mind from ours, on which reports of one’s own mental states are not incorrigible. Their reports would thus be about mental states, as construed by their theory. Rorty’s thought experiment would then permit us to conclude not that our mental terminology is idle, but only that this alternative theory of mental phenomena is correct. His thought experiment would thus sustain the non-eliminativist view that mental states are bodily states. Whether Rorty’s argument supports his eliminativist conclusion or the standard identity theory, therefore, depends solely on whether or not one holds that the mental is in some way non-physical.

Paul M. Churchland (1981) advances a different argument for eliminative materialism. According to Churchland, the common-sense concepts of mental states contained in our present folk psychology are, from a scientific point of view, radically defective. But we can expect that eventually a more sophisticated theoretical account will replace those folk-psychological concepts, showing that mental phenomena, as described by current folk psychology, do not exist. Since that account would be integrated into the rest of science, we would have a thoroughgoing materialist treatment of all phenomena. This argument, unlike Rorty’s, does not rely on assuming that the mental is non-physical.

But even if current folk psychology is mistaken, that does not show that mental phenomena do not exist, but only that they are not the way folk psychology describes them as being. We could conclude they do not exist only if the folk-psychological claims that turn out to be mistaken actually define what it is for some phenomena to be mental. Otherwise, the new theory would be about mental phenomena, and would help show that they are identical with physical phenomena. Churchland’s argument, like Rorty’s, depends on a special way of defining the mental, which we need not adopt. It is likely that any argument for eliminative materialism will require some such definition, without which the argument would instead support the identity theory.

Despite initial appearances, some identity theorists have held, the distinctive properties of sensations are neutral as between being mental or physical: In a term borrowed from the English philosopher and classicist Gilbert Ryle (1900-76), they are ‘topic-neutral’. My having a sensation of red consists in my being in a state that is similar, in respects we need not specify, to something that occurs in me when I am in the presence of certain stimuli. Because the respect of similarity is not specified, the property is neither distinctively mental nor distinctively physical. But everything is similar to everything else in some respect or other. So leaving the respect of similarity unspecified makes this account too weak to capture the distinguishing properties of sensations.

A more sophisticated reply to the difficulty about mental properties is due independently to the Australian philosopher David Malet Armstrong (1926-) and the American philosopher David Lewis (1941-2002), who argued that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental or physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.

This causal theory is appealing, but it is misguided to attempt to construe the distinctive properties of mental states as neutral as between being mental or physical. To be neutral as regards being mental or physical is to be neither distinctively mental nor distinctively physical. But since thoughts and sensations are distinctively mental states, for a state to be a thought or a sensation is perforce for it to have some characteristically mental property. We inevitably lose the distinctively mental if we construe these properties as being neither mental nor physical.

Not only is the topic-neutral construal misguided; the problem it was designed to solve is equally so. That problem stemmed from the idea that the mental must have some non-physical aspect: if not at the level of people or their mental states, then at the level of the distinctively mental properties of those states.

It is worth pausing over what a property is. Consider the sentence 'John is bald'. The word 'John' is a bit of language, a name of some individual human being, and no one would be tempted to confuse the word with what it names. The expression 'is bald' is also a bit of language (philosophers call it a 'predicate'), and it brings to our attention some property or feature which, if the sentence is true, is possessed by John. Understood in this way, a property is not itself linguistic, though it is expressed or conveyed by something that is, namely a predicate. A property, we might say, is a real feature of the world, and it should be contrasted just as sharply with the predicates we use to express it as the name 'John' is contrasted with the person himself. Properties can also be more complicated: in the sentence 'John is married to Mary' we attribute to John the relational property of being married to Mary, unlike the simple property of being bald. Just what ontological status should be accorded to properties is, however, controversial.

That controversy bears directly on 'anomalous monism', the position of the American philosopher Donald Herbert Davidson (1917-2003), which explicitly repudiates reductive physicalism yet purports to be a version of materialism. Davidson holds that although token mental events and states are identical with token physical events and states, mental 'types' - i.e., kinds, or properties - are neither identical with, nor nomically coextensive with, physical types. His argument for this position relies largely on the contention that the correct assignment of mental and actional properties to a person is always a holistic matter, involving a global, temporally diachronic 'intentional interpretation' of the person. But as many philosophers have in effect pointed out, accommodating the claims of materialism evidently requires more than token mental/physical identities. Mentalistic explanation presupposes not merely that mental events are causes, but also that they have causal/explanatory relevance as mental - i.e., relevance insofar as they fall under mental kinds or types. It is doubtful whether Davidson's position, which denies that there are strict psychological or psychophysical laws, can accommodate the causal/explanatory relevance of the mental qua mental; if it cannot, it threatens to collapse into 'epiphenomenalism' with respect to mental properties.

But the idea that the mental is in some respect non-physical cannot be assumed without argument. Plainly, the distinctively mental properties of mental states are unlike any other properties we know about: only mental states have anything like the qualitative properties of sensations, or anything like the intentional properties of thoughts and desires. However, this does not show that mental properties are not physical properties, for not all physical properties are alike; mental properties might still be a special kind of physical property. It is question-begging to assume otherwise. The doctrine that mental properties are non-physical is simply an expression of the Cartesian doctrine that the mental is automatically non-physical.

It is sometimes held that properties should count as physical properties only if they can be defined using the terms of physics. This is far too restrictive. Nobody would hold that to reduce biology to physics, for example, we must define all biological properties using only terms that occur in physics. And even putting 'reduction' aside, if certain biological properties could not be so defined, that would not mean that those properties were in any way non-physical. The relevant sense of 'physical' must be broad enough to include not only biological properties, but also most common-sense, macroscopic properties. Bodily states are uncontroversially physical in the relevant way. So we can recast the identity theory as asserting that mental states are identical with bodily states.

In the course of reaching conclusions about the origin and limits of knowledge, Locke found occasion to concern himself with topics which are of philosophical interest in themselves. One of these is the question of identity, which includes, more specifically, the question of personal identity: what are the criteria by which a person at one time is numerically the same person as a person encountered at another time? Locke points out that whether 'this is what was here before' depends on what kind of thing 'this' is meant to be. If 'this' is meant as a mass of matter, then it is what was here before so long as it consists of the same material particles; but if it is meant as a living body, then its consisting of the same particles does not matter and the case is different: 'A colt grown up to a horse, sometimes fat, sometimes lean, is all the while the same horse, though . . . there may be a manifest change of the parts.' So, when we think about personal identity, we need to be clear about a distinction between two things which 'the ordinary way of speaking runs together': the idea of 'man' and the idea of 'person'. As with any other animal, the identity of a man consists 'in nothing but a participation of the same continued life, by constantly fleeting particles of matter, in succession vitally united to the same organized body'. The idea of a person, however, is not that of a living body of a certain kind. A person is a 'thinking intelligent being, that has reason and reflection', and such a being 'will be the same self as far as the same consciousness can extend to actions past or to come'.

Locke is at pains to argue that this continuity of self-consciousness does not necessarily involve the continuity of some immaterial substance, in the way that Descartes had held. For all we know, says Locke, consciousness and thought may be powers which can be possessed by 'systems of matter fitly disposed'; and even if this is not so, the question of the identity of a person is not the same as the question of the identity of an 'immaterial substance'. For just as the identity of a horse can be preserved through changes of matter, and depends not on the identity of a continued material substance but on the unity of one continued life, so the identity of a person does not depend on the continuity of an immaterial substance. The unity of one continued consciousness does not depend on its being 'annexed only to one individual substance, [and not] . . . continued in a succession of several substances'. For Locke, then, personal identity consists in an identity of consciousness, and not in the identity of some substance whose essence it is to be conscious.

To understand how causal mechanisms and connections of meaning interact, it will help to take a historical route and focus on the terms in which analytical philosophers of mind first began seriously to discuss psychoanalytic explanation. These terms were provided by the long-standing, and still unconcluded, debate over cause and meaning in psychoanalysis.

It is not hard to see why psychoanalysis should be viewed in terms of cause and meaning. On the one hand, Freud's theories introduce a panoply of concepts which appear to characterize mental processes as mechanical and non-meaningful. Included are Freud's neurological model of the mind, as outlined in his 'Project for a Scientific Psychology'; more broadly, his 'economic' description of the mental as having properties of force or energy, e.g., as 'cathecting' objects; and his account of the mechanism of repression. So it would seem that psychoanalytic explanation employs terms logically at variance with those of ordinary, common-sense psychology, where mechanisms do not play a central role. But on the other hand, and equally striking, there is the fact that psychoanalysis proceeds through interpretation and engages in a relentless search for meaningful connections in mental life - something that even a superficial examination of The Interpretation of Dreams, or The Psychopathology of Everyday Life, cannot fail to impress upon one. Psychoanalytic interpretation adduces meaningful connections between disparate and often apparently dissociated mental and behavioural phenomena, directed by the goal of 'thematic coherence': of giving mental life the sort of unity that we find in a work of art or a cogent narrative. In this respect, psychoanalysis would seem to adopt as its central plank the most salient feature of ordinary psychology: its insistence on relating actions to the reasons for them through characterizations of the content of each that make their connection seem rational, or intelligible - a goal that seems remote from anything found in the physical sciences.

The application to psychoanalysis of the perspective afforded by the cause-meaning debate can also be seen as a natural consequence of another factor, namely the semi-paradoxical nature of psychoanalysis' explananda. With respect to all irrational phenomena, something like a paradox arises. Irrationality involves a failure of rational connectedness, and hence of meaningfulness; so, if it is to have an explanation of any kind, relations that are non-meaningful and causal appear to be needed. And yet, as observed above, it would seem that, in offering explanations for irrationality - plugging the 'gaps' in consciousness - what psychoanalytic explanation hinges on is precisely the postulation of further, albeit non-apparent, connections of meaning.

For these two reasons, then - the logical heterogeneity of its explanations and the ambiguous status of its explananda - it may seem that an examination in terms of the concepts of cause and meaning will provide the key to a philosophical elucidation of psychoanalysis. The possible views of psychoanalytic explanation that may result from such an examination can be arranged along two dimensions. (1) Psychoanalytic explanation may be viewed, after reconstruction, as either causal and non-meaningful, or meaningful and non-causal, or as comprising both meaningful and causal elements in various combinations. (2) On each of these reconstructions, psychoanalytic explanation may then be viewed as either licensed or invalidated, depending on one's view of the logical nature of psychology.

So, for instance, some philosophical discussions infer that psychoanalytic explanation is void, simply on the grounds that it is committed to causality in psychology. On another, opposed view, it is a virtue of psychoanalytic explanation that it imputes causal relations, since only causal relations can be relevant to explaining the failures of meaningful psychological connections. On yet another view, it is psychoanalysis' commitment to meaning which is its great fault: it is held that the stories that psychoanalysis tries to tell do not really, on examination, explain successfully. And so on.

It is fair to say that the debates between these various positions fail to establish anything definite about psychoanalytic explanation. There are two reasons for this. First, there are several different strands in Freud's writings, each of which may be drawn on, apparently conclusively, in support of each alternative reconstruction. Secondly, preoccupation with a wholly general problem in the philosophy of mind, that of cause and meaning, distracts attention from the distinguishing features of psychoanalytic explanation. At this point, and in order to prepare the way for a plausible reconstruction of psychoanalytic explanation, it is appropriate to take a step back and look afresh at the cause-meaning issue in the philosophy of psychoanalysis.

Suppose, first, that some sort of cause-meaning compatibilism - such as that of the American philosopher Donald Davidson (1917-2003) - holds for ordinary psychology. On this view, psychological explanation requires some sort of parallelism of causal and meaningful connections, grounded in the idea that psychological properties play causal roles determined by their content. Nothing in psychoanalytic explanation is inconsistent with this picture: after his abandonment of the early 'Project', Freud consistently viewed psychology as autonomous relative to neurophysiology, and at the same time as congruent with a broadly naturalistic world-view. 'Naturalism' is often used interchangeably with 'physicalism' and 'materialism', though each of these hints at specific doctrines. Thus 'physicalism' suggests that, among the natural sciences, there is something especially fundamental about physics; and 'materialism' has connotations going back to eighteenth- and nineteenth-century views of the world as essentially made of material particles whose behaviour is fundamental for explaining everything else. 'Naturalism' with respect to some realm, by contrast, is the view that everything that exists in that realm, and all the events that take place in it, are empirically accessible features of the world. Sometimes naturalism is taken to mean that a realm can in principle be understood by appeal to the laws and theories of the natural sciences, but one must be careful here, since naturalism does not by itself imply anything about reduction. Historically, 'natural' contrasts with 'supernatural'; but in the context of contemporary philosophy of mind, where debate centres on the possibility of explaining mental phenomena as part of the natural order, it is the non-natural rather than the supernatural that is the contrasting notion. The naturalist holds that mental phenomena can be so explained, while the opponent of naturalism thinks otherwise, though opposition to naturalism does not commit one to anything supernatural. Nor should one take naturalism about a realm as committing one to any sort of reductive explanation of that realm, whereas there are such commitments in the use of 'physicalism' and 'materialism'.

If psychoanalytic explanation gives the impression that it imputes bare, meaning-free causality, this results from attending to only half the story, and from misunderstanding what psychoanalysis means when it talks of psychological mechanisms. The economic descriptions of mental processes that psychoanalysis provides are never replacements for, but always themselves presuppose, characterizations of mental processes in terms of meaning. Mechanisms, in psychoanalytic contexts, are simply processes whose operation cannot be reconstructed as instances of rational functioning (they are what we might by preference call mental activities, by contrast with actions). Psychoanalytic explanation's postulation of mechanisms should not therefore be regarded as a regrettable and expungeable incursion of scientism into Freud's thought, as is often claimed.

Suppose, alternatively, that hermeneuticists such as Habermas - who follow Dilthey in viewing psychology as an interpretative practice to which the concepts of the physical sciences are alien - are correct in thinking that connections of meaning are misrepresented through being described as causal. Again, this does not impact negatively on psychoanalytic explanation since, as just argued, psychoanalytic explanation nowhere imputes meaning-free causation. Nothing is lost for psychoanalytic explanation if causation is excised from the psychological picture.

The conclusion must be that psychoanalytic explanation is at bottom indifferent to the general meaning-cause issue. The core of psychoanalysis consists in its tracing of meaningful connections, with no greater or lesser commitment to causality than is involved in ordinary psychology. (This helps to set the stage - pending appropriate clinical validation - for psychoanalysis to claim as much truth for its explanations as ordinary psychology.) The true key to psychoanalytic explanation lies, rather, in its attribution of special kinds of mental states, not recognized in ordinary psychology, whose relations to one another do not have the form of patterns of inference or practical reasoning.

In the light of this, it is easy to understand why some compatibilists and hermeneuticists assert that their own view of psychology is uniquely consistent with psychoanalytic explanation. Compatibilists are right to think that, in order to provide for psychoanalytic explanation, it is necessary to allow mental connections that are unlike the connections of reasons to the actions that they rationalize, or to the beliefs that they support; and that, in outlining such connections, psychoanalytic explanation must outstrip the resources of ordinary psychology, which does attempt to force as much as possible into the mould of practical reasoning. Hermeneuticists, for their part, are right to think that it would be futile to postulate connections which were nominally psychological but not characterized in terms of meaning, and that psychoanalytic explanation does not respond to the 'paradox' of irrationality by abandoning the search for meaningful connections.

Compatibilists are, however, wrong to think that non-rational but meaningful connections require the psychological order to be conceived as a causal order. The hermeneuticist is free to postulate psychological connections that are determined by meaning but not by rationality: it is coherent to suppose that there are connections of meaning that are not bona fide rational connections, without these being causal. Meaningfulness is a broader concept than rationality. (Sometimes this thought has been expressed, though not helpfully, by saying that Freud discovered the existence of 'neurotic rationality'.) Although an assumption of rationality is doubtless necessary to make sense of behaviour in general, it does not need to be brought into play in making sense of each instance of behaviour. Hermeneuticists, in turn, are wrong to think that the compatibilist view of psychology as causal signals a confusion of meaning with causality, or that it must lead the compatibilist to deny that there is any qualitative difference between rational and irrational psychological connections.

All the same, the last two decades have been a period of extraordinary change in scientific psychology. 'Cognitive psychology', which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and other higher-level processes, has become perhaps the dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour.

The relationship between physical behaviour and agential behaviour is controversial. On some views, all 'actions' are identical to physical changes in the subject's body; however, some kinds of physical behaviour, such as 'reflexes', are uncontroversially not kinds of agential behaviour. On other views, a subject's action must involve some physical change, but is not identical to it.

Both physical and agential behaviour can be understood in the widest sense. Anything a person can do - even calculating in his head, for instance - could be regarded as agential behaviour. Likewise, any physical change in a person's body - even the firing of a certain neuron, for instance - could be regarded as physical behaviour.

Of course, to claim that the mind is ‘nothing over and above’ such-and-such kinds of behaviour, construed as either physical or agential behaviour in the widest sense, is not necessarily to be a behaviourist. The theory that the mind is a series of volitional acts - a view close to the idealist position of George Berkeley (1685-1753) - and the theory that the mind is a certain configuration of neuronal events, while both controversial, are not forms of behaviourism.

Standing alongside anomalous monism is 'monism' itself: the view that there is only one kind of substance underlying all objects, changes and processes. It is generally used in contrast to 'dualism', though one can also think of it as denying what might be called 'pluralism' - a view, often associated with Aristotle, which claims that there are a number of substances. Against the background of modern science, monism is usually understood to be a form of 'materialism' or 'physicalism': that is, the fundamental properties of matter and energy as described by physics are counted the only properties there are.

The position in the philosophy of mind known as 'anomalous monism' has its historical origins in the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804), but is universally identified with the American philosopher Donald Herbert Davidson (1917-2003), and it was he who coined the term. Davidson maintained that one can be a monist - indeed, a physicalist - about the fundamental nature of things and events, while also asserting that there can be no full 'reduction' of the mental to the physical. (This is sometimes expressed by saying that there can be an ontological, though not a conceptual, reduction.) Davidson thinks that complete knowledge of the brain and any related neurophysiological systems that support the mind's activities would not itself be knowledge of such things as belief, desire, experience and the rest of the mentalistic idiom. This is not because he thinks that the mind is somehow a separate kind of existence: anomalous monism is, after all, monism. Rather, it is because the nature of mental phenomena rules out a priori that there will be law-like regularities connecting mental phenomena and physical events in the brain; and, without such laws, there is no real hope of explaining the mental via the physical structure of the brain.

All in all, one central goal of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies explored in the sciences. Another common goal is to construct philosophically illuminating analyses or explanations of the central theoretical concepts involved in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on its crucial concepts. If concepts of the simple (observational) sort were internal physical structures that had, in this sense, an information-carrying function - a function they acquired during learning - then instances of these structure types would have a content that (like a belief) could be either true or false. Yet any information-carrying structure carries all kinds of information: if, for example, it carries the information that 'A', it must also carry the information that 'A or B'. Conceivably, the process of learning is a process in which a single piece of this information is selected for special treatment, thereby becoming the semantic content - the meaning - of subsequent tokens of that structure type. Just as we conventionally give artefacts and instruments information-providing functions, thereby making their flashing lights, pointer readings, and so forth representations of the conditions in the world in which we are interested, so learning converts neural states that carry information - 'pointer readings' in the head, so to speak - into structures that have the function of providing some vital piece of the information they carry. When this process occurs in the ordinary course of learning, the functions in question develop naturally. They do not, as do the functions of instruments and artefacts, depend on the intentions, beliefs and attitudes of users. We do not give brain structures these functions; they get them by themselves, in some natural way, either (in the case of the senses) from their selectional history or (in the case of thought) from individual learning. The result is a network of internal representations that have (in different ways) the powers of representation characteristic of experience and belief.

Note that this approach to 'thought' and 'belief' - the approach that conceives of them as forms of internal representation - is not a version of 'functionalism', at least not if this widely held theory is understood, as it often is, as a theory that identifies mental properties with functional properties. For functional properties have to do with the way something in fact behaves, with its syndrome of typical causes and effects. An informational model of belief, in order to account for misrepresentation, needs something more than a structure that provides information: it needs something that has providing information as its function - something that is supposed to provide information. As Sober (1985) comments, for an account of the mind we need functionalism with the function - the teleology - put back into it.

Philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using - accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of 'intentional' concepts - like believing that 'p', desiring that 'q', and representing 'r' - which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.

It is characteristic of discussions of intentionality that the paradigm cases discussed are usually beliefs, or sometimes beliefs and desires; however, the biologically most basic forms of intentionality are in perception and in intentional action. These also have certain formal features which are not common to beliefs and desires. Consider a case of perceptual experience. Suppose I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there be a hand in front of my face. Thus far, the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: in order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as 'causally self-referential'. The full conditions of satisfaction of the perceptual experience are, first, that there be a hand in front of my face, and second, that the fact that there is a hand in front of my face is causing the very experience of which this condition of satisfaction forms a part. We can represent this in the form S(p), thus:

Visual experience (that there is a hand in front of my face, and the fact that there is a hand in front of my face is causing this very experience).
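The contrast with belief can be put schematically. The following notation is mine rather than the text's - a minimal sketch, assuming 'e' names the particular experience itself:

$$\mathrm{Bel}(p)\ \text{is satisfied} \iff p$$

$$\mathrm{VisExp}_{e}(p)\ \text{is satisfied} \iff p \ \wedge\ \text{the fact that } p \text{ causes } e$$

The second conjunct is what makes the condition of satisfaction causally self-referential: the experience e figures in its own condition of satisfaction, whereas a belief is satisfied by the mere truth of its content, however that belief was caused.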

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep. But one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because the visual experiences are themselves forms of consciousness.

People's decisions and actions are explained by appeal to their beliefs and desires. Perceptual processes are said to result in mental states which represent (or sometimes misrepresent) one or another aspect of the cognitive agent's environment. Other theorists have offered analogous accounts, differing in detail. Perhaps the most crucial idea in all of this is the one about representations. There is perhaps a sense in which what happens at, say, the level of the retina constitutes, as a result of the processes of stimulation, some kind of representation of what produces that stimulation, and thus some kind of representation of the objects of perception. Or so it may seem, if one attempts to describe the relation between the structure and characteristics of the object of perception and the structure and nature of the retinal processes. One might say that the nature of that relation is such as to provide information about the part of the world perceived, in the sense of 'information' presupposed when one says that the rings in the cross-section of a tree's trunk provide information about its age. This is because there is an appropriate causal relation between the two things, which makes it impossible for the correlation to be a matter of chance. Subsequent processing can then be thought of as carried out on what is provided in the representations in question.

However, if there are such representations, they are not representations for the perceiver. It is the thought that perception involves representations of that kind which produced the old, and now largely discredited, philosophical theories of perception which suggested that perception is a matter, primarily, of an apprehension of mental states of some kind - e.g., sense-data - which are representatives of perceptual objects, either by being caused by them or by being in some way constitutive of them. Even if it be said that the idea of information so invoked indicates that there is a sense in which the processes of stimulation can be said to have content - a non-conceptual content, distinct from the content provided by the subsumption of what is perceived under concepts - it must be emphasized that that content is not one for the perceiver. What the information-processing story provides is, at best, a more adequate categorization than was previously available of the causal processes involved. That may be important, but more should not be claimed for it than there is. If, in a given case of perception, one can be said to have an experience as of an object of a certain shape and kind related to another object, it is because there is presupposed in that perception the possession of concepts of objects and, more particularly, a concept of space and of how objects occupy space.

Nonetheless, although cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. The American philosopher of mind Jerry Alan Fodor's The Language of Thought (1975) was a pioneering study in this genre. Philosophers have also done important and widely discussed work in what might be called the descriptive philosophy of cognitive psychology.

These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists' actual practice. But if the account they give of cognitive theories diverges significantly from the theories that psychologists actually produce, then the philosophers have just got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds. Two salient considerations are that they fail to supervene on the physiology of the cognitive agent, and that they cannot be 'naturalized'.

Perhaps the easiest way to make the point about supervenience is to use a thought experiment of the sort originally proposed by the American philosopher Hilary Putnam (1926-). Suppose that in some distant corner of the universe there is a planet, Twin Earth, which is very similar to our own planet. On Twin Earth there is a person who is an atom-for-atom replica of J.F. Kennedy. Now President J.F. Kennedy, who lives on Earth, believes that the Rev. Martin Luther King Jr. was born in Tennessee; if you asked him 'Was the Rev. Martin Luther King Jr. born in Tennessee?', in all probability he would answer 'yes'. Twin-Kennedy would respond in the same way, but not because he believes that our Rev. Martin Luther King Jr. was born in Tennessee. His beliefs are about Twin-King; and since Twin-King was certainly not born in Tennessee, Kennedy's belief may be true while Twin-Kennedy's is false. What all this is supposed to show is that two people can share all their physiological properties without sharing all their intentional properties. To turn this into a problem for cognitive psychology, two additional premises are needed. The first is that cognitive psychology attempts to explain behaviour by appeal to people's intentional properties. The second is that psychological explanations should not appeal to properties that fail to supervene on an organism's physiology. (Variations on this theme can be found in the American philosopher Jerry Alan Fodor (1987).)

The thesis that the mental is supervenient on the physical - roughly, the claim that the mental character of a thing is wholly determined by its physical nature - has played a key role in the formulation of some influential positions on the 'mind-body' problem, in particular versions of non-reductive 'physicalism'. It has figured in arguments about the mental, and has been used to devise solutions to some central problems about the mind - for example, the problem of mental causation.

The idea of supervenience first gained currency in moral philosophy: there could be no difference in a moral respect without a difference in some descriptive, or non-moral, respect. Evidently, the idea generalizes so as to apply to any two sets of properties (to secure greater generality it is more convenient to speak of properties than of predicates). The American philosopher Donald Herbert Davidson (1970) was perhaps the first to introduce supervenience into discussions of the mind-body problem, when he wrote that '. . . mental characteristics are in some sense dependent, or supervenient, on physical characteristics. Such supervenience might be taken to mean that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect.' Following the British philosopher George Edward Moore (1873-1958) and the English moral philosopher Richard Mervyn Hare (1919-2002), from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in this sense is consistent with the irreducibility of supervenient properties to their 'subvenient', or 'base', properties: 'Dependence or supervenience of this kind does not entail reducibility through law or definition . . .'

Thus, three ideas have come to be closely associated with supervenience: (1) property covariation (if two things are indiscernible in base properties, they must be indiscernible in supervenient properties); (2) dependence (supervenient properties are dependent on, or determined by, their subvenient bases); and (3) non-reducibility (the property covariation and dependence involved in supervenience can obtain even if supervenient properties are not reducible to their base properties).
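The covariation idea is often sharpened in a modal idiom. The following formulation of 'strong supervenience', standard in the literature after Kim though not given in the text above, is one way of making it precise (A is the supervenient family of properties, B the base family):

$$A \text{ strongly supervenes on } B \iff \Box\,\forall x\,\forall F{\in}A\,\big[Fx \rightarrow \exists G{\in}B\,\big(Gx \wedge \Box\,\forall y\,(Gy \rightarrow Fy)\big)\big]$$

Dropping the inner necessity operator yields weak supervenience; quantifying over whole worlds rather than individuals yields global supervenience.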

Nonetheless, supervenience of the mental - in the form of strong supervenience or, at least, global supervenience - is arguably a minimum commitment of physicalism. But can we think of the thesis of mind-body supervenience itself as a theory of the mind-body relation - that is, as a solution to the mind-body problem?

It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence, or about why, contrary to common belief, there is no such dependence either way. Consider the ethical case: the intuitionist will say that the supervenience, and also the dependence, is a brute fact discerned through moral intuition, while the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. And distinct from both of these is mereological supervenience, namely the supervenience of the properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience: supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, mereological dependence, and so forth.

There is, however, a promising strategy for turning the supervenience thesis into a more substantive theory of mind: to explicate mind-body supervenience as a special case of mereological supervenience - that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence that is metaphysically sui generis and highly important. If one takes this approach, one would have to explain psychological properties as macroproperties of a whole organism that covary, in appropriate ways, with its microproperties - that is, with the way its constituent organs, tissues, and so forth are organized and function. This more specific supervenience thesis may well be a serious theory of the mind-body relation that can compete with the classic options in the field.

On this topic, as with many topics in philosophy, there is a distinction to be made between (1) certain vague, partially inchoate, pre-theoretic ideas and beliefs about the matter at hand, and (2) certain more precise, more explicit doctrines or theses that are taken to articulate or explicate those pre-theoretic ideas and beliefs. There are various potential ways of precisifying our pre-theoretic conception of a physicalist or materialist account of mentality, and the question of how best to do so is itself a matter for ongoing philosophical inquiry.

The view concerns, in the first instance at least, the question of how we, as ordinary human beings, in fact go about ascribing beliefs to one another. The idea is that we do this on the basis of our knowledge of a common-sense theory of psychology. The theory is not held to consist in a collection of grandmotherly sayings, such as 'once bitten, twice shy'. Rather, it consists in a body of generalizations relating psychological states to each other, to input from the environment, and to actions. Such generalizations include the following:

(1) (x)(p) (If x fears that p, then x desires that not-p.)

(2) (x)(p) (If x hopes that p and x discovers that p, then x is pleased that p.)

(3) (x)(p)(q) (If x believes that p, and x believes that if p then q, then, barring confusion, distraction and so forth, x believes that q.)

(4) (x)(p)(q) (If x desires that p, and x believes that if q then p, and x is able to bring it about that q, then, barring conflicting desires or preferred strategies, x brings it about that q.)

All of these generalizations should be understood as containing ceteris paribus clauses. (1), for example, applies most of the time, but not invariably. Adventurous types often enjoy the adrenal thrill produced by fear; this leads them, on occasion, to desire the very state of affairs that frightens them. Analogously with (3): a subject who believes that p, and believes that if p then q, would typically infer that q. But certain atypical circumstances may intervene: subjects may become confused or distracted, or they may find the prospect of q so awful that they dare not allow themselves to believe it. The ceteris paribus nature of these generalizations is not usually considered to be problematic, since atypical circumstances are, of course, atypical, and the generalizations are applicable most of the time.

We apply this psychological theory to make inferences about people's beliefs, desires and so forth. If, for example, we know that Julie believes that if she is to be at the airport at four, then she should get a taxi at half past two, and she believes that she is to be at the airport at four, then we will predict, using (3), that Julie will infer that she should get a taxi at half past two.

The Theory-Theory, as it is called, is an empirical theory addressing the question of our actual knowledge of beliefs. Taken in its purest form, it addresses both first- and third-person knowledge: we know about our own beliefs and those of others in the same way, by application of common-sense psychological theory in both cases. However, it is not very plausible to hold that we always - or indeed usually - know our own beliefs by way of theoretical inference. Since it is an empirical theory concerning one of our cognitive abilities, the Theory-Theory is open to psychological scrutiny. Among the issues raised by the hypothesized common-sense psychological theory is whether it is known consciously or unconsciously. Research has revealed that three-year-old children are reasonably good at inferring the beliefs of others on the basis of actions, and at predicting actions on the basis of beliefs that others are known to possess. However, there is one area in which three-year-olds' psychological reasoning differs markedly from adults'. Tests of the sort known as 'False Belief Tests' reveal largely consistent results. Three-year-old subjects witness a scenario in which a child, Billy, sees his mother place some biscuits in a biscuit tin. Billy then goes out to play and, unseen by him, his mother removes the biscuits from the tin and places them in a jar, which is then hidden in a cupboard. When asked 'Where will Billy look for the biscuits?', the majority of three-year-olds answer that Billy will look in the jar in the cupboard - where the biscuits actually are, rather than where Billy saw them being placed. On being asked 'Where does Billy think the biscuits are?', they again tend to answer 'in the cupboard', rather than 'in the tin'. Three-year-olds thus appear to have some difficulty attributing false beliefs to others in cases in which it would be natural for adults to do so. However, it does not appear that three-year-olds lack the idea of false belief in general, nor that they struggle with attributing false beliefs in other kinds of situation. For example, they have little trouble distinguishing between dreams and play, on the one hand, and true beliefs or claims on the other. By the age of four and a half years, most children pass the False Belief Tests fairly consistently. There is as yet no generally accepted theory of why three-year-olds fare so badly with the False Belief Tests, nor of what this reveals about their conception of belief.

Recently some philosophers and psychologists have put forward what they take to be an alternative to the Theory-Theory: the Simulation Theory, on which we ascribe beliefs to others by imaginatively putting ourselves in their position and simulating their reasoning, rather than by applying a theory. However, the challenge does not end there. We need also to consider the vital element of making appropriate adjustments for differences between one's own psychological states and those of the other. And it is implausible to think that in every such case simulation alone will provide what is required.

The behavioural manifestations of beliefs, desires and intentions are enormously varied, as just suggested. When we move away from perceptual beliefs, the links with behaviour become intricate and indirect: the expectations I form on the basis of a particular belief reflect the influence of numerous other opinions, and my actions are shaped by the totality of my preferences and all those opinions which have a bearing upon them. The causal processes that produce my beliefs reflect my opinions about those processes, about their reliability and the interference to which they are subject. Thus, behaviour justifies the ascription of a particular belief only by helping to warrant a more inclusive interpretation of the overall cognitive position of the individual in question. Psychological description, like translation, is a 'holistic' business. And once this is taken into account, it is all the less likely that a common physical trait will be found which grounds all instances of the same belief. The ways in which all of our propositional attitudes interact in the production of behaviour reinforce the anomalous character of the mental, and render any sort of reduction of the mental to the physical impossible. Radical translation was never meant as a practical procedure; but generalizing it, so that interpretation and not merely translation is at issue, has made this notion central to accounts of the mind.

The Simulation Theory and the Theory-Theory are two, as many think competing, views of the nature of our common-sense, propositional-attitude explanations of action. For example, when we say that our neighbour cut down his apple tree because he believed that it was ruining his patio and did not want it ruined, we are offering a typically common-sense explanation of his action in terms of his beliefs and desires. But, even though wholly familiar, it is not clear what kind of explanation is at issue. On one view, the attribution of beliefs and desires is the application to actions of a theory which, in its informal way, functions very much like theoretical explanations in science. This is known as the 'theory-theory' of everyday psychological explanation. In contrast, it has been argued that our propositional-attitude attributions are not theoretical claims so much as reports of a kind of 'simulation'. On such a 'simulation theory' of the matter, we decide what our neighbour will do (and thereby why he did what he did) by imagining ourselves in his position and deciding what we would do.

The Simulation Theorist should probably concede that simulations need to be backed up by independent means of discovering the psychological states of others. But they need not concede that these independent means take the form of a theory. Rather, they might suggest, we can get by with some rules of thumb, or straightforward inductive reasoning of a general kind.

A second and related difficulty with the Simulation Theory concerns our capacity to attribute beliefs that are too alien to be easily simulated: beliefs of small children, or of psychotics, or bizarre beliefs deeply suppressed in the unconscious. The small child refuses to sleep in the dark: he is afraid that the Wicked Witch will steal him away. No matter how many adjustments we make, it may be hard for mature adults to get their own psychological processes, even in pretend play, to mimic the production of such a belief. For the Theory-Theory, alien beliefs are not particularly problematic: so long as they fit into the basic generalizations of the theory, they will be inferable from the evidence. Thus the Theory-Theory can account better than the Simulation Theory for our ability to discover bizarre and alien beliefs.

The Theory-Theory and the Simulation Theory are not the only proposals about knowledge of belief. A third view has its origins in the Austrian philosopher Ludwig Wittgenstein (1889-1951). On this view, both the Theory-Theory and the Simulation Theory attribute too much psychologizing to our common-sense psychology. Knowledge of other minds is, according to this alternative picture, more observational in nature. Beliefs, desires and feelings are made manifest to us in the speech and other actions of those with whom we share a language and a way of life. When someone says 'It's going to rain' and takes his umbrella from his bag, it is immediately clear to us that he believes it is going to rain. In order to know this we neither theorize nor simulate: we just perceive. Of course, this is not straightforward visual perception of the sort that we use to see the umbrella. But it is like visual perception in that it provides immediate and non-inferential awareness of its objects. We might call this the 'Observational Theory'.

The Observational Theory does not seem to accord very well with the fact that we frequently do have to indulge in a fair amount of psychologizing to find out what others believe. It is clear that any given action might be the upshot of any number of different psychological attitudes. This applies even in the simplest cases. For example, the agent may be taking out his umbrella because his friend is suspended from a dark balloon near a beehive, with the intention of stealing honey: the idea is to make the bees believe that it is going to rain, and therefore believe that the balloon is a dark cloud, and therefore pay no attention to it, and so fail to notice the dangling friend. Given this sort of possibility, the observer would surely be rash to judge immediately that the agent believes that it is going to rain. Rather, they would need to determine - perhaps by theory, perhaps by simulation - which of the various clusters of mental states that might have led to the action actually did so. This would involve bringing in further knowledge of the agent, the background circumstances and so forth. It is hard to see how the sort of complex mental processes involved in this kind of psychological reflection could be assimilated to any kind of observation.

The attributions of intentionality that depend on optimality or rationality are interpretations of the phenomena - a 'heuristic overlay' (1969), describing an inescapably idealized 'real pattern'. Like such abstractions as centres of gravity and parallelograms of force, the beliefs and desires posited by the intentional stance have no independent and concrete existence; and since this is so, there would be no deeper facts that could settle the issue if - most importantly - rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity. This parallels Willard Van Orman Quine (1908-2000), the most influential American philosopher of the latter half of the twentieth century, whose thesis of the indeterminacy of radical translation carries all the way over into the thesis of the indeterminacy of radical interpretation of mental states and processes.

The fact that cases of radical indeterminacy, though possible in principle, are vanishingly unlikely ever to confront us offers only small solace. The idea is deeply counter-intuitive to many philosophers, who have hankered after more 'realist' doctrines. There are two different strands of 'realism' that this view attempts to undermine:

(1) Realism about the entities purportedly described by our everyday mentalistic discourse - what I have dubbed folk psychology - such as beliefs, desires, pains, the self.

(2) Realism about content itself - the idea that there have to be events or entities that really have intentionality (as opposed to events and entities that behave only as if they had intentionality).

Consider tenet (1) as it applies to, say, fatigue: what fatigue really is, which bodily states or events fatigues are identical with, and so forth. This is a confusion that calls for diplomacy, not philosophical discovery: the choice between an 'eliminative materialism' and an 'identity theory' of fatigues is not a matter of which 'ism' is right, but of which way of speaking is most apt to wean us from these misbegotten features of our conceptual scheme.

Against tenet (2) the attack has been more indirect. On this view, some philosophers' demand for content realism is an instance of a common philosophical mistake: philosophers often manoeuvre themselves into a position from which they can see only two alternatives - infinite regress versus some sort of 'intrinsic' foundation, a prime mover of one sort or another. For instance, it has seemed obvious that for some things to be valuable as means, other things must be intrinsically valuable - ends in themselves - since otherwise we would be stuck with a vicious regress of things valuable only as means. Likewise, it has seemed obvious that although some intentionality is 'derived' (the 'aboutness' of the pencil marks composing a shopping list is derived from the intentions of the person whose list it is), unless some intentionality is 'original' and underived, there could be no derived intentionality.

But there is always another alternative: a finite regress that peters out without marked foundations or thresholds or essences. Here is how the apparent paradox is avoided: every mammal has a mammal for a mother; but this implies an infinite genealogy of mammals, which cannot be the case. The solution is not to search for an essence of mammalhood that would permit us in principle to identify the Prime Mammal, but rather to tolerate a finite regress that connects mammals to their non-mammalian ancestors by a sequence that can only be partitioned arbitrarily. The reality of today's mammals is secure without foundations.

The best instance of this theme is the idea that the way to explain the miraculous-seeming powers of an intelligent intentional system is to decompose it into hierarchically structured teams of ever more stupid intentional systems, ultimately discharging all intelligence-debts in a fabric of stupid mechanisms. Lycan (1981) has called this view 'homuncular functionalism'. One may be tempted to ask: are the subpersonal components 'real' intentional systems? At what point in the diminution of prowess, as we descend to simple neurons, does 'real' intentionality disappear? Don't ask. The reasons for regarding an individual neuron (or a thermostat) as an intentional system are unimpressive but not zero, and the security of our intentional attributions at the highest levels does not depend on identifying a lowest level of real intentionality. Another exploitation of the same idea is found in Elbow Room (1984): at what point in evolutionary history did real reason-appreciators, real selves, make their appearance? Don't ask - for the same reason. Here is yet another, more fundamental version: at what point in the early days of evolution can we speak of genuine function, genuine selection-for, and not mere fortuitous preservation of entities that happen to have some self-replicative capacity? Don't ask. Many of the more interesting and important features of our world have emerged gradually from a world that initially lacked them - function, intentionality, consciousness, morality, value - and it is a fool's errand to try to identify a first or most-simple instance of the 'real' thing. It is a mistake, for the same reason, to suppose that there must exist an answer to all the questions our system of content attribution permits us to ask. Tom says he has an older brother in Toronto and that he is an only child. What does he really believe? Could he really believe that he had a brother if he also believed he was an only child? What is the 'real' content of his mental state? There is no reason to suppose there is a principled answer.

The most sweeping conclusion drawn from this theory of content is that the large and well-regarded literature on ‘propositional attitudes’ (especially the debates over wide versus narrow content) is largely a disciplinary artefact of no long-term importance whatever, except perhaps as history’s most slowly unwinding unintended reductio ad absurdum. By and large, the disagreements explored in that literature cannot even be given an initial expression unless one takes on the assumption of strong realism about content, and its constant companion, the idea of a ‘language of thought’: a system of mental representation that is decomposable into elements rather like terms, and larger elements rather like sentences. The illusion that this is plausible, or even inevitable, is fostered by the philosophers’ normal tactic of working from examples of ‘believing-that-p’ that focus attention on mental states that are directly or indirectly language-infected, such as believing that the shortest spy is a spy, or believing that snow is white. (Do polar bears believe that snow is white? In the way we do?) There are such states - in language-using human beings - but they are not exemplary or foundational states of belief. Needing a term for them, we may call them ‘opinions’. Opinions play a large, perhaps even decisive role in our concept of a person, but they are not paradigms of the sort of cognitive element to which one can assign content in the first instance. If one starts, as one should, with the cognitive states and events occurring in non-human animals, and uses these as the foundation on which to build theories of human cognition, the language-infected states are more readily seen to be derived, less directly implicated in the explanation of behaviour, and the chief but illicit source of plausibility of the doctrine of a language of thought. Postulating a language of thought is in any event a postponement of the central problems of content ascription, not a necessary first step.

Our momentum carries us, regardless, to the causal theories of epistemology: what makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some proposed causal criteria for knowledge and justification are worth considering.

Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right sort of causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations: this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization. Proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.

For example, the forthright Australian materialist David Malet Armstrong (1973) proposed that a belief of the form ‘this (perceived) object is F’ is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘x’ and perceived object ‘y’, if ‘x’ has those properties and believes that ‘y’ is F, then ‘y’ is F. Dretske (1981) offers a rather similar account in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is F.
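
Schematically, and purely as an illustrative gloss (the notation is ours, not Armstrong’s), the nomic condition may be displayed with ‘Hx’ for ‘x has the relevant properties’ and ‘Bx(Fy)’ for ‘x believes that y is F’:

(∀x)(∀y)((Hx & Bx(Fy)) ➞ Fy)

It is the law-like status of this conditional that makes the belief a ‘completely reliable sign’ rather than one that is merely accidentally true.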

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise - to think, say, that things that look brownishly tinted to you are really some other colour, and that brownishly tinted things look to you some other colour. If you fail to heed this reason you have for thinking that your colour perception is awry, and believe of a thing that looks brownishly tinted to you that it is so tinted, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being so tinted in such a way as to be a completely reliable sign (or to carry the information) that the thing is so tinted.

One could fend off this sort of counter-example by simply adding to the causal condition the requirement that the belief be justified. But this enriched condition would still be insufficient. Suppose, for example, that in an experiment you are given a drug that in nearly all people (but not in you, as it happens) causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then adds, ‘No, wait a minute, the pill you took was just a placebo’. Suppose further that this last thing the experimenter tells you is false. Her telling you this gives you justification for believing of a thing that looks brownishly tinted to you that it is so tinted; but the fact about this justification that is unknown to you (that the experimenter’s last statement was false) makes it the case that your true belief is not knowledge, even though it satisfies Armstrong’s causal condition.

Goldman (1986) has proposed an importantly different sort of causal criterion, namely that a true belief is knowledge if it is produced by a type of process that is both ‘globally’ and ‘locally’ reliable. Global reliability is a matter of the process’s propensity to cause true beliefs being sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false.
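
Goldman’s criterion can be set out schematically (our paraphrase of the account just described, not Goldman’s own formulation). A true belief B that ‘p’, produced by a process of type P, is knowledge just in case:

(i) the propensity of P to produce true beliefs is sufficiently high (global reliability); and

(ii) there is no relevant counterfactual situation in which ‘p’ is false but P would still have produced B (local reliability).

Condition (i) does the work of justification; condition (ii) is meant to exclude cases in which a justified belief is true only by accident.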

The theory of relevant alternatives is best understood as an attempt to accommodate two opposing strands in our thinking about knowledge. The first is that knowledge is an absolute concept: on one interpretation, this means that the justification or evidence one must have in order to know a proposition ‘p’ must be sufficient to eliminate all the alternatives to ‘p’ (where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’). The second strand is that, nevertheless, we do know many things.

The relevant alternatives theory reconciles the two strands by relativizing the absoluteness to a standard: knowledge requires the elimination not of all alternatives, but only of the relevant alternatives. So the relevant alternatives view preserves both strands in our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The relevant alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat’ and the concept ‘empty’. Both appear to be absolute concepts - a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard: in the case of ‘flat’, there is a standard for what counts as a bump, and in the case of ‘empty’, there is a standard for what counts as a thing. We would not deny that a table is flat because a microscope reveals irregularities in its surface. Nor would we deny that a warehouse is empty because it contains particles of dust. To be flat is to be free of any relevant bumps; to be empty is to be devoid of all relevant things. Analogously, the relevant alternatives theory says that to know a proposition is to have evidence that eliminates all relevant alternatives.
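
The analogy can be condensed into a schematic gloss (ours, not a formulation from the literature): S knows that ‘p’ just in case, for every alternative ‘q’,

(q is incompatible with p & q is relevant) ➞ S’s evidence eliminates q.

Sceptical alternatives are then handled not by eliminating them but by denying their relevance.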

Some philosophers have argued that the relevant alternatives theory of knowledge entails the falsity of the principle that the set of propositions known by S is closed under known (by S) entailment; others have disputed this. The principle in question affirms the following conditional, the closure principle:

If S knows p and S knows that p entails q, then S knows q.

According to the theory of relevant alternatives, we can know a proposition ‘p’ without knowing that some (non-relevant) alternative to ‘p’ is false. But once an alternative ‘h’ to ‘p’ is incompatible with ‘p’, ‘p’ will trivially entail not-h. So it will be possible to know some proposition without knowing another proposition trivially entailed by it. For example, we can know that we see a zebra without knowing that it is not the case that we see a cleverly disguised mule (on the assumption that ‘we see a cleverly disguised mule’ is not a relevant alternative). This involves a violation of the closure principle - an interesting consequence of the theory, because the closure principle seems to many to be quite intuitive. In fact, we can view sceptical arguments as employing the closure principle as a premise, along with the premise that we do not know that the alternatives raised by the sceptic are false. From these two premisses it follows (on the assumption that we see that the propositions we believe entail the falsity of sceptical alternatives) that we do not know the propositions we believe. For example, it follows from the closure principle and the fact that we do not know that we do not see a cleverly disguised mule, that we do not know that we see a zebra. We can view the relevant alternatives theory as replying to the sceptical arguments by denying the closure principle.
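
The sceptical argument just described can be displayed in premiss-conclusion form; ‘K(p)’ abbreviates ‘we know that p’, and the propositions are those of the zebra example:

(1) If K(we see a zebra) and K(that we see a zebra entails that we do not see a cleverly disguised mule), then K(we do not see a cleverly disguised mule). (closure)

(2) K(that we see a zebra entails that we do not see a cleverly disguised mule).

(3) Not-K(we do not see a cleverly disguised mule).

Therefore, not-K(we see a zebra).

The sceptic accepts (1)-(3) and embraces the conclusion; the relevant alternatives theorist blocks it by rejecting premiss (1).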

What makes an alternative relevant? What standard do the alternatives raised by the sceptic fail to meet? These questions are notoriously difficult to answer with any degree of precision or generality. This difficulty has led critics to view the theory as hopelessly obscure. The problem can be illustrated through an example. Suppose Smith sees a barn and believes that he does, on the basis of very good perceptual evidence. When is the alternative that Smith sees a papier-mâché replica relevant? If there are many such replicas in the immediate area, then this alternative is relevant: in these circumstances, Smith fails to know that he sees a barn unless he knows that it is not the case that he sees a barn replica. Where no such replicas exist, the alternative is not relevant: Smith can know that he sees a barn without knowing that he does not see a barn replica.

This suggests that a criterion of relevance is something like probability conditional on Smith’s evidence and certain features of the circumstances. But which features of the circumstances do we count? Consider a case where we want the result that the barn-replica alternative is clearly relevant, e.g., a case where there are numerous barn replicas in the area. Does the suggested criterion give us the result we wanted? The probability that Smith sees a barn replica, given his evidence and his location in an area where there are many barn replicas, is high. However, that same probability conditional on his evidence and his particular visual orientation toward a real barn is quite low. We want the probability to be conditional on features of the circumstances like the former, but not on features like the latter. But how do we capture the difference in a general formulation?

How significant a problem is this for the theory of relevant alternatives? That depends on how we construe the theory. If the theory is supposed to provide us with an analysis of knowledge, then the lack of precise criteria of relevance surely constitutes a serious problem. However, if the theory is viewed instead as providing a response to sceptical arguments, it can be argued that the difficulty has little significance for the overall success of the theory.

What justifies the acceptance of a theory? Although particular versions of empiricism have met many criticisms, it is still attractive to look for an answer in some sort of empiricist terms: in terms, that is, of support by the available evidence. How else could the objectivity of science be defended except by showing that its conclusions (and in particular its theoretical conclusions - the theories it presently accepts) are somehow legitimately based on agreed observational and experimental evidence? But, as is well known, theories in general pose a problem for empiricism.

Allow the empiricist the assumption that there are observational statements whose truth-values can be intersubjectively agreed, and note the exploratory, non-demonstrative use of experiment in contemporary science. Philosophers have nevertheless tended to identify experiments with their observed results, and these with the testing of theory; they have assumed that observation provides an open window for the mind onto a world of natural facts and regularities, and that the main problem for the scientist is to establish the uniqueness of a theoretical interpretation. On this picture, experiments merely enable the production of (true) observation statements, and shared, replicable observations are the basis for scientific consensus about an objective reality. Yet it is clear that most scientific claims are genuinely theoretical: neither themselves observational nor derivable deductively from observation statements (nor from inductive generalizations thereof). Accepting that there are phenomena to which we have more or less direct access, theories seem, at least when taken literally, to tell us about what is going on ‘underneath’ the observable, directly accessible phenomena in order to produce those phenomena. The accounts given by such theories of this trans-empirical reality, simply because it is trans-empirical, can never be established by data, nor even by the ‘natural’ inductive generalizations of our data. No amount of evidence about tracks in cloud chambers and the like can deductively establish that those tracks are produced by ‘trans-observational’ electrons.

One response would, of course, be to invoke some strict empiricist account of meaning, insisting that talk of electrons and the like is in fact just shorthand for talk of tracks in cloud chambers and the like. This account, however, has few, if any, current defenders. And without it, the empiricist must acknowledge that, if we take any presently accepted theory, there must be alternatives - different theories, indefinitely many of them - which fit the evidence equally well, assuming that the only evidential criterion is the entailment of the correct observational results.

All the same, there is an easy general result: assume that a theory is any deductively closed set of sentences; assume, with the empiricist, that the language in which these sentences are expressed has two sorts of predicates (observational and theoretical); and assume, finally, that the entailment of the evidence is the only constraint on empirical adequacy. Then there are always indefinitely many different theories which are equally empirically adequate. Consider the restriction of ‘T’ to quantifier-free sentences expressed purely in the observational vocabulary: any conservative extension of that restricted set of T’s consequences back into the full vocabulary is a ‘theory’ co-empirically adequate with ‘T’ - entailing the same singular observational statements. Unless very special conditions apply (conditions which do not apply to any real scientific theory), some of the empirically equivalent theories will formally contradict ‘T’. (A similar straightforward demonstration works for the currently more fashionable account of theories as sets of models.)
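
In outline, and as our schematic rendering of the result rather than a quotation: let Obs(T) be the set of T’s quantifier-free consequences in the observational vocabulary. Then any deductively closed T′ in the full vocabulary satisfying

Obs(T′) = Obs(T)

is empirically equivalent to T on the stated criterion; and any conservative extension of Obs(T) back into the full vocabulary satisfies this condition, however much it disagrees with T over the theoretical vocabulary.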

How can an empiricist who rejects the claim that two empirically equivalent theories are thereby fully equivalent explain why the particular theory ‘T’ that is, as a matter of fact, accepted in science is preferred to these other possible theories with the same observational content? Obviously the answer must be: by bringing in further criteria beyond that of simply having the right observational consequences. Simplicity, coherence with other accepted theories, and unity are favourite contenders. There are notorious problems in formulating these criteria at all precisely; but suppose, for present purposes, that we have a strong enough intuitive grasp to operate usefully with them. What, then, is the status of such further criteria?

The empiricist-instrumentalist position, newly adopted and sharply argued by van Fraassen, is that those further criteria are ‘pragmatic’ - that is, they involve essential reference to ourselves as ‘theory-users’. We happen to prefer, for our own purposes, simple, coherent, unified theories - but this is only a reflection of our preferences. It would be a mistake to think of those features as supplying extra reasons to believe in the truth (or approximate truth) of the theory that has them. Van Fraassen’s account differs from some standard instrumentalist-empiricist accounts in recognizing the extra content of a theory (beyond its directly observational content) as genuinely declarative, as consisting of true-or-false assertions about the hidden structure of the world. His account accepts that the extra content can neither be eliminated by defining theoretical notions in observational terms, nor be properly regarded as only apparently declarative - as in fact a mere codification schema. For van Fraassen, if a theory says that there are electrons, then the theory should be taken as meaning what it says, without any positivist reinterpretation of the meaning that might make ‘there are electrons’ mere shorthand for some complicated set of statements about tracks in cloud chambers or the like.

Consider, then, contradictory but empirically equivalent theories, such as the theory T1 that there are electrons and the theory T2 that all the observable phenomena are as if there are electrons, but there are none. Van Fraassen’s account entails that each has a truth-value, at most one of which is true. Science may accept T1 rather than T2, but this need not mean that it is rational to believe that T1 is more likely to be true (or otherwise appropriately connected with nature). The only belief involved in the acceptance of a theory is belief in the theory’s empirical adequacy. To accept the quantum theory, for example, entails believing that it ‘saves the phenomena’ - all the (relevant) phenomena, but only the phenomena. Theories do ‘say more’ than can be checked empirically even in principle. What more they say may indeed be true, but acceptance of the theory does not involve belief in the truth of the ‘more’ that theories say.

Preferences between theories that are empirically equivalent are thus accounted for, because acceptance involves more than belief: as well as this epistemic dimension, acceptance also has a pragmatic dimension. Simplicity, (relative) freedom from ad hoc assumptions, ‘unity’, and the like are genuine virtues that can supply good reasons to accept one theory rather than another; but they are pragmatic virtues, reflecting the way we happen to like to do science rather than anything about the world. It is a mistake to think that they do more: on this view the rationality of science and of scientific practice can be accounted for without belief in the truth (or approximate truth) of accepted theories. Van Fraassen’s account conflicts with what many others see as very strong intuitions.

The most generally accepted account of the internalism/externalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken. Epistemologists often use the distinction, however, without offering any very explicit explication of it.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification. It has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content. The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, without any change of position, new information, and so forth. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism - the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true - would require the strong interpretation.

Perhaps the clearest example of an internalist position would be a ‘foundationalist’ view according to which foundational beliefs pertain to immediately experienced states of mind, and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements, or only the capacity to become aware of them, is required. Similarly, a ‘coherentist’ view could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is, roughly, that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

Two general lines of argument are commonly advanced in favour of justificatory externalism. The first starts from the allegedly common-sensical premise that knowledge can be unproblematically ascribed to relatively unsophisticated adults, to young children, and even to higher animals. It is then argued that such ascriptions would be untenable on the standard internalist accounts of epistemic justification (assuming that epistemic justification is a necessary condition for knowledge), since the beliefs and inferences involved in such accounts are too complicated and sophisticated to be plausibly ascribed to such subjects. Thus only an externalist view can make sense of such common-sense ascriptions, and this, on the presumption that common sense is correct, constitutes a strong argument in favour of externalism. An internalist may respond by challenging the initial premise, arguing that such ascriptions of knowledge are exaggerated, while perhaps at the same time claiming that the cognitive situation of at least some of the subjects in question is less restricted than the argument claims. A quite different response would be to reject the assumption that epistemic justification is a necessary condition for knowledge, perhaps by adopting an externalist account of knowledge, rather than of justification, of the sort discussed below.

The second general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible, non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously the cogency of this argument depends on the plausibility of the two assumptions just noted. An internalist can reply, first, that it is not obvious that internalist epistemology is doomed to failure: the explanation for the present lack of success may simply be the extreme difficulty of the problems in question. Second, it can be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends essentially on the intuitive conviction that we do have reasons within our grasp for thinking that the various beliefs questioned by the sceptic are true - a conviction that the proponent of this argument must of course reject.

The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions for justification, by appealing to examples of beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. Cases of this general sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be a result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most interesting reply to this sort of counter-example, on behalf of reliabilism specifically, holds that the reliability of a cognitive process is to be assessed in ‘normal’ possible worlds - that is, in possible worlds that are the way our world is common-sensically believed to be - rather than in the world which actually contains the belief being judged. Since the cognitive processes employed in the Cartesian demon case are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious further issue is whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc.

The second, correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions, by citing cases where those conditions are satisfied but where the believers in question seem intuitively not to be justified. Here the most widely discussed examples have to do with possible occult cognitive capacities such as clairvoyance. Applying the point once again to reliabilism specifically, the claim is that a reliable clairvoyant who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and hence not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to ‘bite the bullet’ and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while still stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can indeed handle particular cases well enough to avoid clear intuitive implausibility, the issue is whether there will not always be equally problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements, other than the general internalist view of justification that externalists are committed to reject.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other can be (and normally will be) purely external. Here the internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require; for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is a result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept in epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the common-sense conviction that animals, young children and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also less vulnerable to internalist counter-examples of the sort discussed above, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. Here too, a view that appeals to both internal and external elements is standardly classified as an externalist view.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language - more specifically, from the various phenomena pertaining to natural-kind terms, indexicals, and so forth that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person depends on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. - and not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the status of that content as justifying further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts - that only internally accessible content can either be justified or justify anything else - but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

To have a word or a picture, or any other object, in one’s mind seems to be one thing; to understand it is quite another. A major target of the later Ludwig Wittgenstein (1889-1951) is the suggestion that this understanding is achieved by a further presence, so that words might be understood if they are accompanied by ideas, for example. Wittgenstein insists that the extra presence merely raises the same kind of problem again. The better suggestion is that understanding is to be thought of as possession of a technique, or skill; this is the point of the slogan that ‘meaning is use’. The idea is congenial to pragmatism and hostile to ineffable and incommunicable understandings.

Whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding, the philosophical problem is to demystify this power and to relate it to what we know of ourselves and the world. Contributions to this study include the theory of speech acts and the investigation of communication, of the relationship between words and ideas, and of words and the world.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by the German mathematician and philosopher of mathematics Gottlob Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein, and is a leading idea of the American philosopher Donald Davidson (1917-2003). The conception has remained so central that those who offer opposing theories characteristically define their positions by reference to it.

The conception of meaning as truth-conditions need not, and should not, be advanced as in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentences in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. It is this claim, and its attendant problems, which will be the concern of what follows.

The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex, and it is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates. For an extremely simple, but nevertheless structured, language, we can state the contributions various expressions make to truth-conditions as follows:

A1: The referent of ‘London’ is London.

A2: The referent of ‘Paris’ is Paris.

A3: Any sentence of the form ‘a is beautiful’ is true if and only if the referent of ‘a’ is beautiful.

A4: Any sentence of the form ‘a is larger than b’ is true if and only if the referent of ‘a’ is larger than the referent of ‘b’.

A5: Any sentence of the form ‘it is not the case that A’ is true if and only if it is not the case that ‘A’ is true.

A6: Any sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true.

The principles A1-A6 form a simple theory of truth for a fragment of English. In this theory it is possible to derive these consequences: that ‘Paris is beautiful’ is true if and only if Paris is beautiful (from A2 and A3); that ‘London is larger than Paris and it is not the case that London is beautiful’ is true if and only if London is larger than Paris and it is not the case that London is beautiful (from A1-A6); and in general, for any sentence ‘A’ of this simple language, we can derive something of the form ‘‘A’ is true if and only if A’.
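
For illustration, the first of these consequences can be derived in two steps:

(i) ‘Paris is beautiful’ is true if and only if the referent of ‘Paris’ is beautiful. (instance of A3)

(ii) The referent of ‘Paris’ is Paris. (A2)

Therefore, ‘Paris is beautiful’ is true if and only if Paris is beautiful, from (i) and (ii) by substitution of identicals. The longer conjunctive example is derived in the same way, routing the negation through A5 and the conjunction through A6.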

Yet the theorist of truth-conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom:

‘London’ refers to the city in which there was a huge fire in 1666.

This is a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for A1 in our simple truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name ‘London’ without knowing that last-mentioned truth-condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth-conditions to state the constraints on the acceptability of axioms in a way which does not presuppose any prior, truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth-conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.

We can take the charge of triviality first. In more detail, it would run thus: since the content of a claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth-conditions must provide the substantive account. The charge rests upon what has been called the ‘redundancy theory of truth’ - the theory also known as ‘minimalism’, or the ‘deflationary’ view of truth - fathered by Frege and by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30). The essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points that ‘it is true that p’ says no more nor less than ‘p’ (hence ‘redundancy’), and that in less direct contexts, such as ‘everything he said was true’ or ‘all logical consequences of truths are true’, the predicate functions as a device enabling us to generalize, rather than as an adjective or predicate describing the things he said or the kinds of propositions that follow from true propositions. For example, ‘all logical consequences of truths are true’ becomes ‘(∀p, q)((p & (p ➞ q)) ➞ q)’, in which there is no use of a notion of truth.
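
The same generalizing role can be displayed for the other example (a standard deflationist rendering, not a quotation from Ramsey): ‘everything he said was true’ becomes

(∀p)(he said that p ➞ p),

in which, again, no truth-predicate appears: quantification into sentence position does the work that the predicate appeared to do.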

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. But perhaps we can have the norms even where objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; and discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.

The minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle - the principle that, for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is widely accepted, both by opponents and by supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning: if the claim that the sentence ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth-conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson and Horwich - and, confusingly and inconsistently, by Frege himself.

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as:

‘London is beautiful’ is true if and only if

London is beautiful

can be explained are precisely A1 and A3. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does; but that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’. The idea that facts about the reference of particular words can be explanatory of facts about the truth-conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something which is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression’s having the particular reference it does to be partially explanatory of the particular truth-condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something which is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth which has, among the many links which hold it in place, systematic connections with the semantic values of subsentential expressions.

A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth which go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic - utterances, types-in-a-language, or whatever - then the equivalence schema will not cover all cases, but only those in the theorist’s own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The defender of the minimal theory is likely to say that if a sentence ‘S’ of a foreign language is best translated by our sentence ‘p’, then the foreign sentence ‘S’ is true if and only if p. Now the best translation of a sentence must preserve the concepts expressed in the sentence, and constraints involving a general notion of truth are pervasive in any plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what may be called a ‘Determination Theory’ for that account - that is, a specification of how the account contributes to fixing the semantic value of that concept. The notion of a concept’s semantic value is the notion of something which makes a certain contribution to the truth-conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.

It is also plausible that there are general constraints on the form of such Determination Theories, constraints which involve truth and which are not derivable from the minimalist’s conception. Suppose that concepts are individuated by their possession conditions; this idea, and the ways in which possession conditions involve the thinker’s environmental and social relations, are taken up below.

An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition which must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: it is the unique concept ‘C’ to possess which a thinker has to find certain forms of inference compelling, without basing them on any further inference or information - from any two premisses ‘A’ and ‘B’, ‘ACB’ can be inferred, and from any premiss ‘ACB’, each of ‘A’ and ‘B’ can be inferred. Again, a relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those which are. A statement which individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
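
Set out explicitly - this is our tabulation of the forms just described - the inferences the thinker must find compelling are these, where ‘C’ is the concept being individuated:

A, B / ACB   ACB / A   ACB / B

These are just the familiar introduction and elimination rules for conjunction, with ‘C’ in place of ‘and’.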

A possession condition for a particular concept may actually make use of that concept; the possession condition for ‘and’ does not. We can also expect to use relatively observational concepts in specifying the kinds of experience which have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition; otherwise we would be presupposing possession of the concept in an account which was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two families which plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers, ‘there are 0 so-and-so’s’, ‘there is 1 so-and-so’, . . .; and the family consisting of the concepts ‘belief’ and ‘desire’. Such families have come to be known as ‘local holisms’. A local holism does not prevent the individuation of a concept by its possession condition; rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such a condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated: the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.

A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way, and it is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued, from intuitions about particular examples, that even though the thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker’s social environment is varied. A possession condition which properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.

Once again, some general principles involving truth can, as Horwich has emphasized, be derived from the equivalence schemata using minimal logical apparatus. Consider, for instance, the principle that ‘Paris is beautiful and London is beautiful’ is true if and only if ‘Paris is beautiful’ is true and ‘London is beautiful’ is true. But no logical manipulations of the equivalence schemata will allow the derivation of the general constraints governing possession conditions, truth and the assignment of semantic values. Those constraints can of course be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
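
The derivation is elementary; our reconstruction of the ‘minimal logical apparatus’ runs as follows. The equivalence schema yields:

(i) ‘Paris is beautiful and London is beautiful’ is true if and only if Paris is beautiful and London is beautiful;

(ii) ‘Paris is beautiful’ is true if and only if Paris is beautiful;

(iii) ‘London is beautiful’ is true if and only if London is beautiful.

Substituting (ii) and (iii) into the right-hand side of (i) gives the principle in question. Nothing beyond the schemata and ordinary logic is required; what cannot be so derived are the constraints linking truth to possession conditions and semantic values.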

We may now turn to the other question: what is it for a person’s language to be correctly describable by a semantic theory containing a particular axiom, such as the axiom A6 above for conjunction? This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person’s possession of the concept of conjunction, and be concerned with what has to be true for the axiom correctly to describe his language. At a deeper level, an answer should not sidestep the issue of what it is to possess the concept. The answers to both questions are of great interest.

When a person means conjunction by 'and', he is not necessarily capable of formulating the axiom A6 explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word 'and' as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences he utters containing the word 'and'. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks exactly the same language has to use exactly the same algorithms for computing the meaning of a sentence. In the past thirteen years, particularly in the work of Davies and Evans, a conception has evolved according to which an axiom like A6 is true of a person's language if it is a common component in the explanation of his understanding of each sentence containing the word 'and', a common component which explains why each such sentence is understood as meaning something involving conjunction. This conception can also be elaborated in computational terms: for the axiom A6 to be true of a person's language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form 'A and B' is true if and only if 'A' is true and 'B' is true. Many different algorithms may equally draw on this information. The psychological reality of a semantic theory thus involves what Marr (1982) would classify as something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed. This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories. Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms which the language user employs. The identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them - for these linguistic theories do make commitments about the information drawn upon by mechanisms in the language user.
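
The point that many distinct algorithms can draw on the same semantic information can be made concrete with a small sketch. The following Python fragment is purely illustrative (the function names and the two evaluation strategies are my own, not anything in the text): both procedures draw on the information that a sentence of the form 'A and B' is true if and only if 'A' is true and 'B' is true, while differing at the algorithmic level.

# Two different algorithms drawing on the same semantic information:
# a sentence 'A and B' is true iff 'A' is true and 'B' is true (axiom A6).
# The shared information corresponds roughly to Marr's level one; the two
# differing procedures below are alternatives at his level two.

def eval_short_circuit(sentence, valuation):
    """Evaluate the right conjunct only when the left one is true."""
    if ' and ' in sentence:
        left, right = sentence.split(' and ', 1)
        if not eval_short_circuit(left, valuation):
            return False
        return eval_short_circuit(right, valuation)
    return valuation[sentence]

def eval_both_sides(sentence, valuation):
    """Evaluate both conjuncts first, then combine - a different
    algorithm drawing on exactly the same information."""
    if ' and ' in sentence:
        left, right = sentence.split(' and ', 1)
        results = [eval_both_sides(left, valuation),
                   eval_both_sides(right, valuation)]
        return all(results)
    return valuation[sentence]

v = {'Paris is beautiful': True, 'London is beautiful': True}
s = 'Paris is beautiful and London is beautiful'
assert eval_short_circuit(s, v) and eval_both_sides(s, v)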

This answer to the question of what it is for an axiom to be true of a person's language clearly takes for granted the person's possession of the concept expressed by the word treated by the axiom. In the example of the axiom A6, the information drawn upon is that sentences of the form 'A and B' are true if and only if 'A' is true and 'B' is true. This informational content employs, as it has to if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing 'and'. The computational answer we have returned therefore needs further elaboration if it is not to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of the conditions for possessing a given concept. It is plausible that the concept of conjunction is individuated by the following possession condition:



The concept of conjunction is that concept C to possess which a thinker must meet the following condition: he finds inferences of the following forms compelling, does not find them compelling as a result of any reasoning, and finds them compelling because they are of these forms:



pCq        pCq        p   q
---        ---        -----
 p          q          pCq



Here 'p' and 'q' range over complete propositional thoughts, not sentences. When axiom A6 is true of a person's language, there is a global dovetailing between this possession condition for the concept of conjunction and certain of his practices involving the word 'and'. For the case of conjunction, the dovetailing involves at least this:

If the possession condition for conjunction entails that the thinker who possesses the concept of conjunction must be willing to make certain transitions involving the thought p&q, and the thinker's sentence 'A' means that p and his sentence 'B' means that q, then the thinker must be willing to make the corresponding linguistic transition involving the sentence 'A and B'.

This is only part of what is involved in the required dovetailing. Given what we have already said about the uniform explanation of the understanding of the various occurrences of a given word, we should also add that there must be a uniform (unconscious, computational) explanation of the language user's willingness to make the corresponding transitions involving the sentence 'A and B'.

This dovetailing account returns an answer to the deeper question, because neither the possession condition for conjunction, nor the dovetailing condition which builds upon that possession condition, takes for granted the thinker's possession of the concept expressed by 'and'. The dovetailing account for conjunction is an instance of a more general schema, which can be applied to any concept. The case of conjunction is, of course, exceptionally simple in several respects. Possession conditions for other concepts will speak not just of inferential transitions, but of certain conditions in which beliefs involving the concept in question are accepted or rejected, and the corresponding dovetailing conditions will inherit these features. The dovetailing account has also to be underpinned by a general rationale linking contributions to truth conditions with the particular possession condition proposed for a concept. It is part of the task of the theory of concepts to supply this by developing Determination Theories for particular concepts.

In some cases, a relatively clear account is possible of how a concept can feature in thoughts which may be true though unverifiable. The possession condition for the quantificational concept all natural numbers can in outline run thus: this quantifier is that concept Cx . . . x . . . to possess which the thinker has to find any inference of the form



CxFx
----
 Fn



compelling, where 'n' is a concept of a natural number, and does not have to find anything else essentially containing Cx . . . x . . . compelling. The straightforward Determination Theory for this possession condition is one on which such a thought CxFx is true if and only if all natural numbers are F. That all natural numbers are F is a condition which can hold without our being able to establish that it holds. So an axiom of a truth theory which dovetails with this possession condition for universal quantification over the natural numbers will be a component of a realistic, non-verificationist theory of truth conditions.
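
In schematic form (my rendering, not the author's notation), the possession condition and its straightforward Determination Theory are:

\[
\frac{CxFx}{Fn} \quad (\text{for any numeral concept } n), \qquad CxFx \text{ is true} \;\leftrightarrow\; \text{every natural number is } F.
\]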

Finally, this response to the deeper question allows us to answer two challenges to the conception of meaning as truth-conditions. First, there was the question left hanging earlier, of how the theorist of truth-conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person's language will be different accounts. Second, there is a challenge repeatedly made by minimalist theories of truth, to the effect that the theorist of meaning as truth-conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand that expression. The combined accounts for each of the expressions which comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth-conditions fully to meet the challenge.

A widely discussed idea is that for a subject to be in a certain set of content-involving states is for attribution of those states to make the subject rationally intelligible. Perceptions make it rational for a person to form corresponding beliefs. Beliefs make it rational to draw certain inferences. Belief and desire make rational the formation of particular intentions, and the performance of the appropriate actions. People are frequently irrational, of course, but a governing ideal of this approach is that for any family of contents, there is some minimal core of rational transitions to or from states involving them, a core that a person must respect if his states are to be attributed with those contents at all. We contrast what we want to do with what we must do - whether for reasons of morality or duty, or even for reasons of practical necessity (to get what we wanted in the first place). Accordingly, actions flowing from our own desires have seemed to be those that most fully express our individual natures and wills, and those for which we are personally most responsible. But desire has also seemed to be a principle of action contrary to and at war with our better natures as rational agents. For it is principally from our own differing perspectives upon what would be good that each of us wants what he does, each point of view being defined by one's own interests and pleasures. In this, the representations of desire are like those of sensory perception, similarly shaped by the perspective of the perceiver and the idiosyncrasies of the perceiver's constitution; the dialectic about desire and its objects recapitulates that of perception and sensible qualities. The strength of a desire, for instance, varies with the state of the subject more or less independently of the character, and the actual utility, of the object wanted. Such facts cast doubt on the 'objectivity' of desire, and on the existence of a correlative property of 'goodness' inherent in the objects of our desires and independent of them. Perhaps, as the Dutch Jewish rationalist Benedictus de Spinoza (1632-77) put it, it is not that we want what we think good, but that we think good what we happen to want - the 'good' in what we want being a mere shadow cast by the desire for it. (There is a parallel Protagorean view of belief, similarly sceptical of truth.) The serious defence of such a view, however, would require a systematic reduction of apparent facts about goodness to facts about desire, and an analysis of desire which in turn makes no reference to goodness. While this is yet to be provided, moral psychologists have sought to vindicate an idea of objective goodness - for example, as what would be good from all points of view, or none - or, in the manner of the German philosopher Immanuel Kant, to establish another principle of action (the will or practical reason) conceived as an autonomous source of action, independent of desire or its objects; and this tradition has tended to minimize the role of desire in the genesis of action.

Ascribing states with content to an actual person has to proceed simultaneously with attribution of a wide range of non-rational states and capacities. In general, we cannot understand a person's reasons for acting as he does without knowing the array of emotions and sensations to which he is subject: what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even the content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. A perception of the world as being a certain way is not (and could not be) under a subject's rational control. Though it is true and important that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons - observational beliefs about the environment - have contents which can only be elucidated by referring back to perceptual experience. In this respect (as in others), perceptual states differ from beliefs and desires, which are individuated by mentioning what they provide reasons for judging or doing: frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

What is the significance for theories of content of the fact that it is almost certainly adaptive for members of a species to have a system of states with representational contents which are capable of influencing their actions appropriately? According to teleological theories of content, a constitutive account of content - one which says what it is for a state to have a given content - must make use of the notions of natural function and teleology. The intuitive idea is that for a belief state to have a given content p is for the belief-forming mechanisms which produced it to have the function (perhaps derivatively) of producing that state only when it is the case that p. One issue this approach must tackle is whether it is really capable of associating with states the classical, realistic, verification-transcendent contents which, pre-theoretically, we attribute to them. It is not clear that a content's holding unknowably can influence the replication of belief-forming mechanisms. But even if content itself proves to resist elucidation in terms of natural function and selection, it remains a very attractive view that selection must be mentioned in an account of what associates something - such as a sentence - with a particular content, even though that content itself may be individuated by other means.
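
The intuitive idea can be put schematically (a sketch only; worked-out teleological theories add many refinements):

\[
b \text{ has content } p \;\leftrightarrow\; \text{the mechanism that produced } b \text{ has the natural function of producing states of } b\text{'s type only when } p.
\]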

Contents are normally specified by 'that . . .' clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type: the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. The type involves a specification of surfaces and features in the environment, and of their distances and directions from the perceiver's body as origin. Such contents lack any sentence-like structure at all. Supporters of the view that all content is conceptual will argue that the legitimacy of using these spatial types in giving the content of experience does not undermine their thesis: they will say that the spatial type is just a way of capturing what can equally be captured by conceptual components such as 'that distance' or 'that direction', where these demonstratives are made available by the perception in question. Friends of non-conceptual content will respond that these demonstratives themselves cannot be elucidated without mentioning the spatial types, which lack sentence-like structure.
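
A toy rendering of a spatial type as a data structure may make the contrast vivid. The names and fields below are my own illustrative assumptions; the point is only that nothing in such a specification has subject-predicate, sentence-like structure.

from dataclasses import dataclass

# A toy 'spatial type': surfaces specified by distance and direction from
# the perceiver's body as origin. It is a layout specification, not a
# sentence-like structure.
@dataclass(frozen=True)
class SurfacePatch:
    distance_m: float      # distance from the origin (the perceiver's body)
    azimuth_deg: float     # direction as a bearing relative to facing
    elevation_deg: float   # direction above or below eye level

# On this view, the content of an experience is the type under which the
# surrounding region of space must fall for the experience to be correct.
spatial_type = frozenset({
    SurfacePatch(distance_m=2.0, azimuth_deg=0.0, elevation_deg=-10.0),
    SurfacePatch(distance_m=5.5, azimuth_deg=45.0, elevation_deg=0.0),
})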

Content-involving states are individuated in part by reference to the agent's relations to things and properties in his environment. Wanting to see a particular movie and believing that the building over there is a cinema showing it make rational the action of walking in the direction of that building.

However, in the general philosophy of mind, desire has recently received new attention from those who understand mental states in terms of their causal or functional role in the determination of rational behaviour, and in particular from philosophers trying to understand the semantic content or intentional character of mental states in those terms. This is 'functionalism': the doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that the koala (an arboreal Australian marsupial, Phascolarctos cinereus) is dangerous - is the functional relation it bears to the subject's perceptual stimuli, behavioural responses, and other mental states.
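
A common way to make the functionalist idea concrete is a machine-table sketch, in which a state is identified entirely by its transitions. Everything in the table below (the states, inputs and outputs) is invented for illustration; it is a caricature of the doctrine, not an analysis of pain.

# A machine-table sketch of functionalism: what makes a state 'pain' is
# not its intrinsic nature but its place in a network of inputs, outputs
# and other states. States and inputs are invented for illustration.
machine_table = {
    # (current state, input)  ->  (next state, behavioural output)
    ('calm', 'tissue_damage'): ('pain', 'wince'),
    ('pain', 'aspirin'):       ('calm', 'relax'),
    ('pain', 'tissue_damage'): ('pain', 'cry_out'),
    ('calm', 'aspirin'):       ('calm', None),
}

def step(state, stimulus):
    """Transition function: the identity of each state is exhausted by
    rows like these - its functional role."""
    return machine_table[(state, stimulus)]

state, output = step('calm', 'tissue_damage')   # -> ('pain', 'wince')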

Conceptual (sometimes computational, cognitive, causal or functional) role semantics (CRS) entered philosophy through the philosophy of language, not the philosophy of mind. The core idea behind CRS in the philosophy of language is that the ways linguistic expressions are related to one another determine what the expressions in the language mean. There is a considerable affinity between CRS and the structuralist semiotics that has been influential in linguistics. According to the latter, languages are to be viewed as systems of differences: the basic idea is that the semantic force (or 'value') of an utterance is determined by its position in the space of possibilities that one's language offers. CRS also has affinities with what artificial intelligence researchers call 'procedural semantics'; the essential idea here is that providing a compiler for a language is equivalent to specifying a semantic theory for it: meanings are the procedures that a computer is instructed to execute by a program.

Nevertheless, according to CRS, the meaning of a thought is determined by the thought's role in a system of states; to specify a thought is not to specify its truth or referential conditions, but to specify its role. Walter's and twin-Walter's thoughts, though they differ in truth and referential conditions, share the same conceptual role, and it is by virtue of this commonality that they behave type-identically. If Walter and twin-Walter each has a belief that he would express by 'water quenches thirst', CRS can explain and predict their each dipping his can into H2O and XYZ respectively. Thus CRS would seem to be vindicated (though not to Jerry Fodor, who rejects CRS for both external and internal reasons).

Nonetheless, if, as Fodor contends, thoughts have recombinable linguistic ingredients, then, of course, for the CRS theorist questions arise about the roles of expressions in the language of thought as well as in the public language we speak and write. Accordingly, CRS theorists divide not only over their aims, but also over CRS's proper domain. Some hold that public meaning is somehow derivative from (or inherited from) an internal mental language ('mentalese'), and that a mentalese expression has autonomous meaning. So, for example, the inscriptions on this page require for their understanding translation, or at least transliteration, into the language of thought; representations in the brain require no such translation or transliteration. Others hold that the language of thought just is public language internalized, and that it is public expressions which have autonomous (or primary) meaning in virtue of their conceptual role.

Once one has decided upon the aims and the proper province of conceptual role semantics, there remains the question of which relations among expressions - public or mental - constitute their conceptual roles. Because most CRS theorists leave the notion of a conceptual role as a blank cheque, the options are open-ended. The conceptual role of a (mental) expression might be its causal associations: any disposition to token (for example, to utter or think) an expression e when tokening another e', or an ordered n-tuple <e', e'', . . .>, or vice versa, can count as part of the conceptual role of e. A more common option is to characterize conceptual role not causally but inferentially (these need not be incompatible, depending upon one's attitude towards the naturalization of inference): the conceptual role of an expression e in L might consist of the set of actual and potential inferences to e, or of the set of actual and potential inferences from e, or, more commonly, of the ordered pair consisting of these two sets. And if it is sentences which have non-derived inferential roles, what would it mean to talk of the inferential role of words? Some have found it natural to think of the inferential role of a word as represented by the set of inferential roles of the sentences in which the word appears.
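
A minimal sketch of the 'ordered pair of inference sets' option, with invented example inferences (the representation of inferences as pairs of strings is my own simplification):

# The inferential-role option: the conceptual role of an expression is
# the ordered pair (inferences to it, inferences from it).
# Inferences are modelled crudely as (premises, conclusion) pairs.
role_of_and = (
    frozenset({ (("p", "q"), "p and q") }),              # to: from p, q infer 'p and q'
    frozenset({ ("p and q", "p"), ("p and q", "q") }),   # from: 'p and q' yields p, yields q
)
inferences_to, inferences_from = role_of_and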

The expectation that one sort of thing could serve all these tasks went hand in hand with what has come to be called the 'Classical View' of concepts, according to which a concept has an 'analysis' consisting of conditions that are individually necessary and jointly sufficient for its satisfaction, and which are known to any competent user of it. The standard example is the especially simple one of [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting case is [knowledge], the analysis of which was traditionally thought to be [justified true belief].

This Classical View seems to offer an illuminating answer to a certain form of metaphysical question - in virtue of what is something the kind of thing it is, i.e., in virtue of what is a bachelor a bachelor? - and it does so in a way that supports counterfactuals: it tells us what would satisfy the concept in situations other than the actual ones (although all actual bachelors might turn out to be freckled, it is possible that there might be unfreckled ones, since the analysis does not exclude that). The view also seems to offer an answer to the epistemological question of how people can know a priori (or independently of experience) about the nature of many things, e.g., that bachelors are unmarried: it is constitutive of the competency (or possession) conditions of a concept that its users know its analysis, at least on reflection.
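
On this view an analysis behaves like an executable definition that supports counterfactual classification. A toy sketch (the predicates and the dictionary encoding are invented for illustration):

# Classical View: a concept is captured by individually necessary and
# jointly sufficient conditions. The predicates here are stand-ins.
def is_male(x): return x.get('male', False)
def is_unmarried(x): return not x.get('married', True)
def is_eligible(x): return x.get('eligible', False)

def is_bachelor(x):
    """[bachelor] = [eligible unmarried male]: each conjunct is
    necessary, and together they are sufficient."""
    return is_male(x) and is_unmarried(x) and is_eligible(x)

# Supports counterfactuals: even if all actual bachelors were freckled,
# the analysis classifies a merely possible unfreckled case correctly.
possible_case = {'male': True, 'married': False, 'eligible': True, 'freckled': False}
assert is_bachelor(possible_case)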

The Classical View, however, has always had to face the difficulty of primitive concepts: it is all well and good to claim that competence consists in some sort of mastery of a definition, but what about the primitive concepts in which a process of definition must ultimately end? Here the British Empiricism of the seventeenth and eighteenth centuries offered a solution: all the primitives are sensory. Indeed, the empiricists expanded the Classical View to include the claim, now often taken uncritically for granted in discussions of that view, that all concepts are 'derived from experience'. 'Every idea is derived from a corresponding impression', in the work of John Locke (1632-1704), George Berkeley (1685-1753) and David Hume (1711-76), was often taken to mean that concepts are somehow composed of introspectible mental items - 'images', 'impressions' - that are ultimately decomposable into basic sensory parts. Thus, Hume analysed the concept of [material object] as involving certain regularities in our sensory experience, and [cause] as involving spatio-temporal contiguity and constant conjunction.

The Irish idealist George Berkeley noticed a problem with this approach that every generation has had to rediscover: if a concept is a sensory impression, like an image, then how does one distinguish the general concept [triangle] from the more particular one - say, [isosceles triangle] - that would have to serve in imagining the general one? More recently, Wittgenstein (1953) called attention to the multiple ambiguity of images. And in any case, images seem quite hopeless for capturing the concepts associated with logical terms (what is the image for negation, or for possibility?). Whatever the role of such representations, full conceptual competency must involve something more.

Conceivably, then, in addition to images and impressions and other sensory items, a full account of concepts needs to consider items of logical structure. This is precisely what the logical positivists did, focussing on logically structured sentences instead of sensations and images, and transforming the empiricist claim into the famous 'Verifiability Theory of Meaning': the meaning of a sentence is the means by which it is confirmed or refuted, ultimately by sensory experience; the meaning or concept associated with a predicate is the means by which people confirm or refute whether something satisfies it.

This once-popular position has come under much attack in philosophy in the last fifty years. In the first place, few, if any, successful 'reductions' of ordinary concepts (like [material object] or [cause]) to purely sensory concepts have ever been achieved. Our concepts of material object and causation seem to go far beyond mere sensory experience, just as our concepts in a highly theoretical science seem to go far beyond the often meagre evidence we can adduce for them.

The American philosopher of mind Jerry Alan Fodor and LePore (1992) have recently argued that the arguments for meaning holism are less than compelling, and that there are important theoretical reasons for holding out for an entirely atomistic account of concepts. On this view, concepts have no 'analyses' whatsoever: they are simply ways in which people are directly related to individual properties in the world, and this relation might obtain for someone for one concept but not for any other: in principle, someone might have the concept [bachelor] and no other concepts at all, much less any 'analysis' of it. Such a view goes hand in hand with Fodor's rejection not only of verificationist but of any empiricist account of concept learning and construction: given the failure of empiricist constructions, Fodor (1975, 1979) notoriously argued that concepts are not constructed or 'derived' from experience at all, but are, nearly enough, all innate.

The debate about whether there are innate ideas is as old as philosophy itself; it goes back at least to Plato (429-347 BC), whose doctrine of 'anamnesis' in the Meno is an answer to the problem that if we do not already understand something, we cannot set about learning it, since we do not know enough to know how to begin. Teachers also come across the problem in the shape of students who cannot understand why their work deserves lower marks than that of others. The worry is echoed in philosophies of language that see the infant as a 'little linguist', having to interpret his environmental surroundings and get a grasp upon the upcoming language. The language of thought hypothesis, especially associated with Fodor, is that mental processing occurs in a language different from one's ordinary native language, but underlying and explaining our competence with it. The idea is a development of the Chomskyan notion of an innate universal grammar. It is a way of drawing the analogy between the workings of the brain or mind and those of the standard computer, since computer programs are linguistically complex sets of instructions whose execution explains the surface behaviour of the computer. As an explanation of ordinary language-learning the hypothesis has not found universal favour: it apparently explains ordinary representational powers only by invoking innate powers of the same sort, and it invites the image of the learning infant translating the language he hears into a language whose own powers are as mysterious a biological given.

René Descartes (1596-1650) and Gottfried Wilhelm Leibniz (1646-1716) defended the view that the mind contains innate ideas; Berkeley, Hume and Locke attacked it. In fact, as we now conceive the great debate between European Rationalism and British Empiricism in the seventeenth and eighteenth centuries, the doctrine of innate ideas is a central bone of contention: rationalists typically claim that knowledge is impossible without a significant stock of general innate concepts or judgements; empiricists argue that all ideas are acquired from experience. This debate is replayed, with more empirical content and with considerably greater conceptual complexity, in contemporary cognitive science, most particularly within the domains of psycholinguistic theory and cognitive developmental theory.

Some philosophers may themselves be cognitive scientists; others concern themselves with the philosophy of cognitive psychology and cognitive science. Since the inauguration of cognitive science these disciplines have attracted much attention from certain philosophers of mind. The attitudes of these philosophers, and their reception by psychologists, vary considerably. Many cognitive psychologists have little interest in philosophical issues. Cognitive scientists are, in general, more receptive.

Fodor, because of his early involvement in sentence-processing research, is taken seriously by many psycholinguists. His modularity thesis is directly relevant to questions about the interplay of different types of knowledge in language understanding. His innateness hypothesis, however, is generally regarded as unhelpful, and his prescription that cognitive psychology is primarily about propositional attitudes is widely ignored. The recent work of the American philosopher of mind Daniel Clement Dennett (1942- ) on consciousness treats a topic that is highly controversial, but his detailed discussion of psychological research findings has enhanced his credibility among psychologists. In general, however, psychologists are happy to get on with their work without philosophers telling them about their 'mistakes'.

Connectionism has provoked a somewhat different reaction among philosophers. Some - mainly those who, for other reasons, were disenchanted with traditional artificial intelligence research - have welcomed this new approach to understanding brain and behaviour. They have used the successes, apparent or otherwise, of connectionist research to bolster their arguments for a particular approach to explaining behaviour. Whether this neuro-philosophy will eventually be widely accepted is a different question. One of its main dangers is succumbing to a form of reductionism that most cognitive scientists, and many philosophers of mind, find incoherent.

One must be careful not to caricature the debate. It is too easy to see it as pitting innatists, who argue that all concepts, or all linguistic knowledge, are innate (and certain remarks of Fodor and of Chomsky lend themselves to this interpretation), against empiricists, who argue that there is no innate cognitive structure to which one need appeal in explaining the acquisition of language or the facts of cognitive development (an extreme reading of the American philosopher Hilary Putnam (1926- )). But this would be a silly and sterile debate indeed. For obviously, something is innate. Brains are innate. And the structure of the brain must constrain the nature of cognitive and linguistic development to some degree. Equally obviously, something is learned, and learned as opposed to merely grown, as limbs or hair grow. For not all of the world's citizens end up speaking English, or knowing the theory of relativity. The interesting questions then all concern exactly what is innate, to what degree it counts as knowledge, and what is learned and to what degree its content and structure are determined by innately specified cognitive structure. And that is plenty to debate about.

The arena in which the innateness debate has been prosecuted with the greatest vigour is that of language acquisition, and it is appropriate to begin there. But the debate extends to the domain of general knowledge and reasoning abilities, through the investigation of the development of object constancy - the disposition to conceive of physical objects as persisting when unobserved, and to reason about their properties and locations when they are not perceptible.

The most prominent exponent of the innateness hypothesis in the domain of language acquisition is Chomsky (1966, 1975). His research and that of his colleagues and students is responsible for developing the influential and powerful framework of transformational grammar that dominates current linguistic and psycholinguistic theory. This body of research has amply demonstrated that the grammar of any human language is a highly systematic, abstract structure, and that there are certain basic structural features shared by the grammars of all human languages, collectively called 'universal grammar'. Variations among the specific grammars of the world's languages can be seen as reflecting different settings of a small number of parameters that can, within the constraints of universal grammar, take any of several different values. All of the principal arguments for the innateness hypothesis in linguistic theory rest on this central insight about grammars. The principal arguments are these: (1) the argument from the existence of linguistic universals; (2) the argument from patterns of grammatical errors in early language learners; (3) the poverty of the stimulus argument; (4) the argument from the ease of first language learning; (5) the argument from the relative independence of language learning and general intelligence; and (6) the argument from the modularity of linguistic processing.
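
The principles-and-parameters picture can be caricatured in a few lines of code. The two parameters below (head direction and pro-drop) are standard textbook illustrations chosen by me, not a statement of the actual theory:

# Principles-and-parameters, caricatured: universal grammar fixes a small
# set of parameters; a particular grammar is a setting of each parameter.
UNIVERSAL_GRAMMAR = {
    'head_direction': {'head-initial', 'head-final'},
    'pro_drop':       {True, False},   # may subjects be omitted?
}

def make_grammar(**settings):
    """A grammar is legitimate only if every setting is licensed by UG."""
    for parameter, value in settings.items():
        assert value in UNIVERSAL_GRAMMAR[parameter], f'UG excludes {value!r}'
    return settings

english  = make_grammar(head_direction='head-initial', pro_drop=False)
japanese = make_grammar(head_direction='head-final',   pro_drop=True)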

Innatists argue (Chomsky 1966, 1975) that the very presence of linguistic universals argues for the innateness of linguistic knowledge, but more important and more compelling is the fact that these universals are, from the standpoint of communicative efficiency, or from the standpoint of any plausible simplicity metric, adventitious. There are many conceivable grammars, and those determined by universal grammar are not ipso facto the most efficient or the simplest. Nonetheless, all human languages satisfy the constraints of universal grammar. Since neither the communicative environment nor the communicative tasks can explain this phenomenon, it is reasonable to suppose that it is explained by the structure of the mind - and therefore by the fact that the principles of universal grammar lie innate in the mind and constrain the languages that a human can acquire.

Hilary Putnam argues, by appeal to common descent, that linguistic universals might merely have been inherited from a common ancestral language by its descendants. Or it might turn out that, despite the lack of direct evidence at present, the features of universal grammar do in fact serve either the goals of communicative efficacy or simplicity according to a metric of psychological importance. Finally, empiricists point out, the very existence of universal grammar might be a trivial logical artefact: for one thing, any finite set of structures will share some features. Since there is a finite number of languages, it follows trivially that there are features they all share. Moreover, it is argued, many features of universal grammar are interdependent, so that, in fact, the set of independent principles shared by the world's languages may be rather small. Hence, even if these are innately determined, the amount of innate knowledge thereby required may be quite small as compared with the total corpus of general linguistic knowledge acquired by the first language learner.

These replies are rendered less plausible, innatists argue, when one considers the fact that the errors language learners make in acquiring their first language seem to be driven far more by abstract features of grammar than by any available input data. So, despite receiving correct examples of irregular plurals or past-tense forms of verbs, and despite having previously formed the irregular forms correctly, children will often incorrectly regularize irregular verbs once they acquire mastery of the rule governing regulars in their language; a toy version of this pattern is sketched below. And in general, not only are the correct inductions of linguistic rules by young language learners consistent with universal grammar, but more importantly, given the absence of confirmatory data and the presence of refuting data, children's erroneous inductions are always consistent with universal grammar, oftentimes simply representing the incorrect setting of a parameter in the grammar. More generally, innatists argue (Chomsky 1966, 1975; Crain, 1991), all grammatical rules that have ever been observed satisfy the structure-dependence constraint. That is, many linguists and psycholinguists argue that all known grammatical rules of all of the world's languages, including the fragmentary languages of young children, must be stated as rules governing hierarchical sentence structure, and not as rules governing, say, the sequence of words. Many of these, such as the constituent-command constraint governing anaphora, are highly abstract indeed, and appear to be respected by even very young children. Such constraints may, innatists argue, be necessary conditions of learning natural language in the absence of specific instruction, modelling and correction, the conditions in which all first language learners acquire their native language.
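
The over-regularization pattern is easy to mimic: once the regular '-ed' rule is mastered, applying it across the board yields exactly the attested errors ('goed', 'breaked'). The tiny lexicon below is invented for illustration:

# Over-regularization, mimicked: a child who has mastered the regular
# past-tense rule may apply it even to verbs whose irregular forms were
# previously produced correctly. Lexicon invented for illustration.
IRREGULAR_PAST = {'go': 'went', 'break': 'broke', 'sing': 'sang'}

def adult_past(verb):
    return IRREGULAR_PAST.get(verb, verb + 'ed')

def overregularizing_child_past(verb):
    # The newly acquired rule is applied across the board.
    return verb + 'ed'

for verb in ['walk', 'go', 'break']:
    print(verb, adult_past(verb), overregularizing_child_past(verb))
# walk  walked  walked
# go    went    goed      <- attested child error
# break broke   breaked   <- attested child error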

An important empiricist reply to these observations derives from recent studies of 'connectionist' models of first language acquisition. Connectionist systems, not previously trained to represent any subset of universal grammar, that induce the grammar of a corpus including a large set of regular forms and a few irregulars, also tend to over-regularize, exhibiting the same U-shaped learning curve seen in human language learners. Such learning systems also acquire 'accidental' rules, on which they are not explicitly trained but which are consistent with those upon which they are trained, suggesting that as children acquire portions of their grammar, they may accidentally 'learn' consistent rules - rules which may be correct in other human languages, but which must then be 'unlearned' in their home language. On the other hand, such 'empiricist' language acquisition systems have yet to demonstrate their ability to induce a sufficiently wide range of the rules hypothesized to be comprised by universal grammar to constitute a definitive empirical argument for the possibility of natural language acquisition in the absence of a powerful set of innate constraints.

The poverty of the stimulus argument has been of enormous influence in innateness debates, though its soundness is hotly contested. Chomsky notes that (1) the examples of the target language to which the language learner is exposed are always jointly compatible with an infinite number of alternative grammars, and so vastly underdetermine the grammar of the language; (2) the corpus always contains many examples of ungrammatical sentences, which should in fact serve as falsifiers of any empirically induced correct grammar of the language; and (3) there is, in general, no explicit reinforcement of correct utterances or correction of incorrect utterances, either by the learner or by those in the immediate training environment. Therefore, he argues, since it is impossible to explain the learning of the correct grammar - a task accomplished by all normal children within a very few years - on the basis of any available data or known learning algorithms, it must be that the grammar is innately specified, and is merely 'triggered' by relevant environmental cues.
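
Point (1) is easy to dramatize: any finite sample is generated by many grammars that then diverge on unseen strings. A toy sketch, with two grammars I have invented (each modelled crudely as a predicate on strings):

# Underdetermination, dramatized: two different 'grammars' agree on a
# finite sample yet diverge on strings the learner has never seen.
sample = ['ab', 'aabb', 'aaabbb']          # the learner's finite corpus

def grammar_1(s):
    # The language a^n b^n: equal runs of a's then b's.
    n = len(s) // 2
    return s != '' and s == 'a' * n + 'b' * n

def grammar_2(s):
    # The language a*b*: any a's followed by any b's, non-empty.
    i = len(s) - len(s.lstrip('a'))        # number of leading a's
    return s != '' and s == 'a' * i + 'b' * (len(s) - i)

assert all(grammar_1(s) and grammar_2(s) for s in sample)  # both fit the corpus
print(grammar_1('aab'), grammar_2('aab'))  # False True: they diverge on unseen data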

Opponents of the linguistic innateness hypothesis, however, point out that this argument proves too much. The American linguist, philosopher and political activist Noam Avram Chomsky (1929- ) believes that the speed with which children master their native language cannot be explained by learning theory, but requires acknowledging an innate disposition of the mind: an unlearned, innate and universal grammar, supplying the kinds of rule that the child will a priori understand to be embodied in the examples of speech with which it is confronted. In computational terms, unless the child came bundled with the right kind of software, it could not catch on to the grammar of its language as it in fact does.

Yet, as is well known from arguments due to the Scottish philosopher David Hume (1978), the Austrian philosopher Ludwig Wittgenstein (1953), the American philosopher Nelson Goodman (1972) and the American logician and philosopher Saul Aaron Kripke (1982), in all cases of empirical abduction, and of training in the use of a word, the data underdetermine the theory. This moral is emphasized by the American philosopher Willard van Orman Quine (1954, 1960) as the principle of the underdetermination of theory by data. But we nonetheless do abduce adequate theories in science, and we do learn the meanings of words. And it would be bizarre to suggest that all correct scientific theories, or the facts of lexical semantics, are innate.

But, innatists reply, when the empiricist relies on the underdetermination of theory by data as a counter-example, a significant disanalogy with language acquisition is ignored: the abduction of scientific theories is a difficult, laborious process, taking a sophisticated theorist a great deal of time and deliberate effort. First language acquisition, by contrast, is accomplished effortlessly and very quickly by a small child. The enormous relative ease with which such a complex and abstract domain is mastered by such a naïve 'theorist' is evidence for the innateness of the knowledge achieved.

Empiricists such as the American philosopher Hilary Putnam (1926- ) have rejoined that innatists underestimate the amount of time that language learning actually takes, focussing only on the number of years from the apparent onset of acquisition to the achievement of relative mastery over the grammar. Instead of noting how short this interval is, they argue, one should count the total number of hours spent listening to language and speaking during this time. That number is in fact quite large, and is comparable to the number of hours of study and practice required for the acquisition of skills that are not argued to derive from innate structures, such as chess playing or musical composition. Hence, when these hours are taken into consideration, language learning looks more like one more case of human skill acquisition than like a special unfolding of innate knowledge.

Innatists, however, note that while the ease with which most such skills are acquired depends on general intelligence, language is learned with roughly equal speed, and to roughly the same level of mastery, by learners across a wide range of general intelligence. In fact, even significantly retarded individuals, absent specific language deficits, acquire their native language on a time-scale and to a degree comparable to that of normally intelligent children. The language acquisition faculty, hence, appears to allow access to a sophisticated body of knowledge independently of the sophistication of the general knowledge of the language learner.

Empiricists reply that this argument ignores the centrality of language in a wide range of human activities, and consequently the enormous attention paid to language acquisition by retarded youngsters and their parents or caretakers. They argue as well that innatists overstate the parity in linguistic competence between retarded children and children of normal intelligence.

Innatists point out that the 'modularity' of language processing is a powerful argument for the innateness of the language faculty. There is a large body of evidence, innatists argue, for the claim that the processes that subserve the acquisition, understanding and production of language are quite distinct from and independent of those that subserve general cognition and learning. That is to say, language learning and language processing mechanisms, and the knowledge they embody, are domain-specific: grammar and grammatical learning and utilization mechanisms are not used outside of language processing. They are informationally encapsulated: only linguistic information is relevant to language acquisition and processing. They are mandatory: language learning and language processing are automatic. Moreover, language is subserved by specific dedicated neural structures, damage to which predictably and systematically impairs linguistic functioning. All of this suggests a specific 'mental organ', to use Chomsky's phrase, that has evolved in the human cognitive system specifically in order to make language possible. This organ simultaneously constrains the range of possible human languages and guides the learning of the child's target language, later making rapid on-line language processing possible. The principles represented in this organ constitute the innate linguistic knowledge of the human being. Additional evidence for the early operation of such an innate language acquisition module is derived from the many infant studies that show that infants selectively attend to soundstreams that are prosodically appropriate, that have pauses at clausal boundaries, and that contain linguistically permissible phonological sequences.

It is fair to ask where we get the powerful inner code whose representational elements need only systematic construction to express, for example, the thought that cyclotrons are bigger than black holes. But on this matter, the language of thought theorist has little to say. All that 'concept learning' could be, according to the language of thought theorist (assuming it is some kind of rational process and not due to mere physical maturation or a bump on the head), is the trying out of combinations of existing representational elements to see if a given combination captures the sense (as evinced in its use) of some new concept. The consequence is that concept learning, conceived as the expansion of our representational resources, simply does not happen. What happens instead is that we work with a fixed, innate repertoire of elements whose combination and construction must express any content we can ever learn to understand.

Representationalism is, by and large, the doctrine that the mind (or sometimes the brain) works on representations of the things and features of things that we perceive or think about. In the philosophy of perception the view is especially associated with the French Cartesian philosopher Nicolas Malebranche (1638-1715) and the English philosopher John Locke (1632-1704), who, holding that the mind is a container for ideas, held that of our real ideas, some are adequate and some are inadequate: the adequate ones perfectly represent those archetypes from which the mind supposes them taken, which it intends them to stand for, and to which it refers them. The problems in this account were mercilessly exposed by the French theologian and philosopher Antoine Arnauld (1612-94) and the French critic of Cartesianism Simon Foucher (1644-96), writing against Malebranche, and by the idealist George Berkeley, writing against Locke. The fundamental problem is that the mind is 'supposing' its ideas to represent something else, but it has no access to this something else except by forming another idea. The difficulty is to understand how the mind ever escapes from the world of representations, or how representations acquire genuine content pointing beyond themselves. In more recent philosophy, the analogy between the mind and a computer has suggested that the mind or brain manipulates signs and symbols, thought of as like the instructions of a machine's program, which represent aspects of the world. The point is sometimes put by saying that the mind, on this theory, becomes a syntactic engine rather than a semantic engine. Representation is also attacked, at least as a central concept in understanding the mind, by the 'pragmatists', who emphasize instead the activities surrounding the use of language, rather than what they see as a mysterious link between mind and world.

Representations, along with mental states, especially beliefs and thoughts, are said to exhibit 'intentionality' in that they refer to or stand for something other than themselves. The nature of this special property, however, has seemed puzzling. Not only is intentionality often assumed to be limited to humans, and possibly a few other species, but the property itself appears to resist characterization in physicalist terms. The problem is most obvious in the case of 'arbitrary' signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes; yet the problem remains even for iconic representation.

Early attempts tried to establish the link between sign and object via the mental states of the sign user. A symbol # stands for ✺ for S if it triggers a ✺-idea in S. On one account, the referent of # is the ✺-idea itself. On the other account, the denotation of # is whatever the ✺-idea denotes. The first account is problematic in that it fails to explain the link between symbols and the world. The second is problematic in that it just shifts the puzzle inward. For example, if the word 'table' triggers the image '‒' or 'TABLE', what gives this mental picture or word any reference at all, let alone the denotation normally associated with the word 'table'?

An alternative to these mentalistic theories has been to adopt a behaviouristic analysis. On this account, that # denotes ✺ for S is explained along the lines of either (1) S is disposed to behave towards # as towards ✺, or (2) S is disposed to behave in ways appropriate to ✺ when presented with #. Both versions prove faulty, in that the very notions of the behaviour associated with or appropriate to ✺ are obscure. In addition, there seem to be no reasonable correlations between behaviour towards signs and behaviour towards their objects that are capable of accounting for the referential relation.

A currently influential attempt to 'naturalize' the representation relation takes its cue from indices. The crucial link between sign and object is established by some causal connection between ✺ and #. It is allowed, nonetheless, that such a causal relation is not sufficient for full-blown intentional representation: an increase in temperature causes the mercury in a thermometer to rise, but the mercury level is not a representation for the thermometer. In order for # to represent ✺ for S, the causal connection between ✺ and # must play a suitable role in the functional economy of S's activity. The notion of 'function', in turn, is to be spelled out along biological or other lines so as to remain within naturalistic constraints. This approach runs into problems in specifying a suitable notion of 'function' and in accounting for the possibility of misrepresentation. Also, it is not obvious how to extend the analysis to encompass the semantical force of more abstract or theoretical symbols. These difficulties are further compounded when one takes into account the social factors that seem to play a role in determining the denotative properties of our symbols.
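
The misrepresentation worry can be made concrete with a toy indicator. Everything here (the detector, its assigned 'function', the spoofing cause) is invented for illustration of the general shape of the problem:

# A toy causal-indicator account and its misrepresentation problem.
# The detector's 'function' (what its tokenings are supposed to indicate)
# and the spoofing cause are invented for illustration.
FUNCTION_OF_DETECTOR = 'fly'     # what a tokening is supposed to indicate

def detector_fires(cause):
    # The causal story: small dark moving things cause a tokening.
    return cause in {'fly', 'thrown_pebble'}

for cause in ['fly', 'thrown_pebble']:
    if detector_fires(cause):
        represents = FUNCTION_OF_DETECTOR              # content fixed by function
        veridical = (cause == FUNCTION_OF_DETECTOR)
        print(cause, '-> tokens', repr(represents), '| veridical:', veridical)
# fly           -> tokens 'fly' | veridical: True
# thrown_pebble -> tokens 'fly' | veridical: False   <- misrepresentation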
