Harold I. Brown

     

Sellars, Concepts and Conceptual Change*

      A major theme of recent philosophy of science has been the rejection of the empiricist thesis that, with the exception of terms which play a purely formal role, the language of science derives its meaning from some, possibly quite indirect, correlation with experience. The alternative that has been proposed is that meaning is internal to each conceptual system, that terms derive their meaning from the role they play in a language, and that something akin to "meaning" flows from conceptual framework to experience. Much contemporary debate on the nature of conceptual change is a direct outgrowth of this holistic view of concepts, and much of the inconclusiveness of that debate derives from the lack of any clear understanding of what a conceptual system is, or of how conceptual systems confer meaning on their terms.

      While this debate has been going on, and for some time before it began, Wilfrid Sellars has been developing an holistic theory of conceptual systems which may provide the framework needed to advance discussion. Sellars is deeply interested in these questions, and has written on them, but he has not developed the detailed case studies that have characterized much recent philosophy of science. At the same time, philosophers who proceed by means of case studies have not made use of Sellars' analysis of conceptual systems. My aim in this paper is to attempt to bridge this gap by first developing Sellars' views on meaning and conceptual frameworks, and then illustrating, all too briefly, how these views can be applied to scientific case studies.

      Sellars uses the terms 'language' and 'conceptual framework' as if they were synonymous, and I will follow that usage in this paper. It will be convenient to approach Sellars' views on meaning in terms of four questions:

      1. What determines the meanings of the terms of a language?

      2. How does one learn a first language?

      3. How does someone who already knows one language learn another language?

      4. How is new language, particularly new theoretical language, introduced?

      For terms with empirical significance, classical empiricism provides a single, unified response to all four questions. On this view, terms get their meanings from associations with sensations, and a child learns a language by having these associations displayed before her. A new language can be learned in essentially the same way as a first language was learned, although the fact that the learner has a language available may facilitate matters. New terms are introduced into a language either as a result of the experience of qualitatively new sensations or, more commonly, through the introduction of terms that stand for combinations of sensations. Terms of the latter sort are a convenience, and are in principle eliminable. It should be noted that terms without empirical significance, particularly the logical constants, do not fit easily into this account; we will return to this matter shortly.

      For Sellars, each of these questions will receive a fundamentally different answer. This will require the rejection of the classical empiricist framework, but Sellars will also argue that the classical approach is not completely mistaken; rather, some aspects of this approach will reappear in Sellars' analysis, although they will play a less dramatic role than was attributed to them by their classical proponents. Let us examine Sellars' attempt to answer each of these questions.

     

1. Meaning

      According to Sellars, the meaning of a term is determined by its "role" in a language, but Sellars' notion of a "linguistic role" involves a number of elements that must be distinguished. To begin with, a major part of the meaning of most terms lies in what Sellars calls their "conceptual status" {1} (SPR, pp. 316-317 et passim), where a term's conceptual status is determined by its "syntax." For many philosophers, talk about a term's syntax immediately evokes the standard distinction between syntax and semantics, along with the idea that a non-logical term may derive part of its meaning from its formal relations to other terms, but certainly requires some correlation with extra-linguistic objects if it is to be genuinely meaningful. This is not what Sellars has in mind. Rather, one of his major goals is to show that this picture is seriously mistaken, an aim that he pursues primarily by attacking the notion of "semantics" that it involves, and much of this section will be devoted to an examination of that attack. There are, however, several matters that must be dealt with first.

      To begin with, Sellars' notion of a term's syntax (i.e., its "conceptual status") is rather different from the standard one. For Sellars

     

the conceptual meaning of a descriptive term is constituted by what can be inferred from it in accordance with the logical and extra-logical rules of inference of the language (conceptual frame) to which it belongs (SPR, p. 317).

      That is, a term's "syntax" is constituted by the set of inferences in which it plays a substantive role, and a major part of this syntax derives from extra-logical, or as Sellars prefers to call them, "material" connections between that term and other terms, where these material connections are mediated by empirical laws, rather than by analytic propositions. For example, on hearing thunder we infer that lightning will occur shortly, and we make this inference quite as readily as the inference from the presence of an uncle to the presence of a male, even though the former inference is mediated by a causal law, and the latter by an analytic proposition. For Sellars, both types of inference enter into the conceptual meaning of a term, and while Sellars accepts a distinction between analytic and synthetic propositions, he rejects the view that this coincides with a distinction between propositions that are determinative of meaning and those that are not. (See, for example, CIL, IM, SPR, pp. 90, 316-317, 330-331.) This is a complex and controversial proposal, but I will not examine it any further in the present paper.

      While this understanding of syntax is central to Sellars' views on meaning, it does not nearly exhaust his notion of a "linguistic role." One additional element enters in when we recognize that not all linguistic roles are conceptual. A term such as 'alas', for example, plays a definite role in our language, and thus has meaning, but it does not have conceptual meaning since it does not license any inferences -- i.e., the sentence, "Alas, today is Thursday" does not permit any inferences that are not permitted by, "Today is Thursday." (Cf. SPR, pp. 114-115.) Sellars' point here is a subtle one. The former sentence does permit inferences about the speaker's state of mind that are not licensed by the latter sentence, but neither sentence is about the speaker's state of mind. Rather, both sentences are about the day of the week, and adding 'alas' does not permit any inferences about the day of the week that cannot be drawn without this addition.

      More importantly, if a language is to be applicable to extra-linguistic reality, some of the terms of the language must be correlated with extra-linguistic objects, and these correlations are part of the meaning of those terms. Thus a necessary condition for sameness of meaning of two terms that refer to extra-linguistic objects is that they are applied to the same class of objects. For example, Sellars maintains that the German word 'rot' can have the same meaning as the English 'red':

     

only if in addition to conforming to syntactical rules paralleling the syntax of 'red', it is applied by Germans to red objects; that is, if it has the same application as 'red'. Thus the 'conceptual status' of a predicate does not exhaust its 'meaning'. (SPR, p. 316. See also SPR, p. 335, SRI, p. 176.)

      Correlations between terms and extra-linguistic objects come, according to Sellars, in two varieties: language entry transitions and language exit transitions. The former occur when we move from some non-linguistic item into a language, e.g., when, on experiencing a particular sensation, we call it 'red'; the latter occur when we move from language to something non-linguistic, e.g., when, having said that I am going to take a walk, I actually begin to walk. I will be much concerned with language entry transitions in the course of this paper, since Sellars holds that empiricist theories of meaning have gone astray by overemphasizing this aspect of meaning. According to Sellars, language entry transitions provide an essential part of the meaning of descriptive terms, but they provide only a part of the meaning of those terms. Language exit transitions play a similarly essential role in the meaning of other terms, while some terms can be fully meaningful, and have full conceptual status, without being involved in either type of transition -- e.g., logical constants, modal terms, and prescriptive terms. For example, the term 'ought' is not itself tied to extra-linguistic reality by either language entry or language exit transitions, although part of the meaning of 'ought' lies in its having a motivating role, i.e., once I accept the sentence "I ought to do A," I tend to make the language exit transition to actually doing A. Empiricists, who, according to Sellars, overemphasized the importance of ties between language and the world, often treated 'ought' as if it had only a motivating role, but no true meaning. (SPR, pp. 350-351.) For Sellars, this is a mistake which derives from a failure to recognize that there are many linguistic roles, that different terms get their meaning in different ways, and that a tie to the non-linguistic world provides only a part of the meaning of some terms.

      I now want to consider language entry transitions in a bit more detail, since this will bring us to the heart of Sellars' critique of traditional semantics. Sellars' view of the nature of these correlations between terms and extra-linguistic objects is very different from that which occurs in traditional empiricism. They cannot, for example, be assimilated to the logical empiricists' "correspondence rules," for these rules correlate terms of a theoretical vocabulary with those of some pre-existing vocabulary, while language entry transitions correlate descriptive terms with non-linguistic items. For the same reason, language entry transitions cannot be identified with C. I. Lewis' "sense meanings," since these are expressible as analytic propositions and are thus internal to a single language. (See, for example, SPR, pp. 113, 333, 357.) And most importantly, language entry transitions are not to be understood as "semantical rules" that correlate terms with extra-linguistic reality. It is in the course of arguing against this last proposal that Sellars develops the major arguments for his own theory of meaning. Sellars offers three arguments that are designed to show that there is something fundamentally wrong with the idea of a semantical rule; I turn now to these arguments. {2}

      A typical example of a semantical rule is given in (1).

      (1) 'Red' means red.

      For empiricists these rules do double duty: they express the meaning of the term in quotes, and they are the linguistic counterpart of the process by which we presumably teach the meaning of a term by associating it with some extra-linguistic item. Thus, in the example, 'red' is mentioned in its first occurrence and used in its second occurrence. In general, semantical rules have the form indicated in (2), where the same sign is to be inserted in both blanks.

      (2) '.....' means _____.

      In its first occurrence this sign indicates the word whose meaning is to be specified or taught; in its second occurrence it indicates the thing itself for which this word is to stand.

      Sellars begins his attempt to show that there is something wrong with this approach to the specification of meaning by turning to the two key areas in which, historically, empiricists have had problems making it work: logical constants and theoretical terms of science. Concentrating, first, on the logical constants, we note that something goes wrong if we try to substitute a word such as 'and' into schema (2), since there is no extra-linguistic object that constitutes the referent of 'and'. Of course, the fact that this schema fails to work in some cases does not, by itself, show that it is inappropriate in all cases, but Sellars will attempt to defend the view that (2) is indeed a universally applicable vehicle for stating the meaning of a term, but that philosophers in the empiricist tradition have been overly fascinated with this rubric, while misunderstanding the kind of context in which it is appropriate. Rather than being used for the introduction of meaning, Sellars argues, (2) is appropriately used to indicate the translation of a word from one language into another language. Thus (3) and (4) are equally appropriate examples of this usage:

      (3) 'Rouge' (in French) means red,

      (4) 'Und' (in German) means and.

      It might seem that, since we are now expressing a correlation between two words, the items on the right hand side of (3) and (4) should be in quotes, but Sellars argues that this is not the case. If we consider these expressions from the point of view of the thesis that the meaning of a term is its role in a language, we would utter (3) or (4) if we were engaged in explaining the meaning of a French or German word to an English speaker. In both cases we are telling the English speaker that the unfamiliar word in quotes plays the same role in its language that the familiar word on the right hand side of the expression plays in English, and in doing so we are inviting the English speaker to rehearse her use of the term on the right hand side if she wishes to understand the meaning of the term on the left hand side (SPR, p. 315). In these expressions, then, 'red' and 'and' are being used, not mentioned, in the sense that they are being exhibited (SPR, p. 163). This is, Sellars acknowledges, a somewhat unusual example of using a word, but an adaptation of one of Sellars' favorite analogies will help clarify his point.

      Suppose I am playing chess in a somewhat unusual format. All of the pieces used by a single player are the same shape, one player perhaps using cubes, the other tetrahedrons, and the different pieces are distinguished by their colors -- kings are purple, queens are royal blue, etc. In addition, we are playing on a 64-square linear tape. Someone who is watching but having difficulty in following the game might, after a particularly bewildering move with a black piece, point to it and ask me what it is. I could respond by reaching into a nearby box of standard chess pieces, and pulling out a knight which I exhibit, thereby indicating that black objects play the same role here as knights play in standard chess. Analogously, according to Sellars, when I utter a specific instantiation of (2) I am verbally displaying a term from a language that the person I am talking to already knows, in order to inform her that this term plays the same role in her language as the new term, whose meaning I am explaining, plays in its language.

      In what sense, if any, does this constitute an argument for the thesis that meaning is determined by linguistic role? At best it constitutes a very modest step by suggesting that, from the linguistic role perspective, a common locution which is troublesome in some cases to the classical empiricist can be given a coherent, uniform interpretation. Moreover, this interpretation can immediately be extended to the second class of problematic terms, the theoretical terms of science. The problem of theoretical terms amounts to the problem of attempting to determine what belongs on the right hand side of (2) when we put a term such as 'molecule' into the left hand side. But if we think of (2) as a translation rubric, then (5) is a straightforward instance of this rubric,

      (5) 'Molekül' (in German) means molecule,

      and we have a uniform interpretation of (2) for all cases. But, again, noting that such a uniform interpretation of (2) is available hardly constitutes an argument for a theory of meaning. Indeed, empiricists have two responses available that are worth considering at this point.

      The first is to note that by shifting to a discussion of translations between existing and already acquired languages, we have avoided a pair of crucial questions that empiricists have always been concerned with: how does an individual acquire a language in the first place, and how do we introduce new terms into an existing language? Sellars would, of course, agree that he has not yet touched these questions, but he would insist that these are in fact different questions than the one we are now considering -- the question of what determines the meanings of the terms of a language. The reasons for this claim will be developed below.

      The empiricist's second response is to agree that schema (2) can certainly function as a translation rubric in the way Sellars suggests, while noting that nothing has been said about whether it also functions as a semantical rule that is used to specify the meaning of at least a wide class of empirically important terms. In order to deal with this objection we must turn to Sellars' second major argument, an argument that is intended to show directly that the classical notion of a semantical rule is incoherent. It will be useful here to meet the empiricist on her own ground and look at the way in which a first language is learned.

      According to the traditional thesis, a child who has not yet mastered any language can learn the meanings of most terms of a language by ostension. Taking a presumably simple case, the adult trainer teaches the child the meaning of 'red' by pointing to red objects and uttering the sound 'red'; after this process has continued for a while, the child develops the ability to utter 'red' under the appropriate circumstances. Up to this point Sellars is in complete agreement with classical empiricists: items in the environment act on our senses and generate sensations in us; and we can be trained to respond to these sensations with a conventional sound -- but so can a parrot, and this points to the key issue on which Sellars breaks from the empiricist tradition. According to that tradition, I have learned the meaning of the term once the above training process has been completed, but for Sellars I have not yet learned a meaning at this point. To see why, we must distinguish between a causal state, even a causal state mediated by a habit, and an epistemic or a cognitive state (e.g., SPR, pp. 90, 131, 133). Understanding the meaning of a term is a cognitive matter, but as far as we have gone in the above example, the new language learner need only be in a causal state when she says 'red' under the appropriate circumstances. The parrot, presumably, is in a causal state at this point, as is the child who has learned to carry out a wide variety of expected behaviors in a pure stimulus-response fashion. For example, children commonly learn to perform on cue such ritual recitations as grace at meals or the Pledge of Allegiance, but the child often has no grasp of what the words being uttered mean, a point that becomes clear when we find the child replacing words that occur in the ritual utterance with homophones or near homophones -- even though the meaning of the homophone does not fit the context of the ritual. The development of habits of saying the right thing in the appropriate circumstances is, Sellars will argue, indeed a necessary step in the process of learning a language, but it is neither identical with nor sufficient for mastering the meaning of a term.

      To see what else is needed we require another distinction that is a close cousin to the distinction between causal and cognitive states: a distinction between acting in accordance with a rule and obeying a rule (SPR, pp. 324-327). The key point here is that obeying a rule makes certain epistemic demands on us that are not made when we merely act in accordance with a rule. In ethics the distinction is quite straightforward. Obeying a moral rule involves understanding the moral situation and recognizing the relevance of the rule in question; acting in accordance with a rule requires no such recognition. It can occur purely by accident, or in total ignorance of the rule, or in the absence of any awareness of the morally relevant features of the situation. In other contexts, action in accordance with a rule can be wholly the result of causal factors -- a falling stone moves in accordance with the rule that its acceleration shall vary inversely as the square of its distance from the center of the earth, but the stone stands in no cognitive relation to this rule, and does not obey it. Similarly, a bee's dance is part of a complex pattern, each move of the dance occurs because of the structure of the pattern, and the bee is thus engaged in what Sellars calls "pattern governed behavior" (SPR, pp. 326-327), but this is still not rule obeying behavior since the bee presumably does not understand the pattern and the role that the various moves play in this pattern.

      Now Sellars maintains that once we recognize the above point, it is clear that we cannot learn a language by learning to obey rules that are instances of schema (2). For such rules in effect tell us that we should, for example, use the word 'red' to refer to red objects, but we cannot obey this rule unless we are already capable of recognizing that the object before us is red. Yet to do this, we must be in a position also to recognize that we are visually observing the object, that the conditions are appropriate for assessing its color, that it is not simultaneously some other color, and much more. In other words, the ability to recognize that the object is red requires that we already have the concept of red (IM, pp. 335-336, SPR, pp. 312-313, 333-334). Thus the view that meanings can be learned by learning to obey semantical rules of form (2) has things backwards: the ability to obey such rules presupposes that we have already learned the meaning of the term in question.

      This leads directly to Sellars' third argument, for the thesis that we learn meaning via semantical rules is only plausible if it is possible for us to master concepts one by one. But, as was suggested above, the mastery of a single concept, even a supposedly simple concept such as red, requires the mastery of many other concepts. To see why, consider a being who would seem to have learned just one term: 'red'. This being always utters 'red' in the presence of red objects, and only in the presence of red objects, but has some strange behaviors with respect to these objects. Given a variety of pegs of different sizes, shapes and colors, and a board with holes that fit the different pegs, this being only picks up the red pegs, says 'red', and then attempts to fit these pegs into all of the holes, apparently oblivious to the sizes and shapes of the pegs and holes. Given another set of objects of various colors, sizes, etc., some of which are edible and some not, this being again selects only red objects, utters 'red', and attempts to eat all of them. On observing this behavior, I think that we would entertain serious doubts about whether this being has mastered the concept of red, but we would then have to consider whether one can master 'red' without having mastered such other concepts as color, size, shape, and maybe even edibility.

      To see just how damaging this last point is to the empiricist cause, consider how an empiricist might respond. "What," the reply will go, "does the concept of shape or the concept of edibility have to do with the concept of red? These are logically distinct concepts, and any one of them can be mastered without any awareness of the others." But as the above examples indicate, this claim will not bear examination, and the crux of the issue is the difference between being in a state in which I exhibit an acquired behavior in the presence of the appropriate stimulus, and the case in which I understand the meaning of a term. The former can occur as a single isolated instance, independent of any other causal response; the latter cannot occur in isolation. Thus, as Sellars continually insists, the grasping of a single concept requires the grasping of an entire body of concepts. The following passage is typical:

     

Now it just won't do to reply that to have the concept of green, to know what it is for something to be green, it is sufficient to respond, when one is in point of fact in standard conditions, to green objects with the vocable 'This is green'. Not only must the conditions be of a sort that is appropriate for determining the color of an object by looking, the subject must know that conditions of this sort are appropriate. And while this does not imply that one must have concepts before one has them [I will return to this point below], it does imply that one can have the concept of green only by having a whole battery of concepts of which it is one element. (SPR pp. 147-148.)

      If we accept these arguments, we must now ask what exactly Sellars has accomplished. It may seem that he has slid between two different questions -- what determines the meanings of the terms of a language, and how does an individual learn the meanings of the terms of her first language? In doing so, Sellars would appear to leave the contemporary empiricist with an easy response, for she can concede Sellars' point about language learning, maintaining that this is not the essential issue, and that it is an historical accident that, in the seventeenth and eighteenth centuries, purely logical questions about the meanings of terms became confused with questions about the process by which language is learned. The latter is a matter for empirical psychology, the former is the only logically and philosophically relevant question, and twentieth century empiricists have been clear on this point.

      I know of no place where Sellars explicitly considers this reply, but I think he could respond by denying that the empiricist tradition can cut itself off from its historical roots quite so easily. Although the thesis that the meaning of descriptive terms is to be found in extra-linguistic reality has often seemed extremely plausible, there is nothing about it that makes it logically compelling, and the demand that we attempt to account for all descriptive meaning in this way requires some justification. Classical empiricists, who did not always adhere to contemporary views of the relation between logic and psychology, attempted to justify the hypothesis that meaning flows from sensation to language on the basis of a theory of how language is learned. Contemporary philosophers who wish to purge the theory of meaning of such psychological considerations can, perhaps, maintain that this hypothesis is to be justified by its fruitfulness, yet its failures with respect to the analysis of logical constants and theoretical terms of science are well known, and we have now seen that it also faces substantial problems with respect to the analysis of language learning, the point at which it once seemed strongest. Thus, while Sellars is indeed sliding between two distinct issues in the course of the above argument, this sliding is legitimate because he is building a dialectical argument against the classical empiricist, who cannot successfully separate these issues. The resulting argument is not a knock-down-drag-out refutation, but such refutations are extremely rare in philosophy. We have, however, been provided with sufficient grounds for stepping outside of the empiricist framework and considering what we can do with an alternative theory of meaning; most of the remainder of this paper can be seen as an exploration of what Sellars is able to accomplish on the basis of an holistic alternative.

     

2. Learning a First Language

      Sellars' holistic theory of meaning would seem to present us with a major problem, for it appears to require that we must learn a substantial body of language before we could learn any of its parts, and this, in turn, would seem to make language learning logically impossible. One virtue of the empiricist approach to meaning is that it makes sense out of the idea that a language can be learned piecemeal. This problem is exacerbated if we hold that in using a language meaningfully we must obey a set of linguistic rules. For, as Sellars notes (SPR pp. 312-322), these rules would be formulated in a metalanguage that would, presumably, have to be learned before we could learn the object language, and this will generate an infinite regress. Sellars must address these questions if his approach is to have any plausibility. Now the development of a theory of how language is in fact learned is a subject for empirical research; thus my only concern here will be to respond to the objection that holistic theories of meaning of the Sellarsian type are logically incompatible with the fact that people do learn languages.

      It was noted above that the crucial difference between understanding a language and merely uttering words in response to stimuli amounts to the difference between obeying rules and acting in accordance with those rules. But this does not entail that the process of learning to act in accordance with linguistic rules is irrelevant to the process of language learning. Rather, there is, according to Sellars, a genuine insight in the old empiricist tale. Language learning, Sellars maintains, involves two logically distinct stages. The first stage requires building up an appropriate set of habits, habits that, according to Sellars, come in two varieties. One type of habit requires learning to connect words with things in the appropriate manner, i.e., it involves the development of language entry and language exit transitions; the second type of habit involves learning to make such intra-linguistic moves as that from "thunder now" to "lightning soon" (SPR, pp. 313, 333). Once these habits are in place, we are ready for the second stage, which involves the transition from acting in accordance with these habits to being able to obey linguistic rules -- it is only this second step that occurs in a single leap.

      Once this second step has been taken, the two types of linguistic habits that were distinguished above become the basis for two types of linguistic abilities. The first set of habits, which Sellars compares to "that part of the wiring of a calculating machine which permits the 'punching in' of information" (SPR, p. 313), provides the basis for the ability to observe in the full sense of the term. This includes, among other things, the capability of recognizing that conditions are indeed standard, and thus the ability to move from an awareness of red to the conclusion that there is probably a red object in my environment; or to recognize that conditions are non-standard, and that my awareness of red indicates that there is probably an orange object reflecting light to me. Sellars compares the second set of habits to "that part of the wiring of a calculating machine which takes over once the 'problem' has been punched in" (SPR, p. 313, see also p. 333). These habits provide the basis for the ability genuinely to infer, where "an intra-linguistic move is not in the full sense an inference unless the subject not only conforms to but obeys syntactic rules. . . ." (SPR, p. 334, see also SPR p. 313 n. 4, and p. 327.) Again, a parallel from ethics will help clarify the relation between these two stages. Thinking along roughly Aristotelian lines, we can divide the moral training of an individual into an initial stage in which we seek to instill the correct habits of behavior, and a later stage at which this individual comes to understand why this sort of behavior is morally desirable; it is only at the later stage that we have a morally conscious person.

      This two stage process does much to eliminate the air of mystery that so often surrounds the suggestion that a language must be grasped holistically. The individual does not make a single mysterious transition from grasping nothing to grasping everything. Rather, the foundations for grasping a language are indeed built up piecemeal -- but at the level of acquisition of verbal habits -- and it is only after a substantial body of such habits has been developed that we come to understand the entire body of language involved in a single step. The problem of an infinite regress of metalanguages can now be addressed. For the task of learning to obey a set of rules is exactly the task of grasping the metalanguage, and Sellars' point is that rather than having to learn the metalanguage before we learn the object language, we first learn the object language at the level of pattern governed behavior, and the holistic step is the grasping of the metalanguage.

      The above is by no means a complete or unproblematic view of language learning; my only concern has been to develop Sellars' view of first language learning enough to undercut the objection that his theory of meaning makes language learning impossible. The most pressing problems arise, I think, when we ask how much of our language we are supposed to grasp in a single holistic leap. Do we, for example, have distinct sets of habits that become language in the full sense as a result of distinct holistic leaps, or do we make one such leap involving all current habits, after which the present analysis is no longer relevant, since we are not languageless individuals any more? If we learn distinct frameworks, how do they come to be related to each other? Sellars often writes as if we wield a number of distinct, but related languages -- observation language, modal language, etc. -- and I will attempt to develop this view further in section 3. Sellars also seems to think that the process of grasping a language does not occur just once. Rather, it first occurs on some minimal level, which is then subject to enrichment. Thus he writes that "the language of observation is learned as a whole; we do not have any of it until, crudely and schematically, perhaps, we have it all," (SPR, p. 339, italics mine) and elsewhere, "having the concept '' is a matter of degree, ranging from having a rudimentary knowhow to having a very subtle knowhow with respect to '.'" (1974b, p. 125.) Unfortunately, Sellars does not develop this idea in any detail. Some insight can, however, be gained into these matters by considering the learning of further languages once an initial language has been learned. This is another question that Sellars does not consider in detail, and in the next section I will attempt to develop a Sellarsian view on this issue. Again, I do not propose to preempt the empirical study of language learning. Rather, I will be concerned only to show that Sellars' approach avoids some difficulties that have been raised against other holistic theories of meaning.

     

3. Learning Further Languages

      Consider the situation of an English speaker who is learning German, and let us focus first on the truth-functional logical connective 'und'. This is an example of a term that can be learned rather easily: I can be given a direct translation of the German 'und' into the English 'and' because these two words play the same role in their respective languages. Such directly translatable terms need not be restricted to logical constants. There is no reason why 'rot' cannot play the same role in German -- i.e., pick out the same set of objects and license the same set of inferences -- as does 'red' in English; and so on for other terms. This need not occur for all terms in the two languages, but I want to concentrate first on cases where it does occur.

      The first point that concerns me is that when we compare two languages L1 and L2, it is possible that we will find some terms in the two languages that do directly translate even if there are others that do not. The truth-functional logical constants provide the clearest cases, and it will be useful to consider them in the context of an extreme example. Suppose that there is no direct translation at all of any substantive terms between L1 and L2, i.e., the two languages embody different images of the world, and categorize the objects that one encounters in quite different ways. Even in this case, the two languages could share an identical set of logical constants. Thus, although no substantive sentence -- empirical, moral, religious, metaphysical, etc. -- of L1 can be straightforwardly and literally translated into a sentence of L2, each language may include a symbol, which we can represent as X, that is used in accordance with the following rule: whenever sentences S1 and S2 are assertable, then a more complex sentence consisting of the concatenation of S1, S2 and X in some specified order is also assertable. {3}
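
      The role that such a symbol X plays can be captured entirely by inference rules that mention only schematic sentence letters; the following is a standard rendering of the conjunction rules, added here for illustration rather than drawn from Sellars' text:

\[
\frac{S_1 \qquad S_2}{S_1 \, X \, S_2} \ (X\text{-introduction}) \qquad\qquad \frac{S_1 \, X \, S_2}{S_1} \quad \frac{S_1 \, X \, S_2}{S_2} \ (X\text{-elimination})
\]

      Because these rules make no reference to the substantive vocabulary of either language, two languages that differ in every descriptive term can nonetheless contain symbols governed by exactly the same rules and hence, on the role theory of meaning, expressing the same logical concept.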

      Now the idea that two languages which embody different conceptual structures can, nevertheless, have some terms in common, is especially important because it gives us a handle on a problem that has long plagued holistic theories of meaning. If, it is argued, the meaning of a term is determined by its relations to the other terms in the language, then it would seem that any change, at any point in this structure, would have the effect of redefining all of the terms in our language, and this does not seem plausible. C. I. Lewis, for example, is particularly vulnerable to this criticism. Lewis maintains that the terms of a language are defined by sets of analytic propositions that map out their relations to all of the other terms of the language (linguistic meaning), and that provide rules for picking out sensory givens that conform to these concepts (sense meaning). (See Lewis 1946, ch. 6.){4} Because Lewis views a language as a monolithic formal structure in which, whether directly or indirectly, every term is linked to every other term by analytic propositions, any alteration of any connection in the language will have some ripple effect on all other terms of the language. Indeed, given Lewis' preferred way of describing such matters, we would have to say that any alteration of any of these analytic connections amounts to the rejection of the current language and its replacement by a new one.

      On Sellars' approach, however, we do not have to describe the process by which languages are developed and changed in such extreme and unilluminating terms. One reason for this is already clear: given that it is possible for some terms to have the same meaning (i.e., play the same linguistic role) in two substantially different languages, then alterations of the meanings of some terms of a language need not entail alterations in the meanings of all terms of that language. Let us explore this suggestion in the context of an example that is more interesting than that of the logical constants.

      Consider an individual who, having learned one scientific conceptual system, say classificatory biology, undertakes to learn a different science, e.g., classical mechanics; let us focus on a concept that occurs in both frameworks such as time. An holistic theory of meaning of the sort that Lewis defends would require us to maintain that this person begins her study of classical mechanics with one concept of time, and ends up with a very different concept when her study of this new field has been completed, since the addition of classical physics to her body of knowledge will have the effect of establishing many new connections between 'time' and other concepts that do not occur in the framework of biology. But it seems more reasonable to suggest that the same concept of time occurs in both frameworks, i.e., that the notion of time plays the same role in both frameworks, and that rather than having to learn a new concept of time when learning physics, our scientist is able to transfer this concept directly from the framework of biology to that of classical physics -- and this may simplify the learning process. In other cases something a bit more complex may occur. For example, a purely mathematical framework such as linear algebra or differential equations will be transportable in toto into various different scientific fields. To be sure, the terms of the framework may come to play different roles in different fields, and thus take on new meanings, but this need not have a feedback effect on the meanings of these mathematical terms in other frameworks which make use of the same mathematics.

      Sellars' approach provides us with a way of making sense of these suggestions in the context of an holistic view of meaning. For, in Sellarsian terms, a language is not a single formal structure. Rather, we can view an individual as wielding a number of different languages that are essentially independent of each other, although each of these languages may share some common terms with other languages. A change in the role of a term in one of these languages will have a ripple effect on the meanings of the other terms in that language, but the effect will be relatively isolated. If I have learned both classical physics and biology, and my physical concept of time undergoes a transformation, my original biological concept of time can remain intact. Indeed, there is no incoherence at all in my maintaining, for example, the framework of classical physics and of evolutionary biology, which share a time concept, and the framework of relativity physics, with a different time concept. {5}

      Thus we can view the individual who learns a second (or third, etc.) science as learning a new language, a language which involves new concepts, but which also includes some familiar old ones, and which does not necessarily merge with the older languages into a single, monolithic conceptual structure. Indeed, the notion of learning a second language is particularly apt here. A bilingual individual need not have two concepts of conjunction or two concepts of time, nor need these concepts undergo systematic redefinition as the new language is learned -- even if the way of conceptualizing some aspects of reality that is embodied in the new language varies significantly from that in the original language. Nor need this individual be the possessor of a single massive language, say, English-cum-German. Rather, the multilingual individual possesses several distinct languages. She is able to do some of the same things in several languages, and some different things in different languages, and the fact that different languages share some terms should facilitate the process of learning new languages.

      Moreover, even within a single language, a change in the meaning of one term need not affect the meanings of every other term. Recall that, for Sellars, the conceptual meaning of a term is a function of the inferences it licenses, and there is no reason to insist that a change in the inferential role of 'time' must entail a change in the inferential role of 'and'. In other words, even on an holistic view, the pieces of a language need not be taken to be as tightly interlocked as philosophers such as Lewis thought.

      Thus far I have been considering the case of languages that share identical concepts. There is a second case that is central to Sellars' philosophy and that is considerably more common: terms whose roles in two languages are similar, but not identical. Cases of this sort are particularly common in science, and one of the major virtues of Sellars' theory of meaning is that it provides a way of understanding the notion of terms that have similar meanings; this, in turn, will provide us with a powerful tool for the analysis of conceptual change. In order to develop these ideas, we must consider Sellars' views on the role of analogy in the introduction of new scientific frameworks.

     

4. Analogy

      Sellars' most detailed discussion of analogy in science is developed in the context of a critique of Mary Hesse's analysis of the role of analogy in the introduction of theoretical terms. According to Hesse, new theoretical entities are always introduced into science on the basis of an analogy with familiar objects that provide a model on which the new entities are conceived. These new entities are not identical with those of the model. Rather, according to Hesse, the properties that characterize the model fall into one of three classes: the "positive analogy," which consists of those features of the model that the new entities share, and that provide the basis for the analogy; the "negative analogy," i.e., those features of the model which the new entities do not share; and the "neutral analogy," comprising those features of the model whose possession by the new entities is currently an open question (Hesse 1966, p. 8). The neutral analogy is especially important for Hesse since she argues that it is the attempt to move the properties in this class to either the positive or negative analogy that provides the driving force for new discoveries. For example, at one stage in the development of the concept, molecules were conceived on the analogy of small, hard particles such as billiard balls. Mass was part of the positive analogy, color was part of the negative analogy, and being composed of smaller particles was part of the neutral analogy. On this account we can, if we are clever enough, find analogies between any objects whatsoever, and this is as it should be. 'Analogy' is a pragmatic notion, and the analogies we draw are a function of what we are attempting to accomplish in a specific case.

      Thus far Sellars and Hesse are in agreement. In particular, the difference between analogy and identity is captured in Sellars' thesis that an analogy is always accompanied by a commentary "which qualifies or limits -- but not precisely nor in all respects -- the analogy between the familiar objects and the entities which are being introduced by the theory." (SPR, p. 182. See also SRI, p. 182.) However, a Sellarsian commentary will involve considerably more complexity than can be captured in Hesse's distinction between the positive, negative and neutral analogies, since analogies, on Sellars' view, may be rather more complex than on Hesse's view. To see why, we must consider two points on which Sellars and Hesse disagree.

      The first is that while, for Hesse, the meanings of new terms are determined by the model, for Sellars meaning is determined by linguistic role. Thus it is possible, in principle, for the meanings of the terms of a scientific theory to be "fully captured by the working of a logistically contrived deductive system" (SRI, p. 179), although in practice this does not occur. This is because we human beings are not capable of introducing and learning to deploy complex new concepts solely from an explicit specification of their linguistic roles. Our actual dependence on models is so great that while, for Sellars, models are ideally only an heuristic device, they do not function as a mere heuristic device, but rather enter into the logic of the terms that they are used to introduce. Sellars presents his position as a middle way between Hesse's view that the meanings of theoretical terms are wholly determined by analogy, and Nagel's view (which Hesse is particularly concerned to reject) that models are a pure heuristic device that makes no contribution to the meanings of the terms they help us understand. Sellars describes Nagel's view as "at best a half-truth," which is 50% more truth than Hesse would concede it, and goes on to explain:

     

It is a half-truth because theoretical postulates are often specified in a way which logically involves the use of the model. And even when a set of postulates is explicitly given in the form prescribed by contemporary logical theory, it turns out, in actual practice, (although ideally it need not) that the conceptual texture of theoretical terms in scientific use is far richer and more finely grained than the texture generated by the explicitly listed postulates. (SRI, pp.178-9.)

      The sense in which a model may be logically involved in the specification of a term is best explained by looking very briefly at one of Sellars' own major applications of his views on the analogical introduction of predicates: his thesis that the conceptual framework of sensations is introduced by analogy with the conceptual framework of physical objects (see SPR, SM, SK, for discussion of this thesis). One step in the introduction of this framework is the specification of, for example, the sensation of red as that sensation which occurs when we are seeing red physical objects under normal circumstances, and which is related to other predicates in a way that is analogous to the way physical redness is related to other properties of physical objects. The claim that sensations of red are just those sensations that occur when we see red physical objects under normal circumstances is neither a definition of 'sensation of red', nor a part of such a definition: sensation talk and physical object talk belong to different frameworks and are not mutually interdefinable. Still, this claim is a necessary truth because, given the role that it plays in the introduction of the term in question, it is inconceivable that any sensation other than the sensation of red occurs when we perceive a red physical object in standard conditions. {6}

      The second point of disagreement between Hesse and Sellars derives from Hesse's claim that the similarities that provide the basis for an analogy are analyzable in terms of identities and differences between properties of the objects in question, and indeed, that such an analysis is necessary to avoid an infinite regress. After discussing some examples Hesse writes:

     

These examples suggest that when similarities are recognized they are described in some such way as, "Both analogues have property B, but whereas the first has property A, the second has instead property C." It may be that when the nature of the similarity is pressed, it will be admitted that the analogues do not have the identical property B, but two similar properties, say B and B', in which case the analysis of the similarity of B and B' repeats the same pattern. But if we suppose that at some point this analysis stops, with the open or tacit assumption that further consideration of difference between otherwise identical properties can be ignored, we have an analysis of similarity into relations of identity and difference. (Hesse, 1966, pp. 70-71.)

      Hesse goes on to maintain that such analysis is a necessary condition for the construction of analogical arguments, but while the evaluation of such arguments is among her main concerns, we will not pursue that issue here.

      In response to Hesse, Sellars argues that the attempt to reduce analogies to identities and differences between first-order properties amounts to the claim that any new entities that are introduced by analogy are, in principle, wholly understandable in terms of language that is already familiar to us, and that this view prevents us from appreciating "how the use of models in theoretical explanation can generate genuinely new conceptual frameworks and justify the claim to have escaped from the myth of the given" (SRI, pp. 183-184). Languages, for Sellars, are produced by human beings in the course of our attempts to understand and deal with the world around us, and the development of new knowledge, particularly the development of science, involves the introduction of new ideas, and requires the invention of new language that is not wholly specifiable in the vocabulary that we already have available. This, in turn, Sellars maintains, requires that analogies be based only on similarities, not on identities, at the level of first-order properties, but, Sellars suggests, these similarities can be analyzed in terms of identities of second-order properties. For example, we can draw an illuminating analogy between points on a line and moments of time without maintaining that space and time share any first-order properties; it is sufficient for purposes of the analogy that they share such second-order properties as transitivity and asymmetry (SRI, p. 180). Similarly, we can think of sensations as analogous to properties of physical objects on the basis of second-order properties (e.g., color sensations are necessarily accompanied by extension sensations), without thinking of sensations as colored, extended, etc.
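
      The point about space and time can be made explicit. Let R be a relation variable that may be read either as 'is to the left of' (holding between points on a line) or as 'is earlier than' (holding between moments of time); the shared second-order properties are then, schematically:

\[
\forall x \forall y \forall z \, [(Rxy \wedge Ryz) \rightarrow Rxz] \quad \text{(transitivity)} \qquad\qquad \forall x \forall y \, [Rxy \rightarrow \neg Ryx] \quad \text{(asymmetry)}
\]

      Nothing in these formulas requires that points and moments share any first-order property; what is common to the two domains is only the abstract form of the ordering.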

      In this discussion Sellars seems to be arguing that all analogies that are used to introduce theoretical entities are based on identities of second-order properties, and one commentator, Gary Gutting, has undertaken to defend this view in the context of the kinetic theory of gases.

     

In the case of kinetic theory and its billiard-ball model, we should not identify molecular mass, position, and velocity with billiard-ball mass, position, and velocity; these predicates should be regarded as having different meanings. However, the similarity enters in because molecular and billiard-ball predicates have second-order predicates in common (e.g., in both cases mass is an intrinsic, nonrelational property, velocity is expressible as the first derivative of position, position is a continuous variable, etc.). (Gutting, 1977, p. 85.)
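
      The second-order predicates Gutting lists can be written out; in the following schematic rendering (mine, not Gutting's), the same formal relations hold whether the variables are read as describing a billiard ball or a molecule:

\[
\mathbf{v}(t) = \frac{d\mathbf{r}(t)}{dt}, \qquad t \mapsto \mathbf{r}(t) \ \text{a continuous function,}
\]

together with the claim that the mass m is an intrinsic magnitude of the body that bears it, not a relation to other bodies.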

      There are, however, two objections to this claim. First, it is not at all clear that, for example, 'molecular mass' and 'billiard-ball mass' are different predicates. For these to be different predicates they would have to differ in at least some of their second-order properties, but Gutting's list can be extended to include all of these properties. Thus it seems that 'molecular mass' and 'billiard-ball mass' are the same concept, and that we are dealing here with an example of a concept that was transferred from the macroscopic to the microscopic framework. Of course 'molecule' and 'billiard ball' are still different concepts, and part of this difference is captured in the fact that some properties of billiard balls -- e.g., color -- are not properties of molecules. But this does not prevent predicates that apply to both from being the same predicate in each case.

      Second, as Hesse points out in a response to Sellars (Hesse, 1970, pp. 177-178), the objection to the reduction of analogy to identity of first-order properties would seem to apply to higher order properties as well, for it still permits the complete explication of newly introduced concepts in terms of concepts in an existing conceptual framework.

      We can extricate ourselves from this difficulty by taking seriously the claim that ultimately meaning is determined by linguistic role. Thus the necessary condition for a concept's being new is that it play a role in its framework that is not identical with the role of any term in any preexisting framework. Models, on this view, play an heuristic role by helping those of us who have already mastered one framework achieve an "intuitive grasp" (SRI, p. 182) of a new framework. The new entity cannot share all of the properties of its model if it is to be a new kind of entity, but it must share some properties with its model if the model is to be of any use. Sellars' point that these need not be first-order properties is an important step forward in understanding the ways that models function, but the identities between the newly introduced entity and its model can occur on a number of different levels of discourse. It is the job of the commentary to indicate where these identities occur.

      Note that analogy has two functions here. First, it provides a means by which we can think new thoughts and thus invent new concepts and new languages. Thus Sellars' discussion of analogy provides the key to his answer to question number four. For while the meanings of the terms of a language are a function of their roles in that language, new terms with new linguistic roles can be introduced on the basis of analogies with existing language. Second, once a new language has been invented, analogies provide a means of explaining these new concepts to others.

      The above analysis of the relation between linguistic role and analogy is not only attractive in its own right, but is also consistent with Sellars' attack on the "Myth of the Given," for one of the aims of that attack is the rejection of the thesis that there exists an intrinsically basic language. If there is no such language, then any language can function as a first language; and since no analogies can be drawn until some language is available, the meanings of the terms of any linguistic framework must, in principle, be specifiable without benefit of analogies. {7}

      Equipped with the idea that the gap between two languages can be bridged by analogies, we can now return to the problems involved in learning those portions of a new language that cannot be mapped identically onto any language that we already know. Whether this is in fact applicable to the learning of natural languages is a question that cannot be considered here. It is, however, definitely applicable to the case in which a person who is already scientifically literate is learning a new scientific framework. This can be illustrated by considering how we might explain the framework of elementary quantum theory to such an individual.

     

5. Elementary Quantum Theory

      Let me begin with a rapid summary of the structure of this theory. Observables are represented by operators in Hilbert space, and the solution of the eigenvalue equation for an operator provides a complete set of basis vectors for describing the various states of the system. Taking energy as an example, and restricting ourselves to the nondegenerate case, the eigenvectors represent stationary states of the system, and the eigenvalue associated with each eigenvector is the energy of the system in that state. A system will generally not be in a stationary state; rather, a typical state of the system will be a linear combination of eigenstates, and the squared magnitude of each coefficient of this linear combination gives the probability that the system will be found to be in that state when its energy is measured.
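      This formal structure can be given a minimal numerical rendering (the two-level "energy operator" below is an arbitrary Hermitian matrix chosen purely for illustration, not a model of any particular physical system):

```python
import numpy as np

# A toy "energy" observable: any Hermitian matrix will do for the sketch.
H = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Solving the eigenvalue equation H v = E v gives the stationary states
# (the eigenvectors, returned as columns) and their energies (the eigenvalues).
energies, eigenvectors = np.linalg.eigh(H)

# A typical state is a linear combination of the eigenstates; the coefficients
# here are arbitrary but chosen so that the state is normalized.
psi = np.array([0.6, 0.8j])

# The coefficients of that combination are the scalar products with the basis,
# and their squared magnitudes are the probabilities of finding each energy.
coefficients = eigenvectors.conj().T @ psi
probabilities = np.abs(coefficients) ** 2
print(energies, probabilities, probabilities.sum())   # the probabilities sum to 1
```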

      This brief formal tale can be made considerably more intelligible in terms of a set of convenient analogies, and the next few paragraphs constitute an attempt to write a Sellarsian commentary for this example.

      The idea that the eigenvectors of an operator provide a set of basis vectors for the state space is an analogical use of the familiar image of Cartesian coordinates in physical space, which serves as the model. {8} In this model, we describe three dimensional physical space in terms of a set of three mutually perpendicular axes and a unit vector along each axis. These unit vectors, which function as the basis vectors, are linearly independent, and a vector from the origin to a typical point in space can be written as a linear combination of the three unit vectors. Intuitively, the linear independence of the basis vectors is captured in their being perpendicular to each other, and we find the coordinate of a point with respect to a particular axis by dropping the perpendicular from that point to the axis in question. Formally, this independence can be expressed by introducing the notion of a scalar product of a pair of vectors, i.e., a function which maps a pair of vectors to a scalar. Although many functions are available, one turns out to be particularly useful: the product of the lengths of the vectors and the cosine of the angle between them. In our intuitive picture, the scalar product of a vector with a unit vector gives the length of the projection of the former on the latter, and in particular, the scalar product of a vector V with a basis vector gives the coefficient to be used in expressing V as a linear combination of the basis vectors. Note also that the scalar product of a pair of perpendicular vectors will be zero.
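      A compact numerical rendering of this classical model may be helpful (the particular vector is an arbitrary choice; the point is only that scalar products with the unit basis vectors recover the coefficients of the linear combination):

```python
import numpy as np

# The three mutually perpendicular unit vectors of ordinary Cartesian space.
e1, e2, e3 = np.eye(3)

# A vector from the origin to an arbitrarily chosen point.
v = np.array([2.0, -1.0, 4.0])

# The scalar product with each basis vector gives the coefficient for that axis.
coefficients = [np.dot(v, e) for e in (e1, e2, e3)]

# Those coefficients reconstruct v as a linear combination of the basis vectors.
v_again = coefficients[0] * e1 + coefficients[1] * e2 + coefficients[2] * e3
print(coefficients, np.allclose(v, v_again))   # [2.0, -1.0, 4.0] True

# Perpendicular vectors have a scalar product of zero.
print(np.dot(e1, e2))                          # 0.0
```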

      Now this formal structure, along with its associated image, can be taken over into quantum theory as long as we are clear on the limits of the analogy. For example, the energy eigenvectors can function as basis vectors for a "state space," and any state of a system can be thought of as a vector in this "space," in a way that is closely analogous to the more familiar case of vectors in physical space. Several steps are needed to make the analogy viable. First, we need a scalar product, i.e., an appropriate mapping of any pair of vectors to a scalar. The appropriate mapping turns out to be the integral of the product of one of the vectors with the complex conjugate of the other. The need for a complex conjugate derives from the fact that quantum mechanical state functions are complex, while we want the scalar product of a vector with itself to be a real number. Nothing quite like this occurs in computing a classical scalar product, and this is one point at which it is clear that we are dealing with an analogy. {9}

      Given a scalar product, we can now introduce the idea of orthogonality, again by analogy with the classical case: classically, orthogonal vectors yield a scalar product of zero, so we can take any pair of vectors with a zero scalar product to be orthogonal. Quantum mechanical eigenvectors turn out to be orthogonal given the above definition of the scalar product.

      Next we need to generate quantum mechanical basis vectors of unit length. Returning to the model, we note that the scalar product of a vector with itself is equal to the square of its length. Drawing on this idea, we can define the "length" of a quantum mechanical vector as the square root of the scalar product of that vector with itself; dividing each eigenvector by its "length" yields the desired orthogonal unit vectors. We can now express each state vector as a linear combination of the basis vectors, where the coefficients of the linear combination are, as in the model, the scalar products of the state vector in question with the respective basis vectors -- but unlike the model, these coefficients may be complex. Also, unlike the model, these coefficients do not give a "length" along the basis vector; rather, the product of a coefficient with its complex conjugate gives the probability that the system will be found in that eigenstate when a measurement is made. End of commentary.
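      The commentary can itself be given a rough numerical rendering (the sine functions below are illustrative stand-ins for eigenfunctions, the expansion coefficients are arbitrary complex numbers, and the integrals are approximated on a grid):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)   # a grid on which to approximate the integrals

def scalar_product(f, g):
    # The quantum analogue: the integral of f times the complex conjugate of g.
    return np.trapz(f * np.conj(g), x)

# Two illustrative stand-ins for eigenfunctions (sine waves on the interval [0, 1]).
phi1 = np.sin(np.pi * x)
phi2 = np.sin(2 * np.pi * x)

# Orthogonality: their scalar product is (numerically) zero.
print(np.round(scalar_product(phi1, phi2), 6))

# "Length" is the square root of the scalar product of a vector with itself;
# dividing by it yields unit basis vectors.
phi1 = phi1 / np.sqrt(scalar_product(phi1, phi1))
phi2 = phi2 / np.sqrt(scalar_product(phi2, phi2))

# A state with (arbitrary) complex expansion coefficients.
psi = 0.6 * phi1 + 0.8j * phi2

# Each coefficient is recovered as a scalar product with a basis function, and the
# product of a coefficient with its complex conjugate gives the probability.
for phi in (phi1, phi2):
    c = scalar_product(psi, phi)
    print(np.round(c, 3), np.round((c * np.conj(c)).real, 3))   # 0.36 and 0.64
```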

      While these analogies play an extremely useful role in providing an intuitive sense of what is going on in quantum theory, the fact that they are analogies, with very substantial disanalogies, underlines the point that they do not provide the full meaning of the quantum mechanical concepts we are dealing with. A further look at eigenvectors will help to clarify the reasons why it makes sense to suggest that the full meaning of this notion in quantum mechanics is determined by the role that eigenvectors play in this framework. I am going to divide this discussion into two parts: first I will look at the Sellarsian syntax of 'eigenvector', then I will examine the way this notion is tied to extra-linguistic reality.

      Operators and their eigenvalue equations are a familiar mathematical structure that was developed independently of quantum mechanics, and which has applications in a number of fields of pure and applied mathematics. Students often study this structure in mathematics courses before they encounter its use in quantum theory, and the mathematics of eigenvalue equations can be considered an example of a pre-existing conceptual structure that is taken over into quantum theory in toto, but which then comes to play a role that it plays nowhere else, a role that is a function of the particular mathematical and observational features of quantum theory. This includes, among other items, the role of eigenvectors in the description of state functions, the existence of alternative sets of basis functions provided by the eigenfunctions of noncommuting operators, the way that state functions change with time, as described by the Schrödinger equation, and the role of state functions in the projection postulate. Note particularly that while the relevant mathematics, as well as concepts from classical physics such as the Hamiltonian, provide a bridge into quantum theory, the mere juxtaposition of these pieces does not constitute the conceptual framework of quantum theory. The syntax of this framework is determined by the particular way these pieces are related to each other and, as a result of entering into this new system of relations, each of these pieces becomes involved in new kinds of inferences, and thus takes on new meaning.

      State functions are connected with the world in two ways: first, the physicist describes a physical system in the language of quantum theory by writing down the appropriate Hamiltonian operator -- the operator whose eigenvalue equation yields the energy eigenfunctions; second, the eigenvalues of an operator are interpreted as possible values of the system's energy. The first of these is of particular interest here. Part of a contemporary physicist's training consists in learning how to write down the Hamiltonian for different physical systems, i.e., how to make the language entry transition from a physical system to the theory, and the individual does not fully understand the concepts of the quantum theory until she has developed some ability to do this. As the student gains familiarity with this process, she begins to recognize a number of standard circumstances which can be handled in standard ways, and at least in these circumstances the process of jumping into the language of the theory becomes automatic. That is, the student develops a set of conditioned responses which take her directly from the physical situation into the language. The exact point at which an individual will enter the language of the theory will be largely a function of her experience, so that the trained physicist who, as a student, set up and solved, say, the eigenvalue equation for the hydrogen atom, can now write down the eigenvectors without having to go through any inferential process. This is wholly consistent with Sellars' view that language entry transitions are conditioned responses, as well as with his rejection of the given -- for Sellars' point in rejecting the given is to deny that there is any privileged port of entry for transitions between language and the world.
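      What "writing down the Hamiltonian" amounts to can be suggested by a small sketch (the one dimensional harmonic potential and the finite-difference discretization are illustrative choices, much simpler than the hydrogen atom mentioned above):

```python
import numpy as np

# A crude finite-difference Hamiltonian for a particle in a one dimensional
# harmonic potential, in units where hbar = m = omega = 1.
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic term: -(1/2) d^2/dx^2, written as a second-difference matrix.
kinetic = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), -1)
                            - 2.0 * np.eye(n)
                            + np.diag(np.ones(n - 1), 1))

# Potential term: V(x) = x^2 / 2 along the diagonal.
potential = np.diag(0.5 * x**2)

# "Writing down the Hamiltonian" and solving its eigenvalue equation.
H = kinetic + potential
energies = np.linalg.eigvalsh(H)
print(np.round(energies[:4], 3))   # approximately 0.5, 1.5, 2.5, 3.5
```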

      On the other hand, a related example will further nail down the point that mastery of language entry transitions cannot be the entire story involved in mastering a language. Bubble chamber experiments typically produce thousands of photographs, of which only a few will contain evidence of physically interesting events. These photographs are often studied by people called "scanners," who are not physicists, but who have been trained to recognize the patterns that interest physicists. A scanner might recognize that a photograph shows an interaction between a neutrino and a proton, and thus be capable of a language entry transition into the language of particle physics, even though she understands nothing about neutrino physics. This individual does not share the physicist's concept of a neutrino, and what is missing is a grasp of the concept's syntax.

      The importance of analogies in explaining quantum theory, along with the recognition that the meanings of the terms of the theory ultimately depend on their role in that theory, has not been missed by authors of textbooks on quantum mechanics. One author, after listing the fundamental concepts of quantum mechanics, notes that "Most of these concepts have been encountered by the reader before, but in some other context." He then gives several examples, and concludes:

     

To a large extent, then, these concepts are not truly unfamiliar; what is unfamiliar is the manner in which quantum mechanics assembles these concepts into a theory of the static and dynamic behavior of matter and radiation. (White 1966, p. 29.)

      Another author, in the course of his discussion of the mathematical framework of the theory, writes:

     

We are adopting the point of view that by means of analogies and arguments that make our choices plausible, we can be led to fruitful ideas. . . . We believe that this procedure is more suitable for developing quantum theory for a beginner than is the method in which one starts with a set of abstract postulates from which one makes a complete set of mathematical deductions that are compared with experiment. (Bohm 1951, p. 174.)

      This passage should be compared with Sellars' remarks, quoted in the course of our discussion of analogy, as to the difference between what we do in practice and what is ideally the case.

      There are many other aspects of quantum mechanics which could be analyzed along these lines. For example, we tend to think of electron spin as if it were a turning of the electron on its axis, and there are good reasons for doing so, but the fact that this is, after all, an analogy can be seen by noting that an electron must "spin" through 720 degrees to return to its original state. Similarly, although it is convenient to think of the energy levels of an electron in an atom in terms of physical orbits spaced around the nucleus, the fact that this is not literally correct can be seen by noting that the nucleus itself can be thought of as exhibiting a similar set of "orbits." In all of these cases the analogies play a vital role in teaching the concepts involved, and in thinking about the physical situations in question, but the full significance of these concepts is determined by their role in the conceptual framework of quantum theory.
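      The 720 degree point admits of a compact formal illustration (a standard spin-1/2 rotation about a single axis, included here only as an aside):

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def rotation(theta):
    # Rotation of a spin-1/2 state about the z axis: exp(-i * theta * sigma_z / 2).
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

spinor = np.array([1.0, 0.0], dtype=complex)
print(np.round(rotation(2 * np.pi) @ spinor, 6))   # [-1, 0]: a 360 degree turn flips the sign
print(np.round(rotation(4 * np.pi) @ spinor, 6))   # [ 1, 0]: only 720 degrees restores the state
```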

     

6. Conceptual Change

      Many philosophers find the suggestion that concepts undergo change to be virtually unintelligible, maintaining instead that it only makes sense to talk of concepts being replaced by different concepts. Unfortunately, this view makes it easy to lose sight of the rather important point that a "new" concept can be more or less similar to the older concept it "replaces." It is in those cases in which, rather than simply rejecting a concept, we move to a similar concept, that talk of conceptual change is most clearly appropriate. In order to defend this claim, we must explore the notion that concepts can be more or less similar, and Sellars attempts to do this in an essay on conceptual change (CC). I want to examine the argument of that paper, and develop it somewhat further than Sellars does.

      Sellars begins by comparing the relation between (a) an isosceles Euclidean triangle and (b) a scalene Euclidean triangle, with the relation between (c) a Euclidean triangle and (d) a Riemannian triangle. Intuitively, the items involved in the first comparison seem more similar than the items involved in the second comparison, and this can be captured a bit more precisely by noting that (a) and (b) can be viewed as species of the genus Euclidean triangle while (c) and (d) are species only of the wider genus triangle. Sellars' point here is that one important way of comparing two items is in terms of some reference class in which both of these items fit. The narrower the reference class, the greater the similarity between the items in question, and the idea of "relative class size" is adequately clear in cases in which one of the reference classes is a proper subset of the other. The class of Euclidean triangles is a proper subset of the class of triangles, and members of this subset will be more similar than items which can only be compared in terms of the wider class. To be sure, if we are clever enough we may be able to find a different way of making the comparison in which the relations of relative similarity will be reversed, but this is beside the point. 'Similarity', like its close cousin 'analogy', is a pragmatic notion, and the comparisons we make will depend on our interests in a particular context.

      Following up on this idea, Sellars suggests that, just as the concepts of an isosceles and a scalene Euclidean triangle are instances of the concept of a Euclidean triangle, so Euclidean and Riemannian triangles are instances of the concept of a triangle, i.e., they are different "triangle concepts." Similarly, classical and relativistic concepts of length are not just two different concepts, but rather two instances of length concepts.{10} I want to develop this last example in rather more detail than Sellars does.

      Let us begin by comparing the classical ideas of length in a two dimensional and in a three dimensional space. These are sufficiently familiar that we may be inclined to say that they are identical concepts, but it is worth noting that at least a small difference appears when we treat them formally. If we set up the customary Cartesian coordinates, and the standard Euclidean formulas for the distance between two points in terms of their coordinate differences (writing x, y, etc. for these differences, and using the square of the distance for convenience), we get (6) for the two dimensional case and (7) for the three dimensional case:

      (6) d² = x² + y²

      (7) d² = x² + y² + z².

      Now one way of exploring the relation between these two expressions is by thinking of one of them as primitive and the other as derivative; I will consider each of these to be primitive in turn.

      First let us consider the three dimensional case as an analogical extension of the two dimensional case; further extensions then suggest themselves. To begin with, we can introduce higher dimensional spaces by adding further terms to our distance formula to represent the additional dimensions. For example, (8) gives the necessary expression for five dimensional space:

      (8) d² = v² + w² + x² + y² + z².

      In a similar manner, we can make the following analogical extension to one dimensional space:

      (9) d² = x².

      If this seems to be an unnecessarily cumbersome formula, it gives point to the suggestion that the extension is an analogical one: without the analogy we would not be likely to express a one dimensional distance in this way. Yet this assimilation of the one dimensional case to the higher dimensional cases prepares the way for treating time, which appears as an independent framework in classical physics, as just one more dimension.

      Now let us take (7) to be fundamental and consider (6) to be a special case of (7) for which the z term always vanishes. In doing this, we implicitly think of the terms of (7) as having coefficients which can take on the value of either zero or one:

      (10) d² = Ax² + By² + Cz².

      If we remove this limitation on the coefficients, permitting them to range over the real numbers, we are on our way to generalized Riemannian geometry, and need only bring in the cross products to complete the journey. For example, the general expression for three dimensional space (with constant coefficients) becomes:

      (11) d² = Ax² + By² + Cz² + Dxy + Exz + Fyz.

      There are two points that I want to draw out of this discussion. The first is that Sellars' remarks about "length concepts" can now be made precise: the necessary and sufficient condition for a length concept is that it be expressible in terms of the generalized Riemannian formula, where specific length concepts are determined by the particular values of the coefficients. For example, expression (7) provides the concept of spatial length in classical mechanics; expression (12)

      (12) d² = x² + y² + z² - (ct)²

      provides the concept of space-time length in special relativity; a full four dimensional Riemannian geometry with time as one dimension provides the concept of space-time length in general relativity. To change from one of these expressions to another is to change one's length concept, but this is quite different from the case in which we simply drop a concept and adopt a different one.
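      One way to see expressions (7) and (12) as instances of a single schema is to treat the squared length as a quadratic form in the coordinates, with the coefficient matrix picking out the particular length concept (the sketch below sets c = 1 for convenience, and the coordinate values are arbitrary):

```python
import numpy as np

def squared_length(coords, G):
    # Generalized length concept: d^2 as a quadratic form in the coordinates.
    coords = np.asarray(coords, dtype=float)
    return coords @ G @ coords

# Classical three dimensional spatial length, expression (7): the coefficient
# matrix is the identity.
G_classical = np.eye(3)
print(squared_length([1.0, 2.0, 2.0], G_classical))        # 9.0, i.e. d = 3

# Special-relativistic space-time length, expression (12), with c = 1: the time
# coordinate enters with coefficient -1.
G_minkowski = np.diag([1.0, 1.0, 1.0, -1.0])
print(squared_length([1.0, 2.0, 2.0, 3.0], G_minkowski))   # 0.0, a light-like separation
```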

      In addition to providing some clarification of the notion of conceptual change, the above discussion also illustrates a general approach to the comparison of conceptual systems that should provide a powerful tool for the analysis of actual cases of scientific change, i.e., by developing detailed comparisons of the ways in which the prior and the subsequent concepts are similar and different. Such comparisons will not always be quite as clean as in the present case, but I submit that it is through detailed comparisons of this sort -- both formally and in terms of the historical context in which the developments took place -- that we will come to understand how scientific concepts develop and change.

      In Sellarsian terms there are two ways in which such comparisons can be approached. One of these is by mapping out analogies, as has been illustrated above; the second takes us back to Sellars' thesis that meaning is ultimately determined by role in a conceptual system. This suggests that we pursue the systematic exploration of the roles that the concepts being studied play in their respective frameworks. For example, the classical and special relativistic length concepts are both used to describe the gaps between events, and any measurements of space and time made in accordance with relativistic prescriptions are also legitimate classically, although the converse does not hold. In addition, the two length concepts play similar formal roles in many computations in these two frameworks.

      The exploration of analogies and the comparison of linguistic roles are not mutually exclusive approaches but complementary ones, and together they should provide a great deal of information about the detailed relations between competing and successive conceptual frameworks.

     

7. Conclusion

      Although I have considered a number of aspects of Sellars' views on conceptual systems, I want to close by emphasizing one portion of this discussion. I began the paper by suggesting that historically oriented philosophy of science lacks an adequate way of analyzing conceptual change because philosophers working in this vein have not had a clear notion of what a conceptual system is. I have attempted to show that Sellars offers an analysis of conceptual systems that provides a framework and a set of tools for the detailed exploration of the ways that scientific ideas develop and change. The question of whether the Sellarsian framework is fully adequate for this purpose can only be decided by actually applying it to the analysis of cases from the history of science, and we should expect the fruits of such studies to appear in two guises: a clearer understanding of scientific change, and a sharpening and further clarification of the nature of conceptual systems.


References

      Bohm, D. (1951), Quantum Theory. Englewood Cliffs: Prentice-Hall.

      Burian, R. (1979), "Sellarsian Realism and Conceptual Change in Science," in P. Bieri, R. Horstmann, and L. Kruger (eds.), Transcendental Arguments and Science. Dordrecht: D. Reidel, pp. 197-225.

      Churchland, P. (1979), Scientific Realism and the Plasticity of Mind. Cambridge: Cambridge University Press.

      Gutting, G. (1977), "Philosophy of Science," in C. F. Delaney et al., The Synoptic Vision: Essays on the Philosophy of Wilfrid Sellars. Notre Dame: University of Notre Dame Press, pp. 73-104.

      Hesse, M. (1966), Models and Analogies in Science. Notre Dame: University of Notre Dame Press.

      Hesse, M. (1970), "An Inductive Logic of Theories," in M. Radner and S. Winokur (eds.), Minnesota Studies in the Philosophy of Science IV. Minneapolis: University of Minnesota Press, pp. 164-180.

      Lewis, C. I. (1946), An Analysis of Knowledge and Valuation. La Salle: Open Court.

      Sellars, W. (1948), "Concepts as Involving Laws and Inconceivable Without Them," Philosophy of Science 15, pp. 287-315.

      Sellars, W. (1953), "Inference and Meaning," Mind 62, pp. 313-338.

     

      Sellars, W. (1963), Science, Perception and Reality. New York: Humanities Press.

      Sellars, W. (1965), "Scientific Realism or Irenic Instrumentalism," in R. Cohen and M. Wartofsky (eds.), Boston Studies in the Philosophy of Science 2. Dordrecht: D. Reidel, pp. 171-204.

      Sellars, W. (1968), Science and Metaphysics. New York: Humanities Press.

      Sellars, W. (1974a), "Conceptual Change," in Essays in Philosophy and its History. Dordrecht: D. Reidel, pp. 172-188.

      Sellars, W. (1974b), "Reply to Marras" in Essays in Philosophy and its History. Dordrecht: D. Reidel, pp. 172-188.

      Sellars, W. (1975), "The Structure of Knowledge," in H. Castañeda (ed.), Action, Knowledge and Reality: Essays in Honor of Wilfrid Sellars. Indianapolis: Bobbs-Merrill, pp. 295-347.



Notes

      {*} I want to thank Peter Barker, Gary Gutting, Paul Teller, Michael Tye, and especially Richard Burian, for comments on earlier versions of this paper.

     


Editor's note: This article first appeared in Synthese 68 (1986): 275-307; and is presented here with the generous help and permission of Professor Harold Brown. (AC)
{1} The following abbreviations will be used in references to Sellars' texts (see the References for full citations):

      CC "Conceptual Change"
CIL "Concepts as Involving Laws and Inconceivable Without Them"
IM "Inference and Meaning"
RM "Reply to Marras"
SK "The Structure of Knowledge"
SM Science and Metaphysics
SRI "Scientific Realism or Irenic Instrumentalism"
SPR Science, Perception and Reality


      {2} I will consider Sellars' positive account of language entry transitions in section 2.

      {3} This description is offered in my language, a language that might be significantly different from both L1 and L2.

      {4} For Lewis there is no flow of meaning from sensation to language. The sense meaning of a term is wholly internal to the language, and it is the language that provides sense to the experiential given. If Lewis were a contemporary, he would be described as holding a radical version of the theory-ladenness of observation.

      {5} The sense in which these can all legitimately be called "time concepts" will be discussed in section 6.

      {6} Further discussion of this point will take us too far afield, but I will note in passing that Sellars distinguishes between analytic propositions and propositions that are true by virtue of the meanings of their terms; the latter are necessary but not analytic. (See especially "Is there a Synthetic A Priori?" SPR, pp. 298-320) The basis for this distinction lies in Sellars' views, noted above, on the role of material connections in the determination of meaning. This necessity is always relative to a particular framework, and these necessary truths can vanish when a framework is replaced.

      {7} See Churchland (1979) for a detailed defense of this thesis. It might seem that this view is inconsistent with Sellars' position that public object language is logically prior to private object language, a view which provides the motivation for his attempts to show how the framework of sensations can be introduced on the basis of the framework of material objects, and to show how the framework of thought can be conceived as analogous to the framework of overt speech. But these analyses of logical priority take place wholly within what Sellars calls "The Manifest Image," i.e., Sellars is arguing only that these logical priorities hold within our present common sense conceptual framework, not that they hold in all possible frameworks.

      {8} Historically this analogy was developed in the mathematical theory of vector spaces before its use in quantum theory, but this does not alter its analogical status.

      {9} Formally the operations in classical physics can be viewed as involving the complex conjugate of one of the vectors, since a real expression is identical with its complex conjugate, but both historically and pedagogically there is no point to this observation until after the new structure has been constructed.

      {10} See Burian (1979) pp. 203-204 for further discussion.

