Abductive Reasoning: Philosophical and Educational Perspectives in Medicine

 

Lorenzo Magnani

Department of Philosophy

University of Pavia (Italy)

 

 

The aim of this paper is to emphasize the significance of abduction in order to illustrate the problem solving process and to propose a unified epistemological model of medical reasoning. The paper introduces an epistemological model (Select and Test Model) of medical reasoning (diagnosis, therapy, monitoring) which can be described in terms of abduction (selective), deduction and induction (section 2). This model first describes the different roles played by these basic inference types in developing the various kinds of medical reasoning (section 3), then is connected with cognitive models of medical reasoning (section 5), and finally provides an abstract representation - an epistemological architecture (ST-MODEL) - of the control knowledge embedded in a medical Knowledge-Based System (KBS) (sections 3 and 6). Moreover, four meanings of the word abduction (creative, selective, automatic, to the best explanation) are discussed in order to clarify their significance in epistemology, psychological experimental research, and AI. In my opinion the controversial status of abduction is related to a confusion between the epistemological and cognitive levels, and to a lack of explanation as to why people sometimes deviate from normative epistemological principles.

Exploiting the epistemological model in order to design the general inferential behavior (control knowledge) of a medical KBS leads to a more complex model that includes an ontological level dealing with the entities and relationships belonging to the underlying domain knowledge. Diagnosis, therapy planning and monitoring involve different ontologies, but all three tasks can be executed by a single inference process, in terms of abduction, deduction, and induction, in order to solve problems.

Hence the paper illustrates the distinction between basic medical science (and reasoning), and clinical science (and reasoning) in order to illuminate some basic philosophical issues in medical education (section 7). Finally section 4 provides a short introduction to the basic logical theme of the nonmonotonic character of abduction.

1. The centrality of abduction

Let's consider the following interesting passage from a 1965 article by Simon, published in the British Journal for the Philosophy of Science (Simon, 1965) and dealing with the logic of normative theories: "The problem-solving process is not a process of `deducing' one set of imperatives (the performance programme) from another set (the goals). Instead, it is a process of selective trial and error, using heuristic rules derived from previous experience, that is sometimes successful in discovering means that are more or less efficacious in attaining some end. It is legitimate to regard the imperatives embodying the means as `derived' in some sense from the imperatives embodying the ends; but the process of derivation is not a deductive process, it is one of discovery. If we want a name for it, we can appropriately use the name coined by Peirce and revived recently by Norwood Hanson [1958]: it is a retroductive process. The nature of this process - which has been sketched roughly here - is the main subject of the theory of problem solving in both its positive and normative versions." (Simon, 1977, p. 151). Simon states that discovering means that are more or less efficacious in attaining their objective is a retroductive process. He goes on to show that it is easy to obtain one set of imperatives from another set by processes of discovery or retroduction, and that the relation between the initial set and the derived set is not a relation of logical implication.

The word "retroduction" used by Simon is the Hansonian neopositivistic term replacing the classical Peircian word abduction: they have the same epistemological and philosophical meaning. I completely agree with Simon: abduction is the main subject of the theory of problem solving, and developments in the fields of cognitive science and artificial intelligence (AI) have strengthened this conviction.

As Fetzer has recently stressed, from a philosophical point of view the main modes of argumentation for reasoning from sentential premises to sentential conclusions are expressed by these three general attributes: deductive (demonstrative, non-ampliative, additive), inductive (non-demonstrative, ampliative, non-additive), fallacious (neither, irrelevant, ambiguous). Abduction, which expresses likelihood in reasoning, is a typical form of fallacious inference: "[it] is a matter of utilizing the principle of maximum likelihood in order to formalize a pattern of reasoning known as `inference to the best explanation'" (Figure 1) (Fetzer, 1990, p. 103). A hundred years ago, Peirce (1955) was also studying and debating these three main types of inference.

2. Problem-solving, trial and error, and the epistemological status of abduction

The following sections deal with the concept of abduction and propose a unified epistemological model of medical reasoning. First, it is necessary to show the connections between abduction, induction, and deduction and to stress the significance of abduction in illustrating the problem solving process. I have developed with others (Stefanelli et al., 1988; Lanzola et al., 1990; Ramoni et al., 1990) an epistemological model of medical reasoning (diagnosis, therapy, monitoring), called the Select and Test Model (ST-MODEL, see Stefanelli & Ramoni, this volume), which can be described in terms of the classical notions of abduction (selective), deduction and induction: it describes the different roles played by such basic inference types in developing various kinds of medical reasoning (Figure 2). It is thus connected with cognitive models of medical reasoning (see section 5) and provides an abstract representation - an epistemological architecture - of the control knowledge embedded in a medical Knowledge-Based System (KBS) (see sections 3 and 6).

The type of inference called abduction was studied by Aristotelian syllogistics, as a form of ἀπαγωγή, and later on by mediaeval reworkers of syllogism. In the last century abduction was once again studied closely, by Peirce (Peirce, 1931-1958). Peirce interpreted abduction essentially as a creative process of generating a new hypothesis. Abduction and induction, viewed together as processes of production and generalization of new hypotheses, are sometimes called reduction, that is ἀπαγωγή. As Lukasiewicz (1970, p. 7) makes clear, "Reasoning which starts from reasons and looks for consequences is called deduction; that which starts from consequences and looks for reasons is called reduction".

To illustrate from the field of medical knowledge, the discovery of a new disease and the definition of the manifestations it causes can be considered as the result of the creative abductive inference described above. Creative abduction thus deals with the whole field of the growth of scientific knowledge. However, it is irrelevant in medical diagnosis (and, as we will see in section 3, in therapy and in monitoring), where the task is instead to select, from an encyclopedia of pre-stored diagnostic entities, the diseases and pathophysiologic states which can be made to account for the patient's condition. Diagnostic reasoning thus also involves abductive steps, but its creativity is much weaker: it usually requires the selection of a diagnostic hypothesis from a set of pre-enumerated hypotheses provided by established medical knowledge. This type of abduction can therefore be called selective abduction (Magnani, 1988). Selective abduction implies uncertainty and corresponds to the heuristic classification problem-solving model proposed by Clancey (1985); it deals with a kind of rediscovery, rather than a genuine discovery. "Automatic abduction", a special form of abduction, occurs when hypotheses are automatically derived from a stored series of previous similar experiences. In this case there is no uncertainty. It covers a mental procedure that tapers into a non-inferential one, and falls into the category called "perception" (see below in this section and Anderson, 1987, pp. 38-44).

Induction in its widest sense is an ampliative process of the generalization of knowledge. Peirce distinguished three types of induction, and the first was further divided into three sub-types. A common feature of all kinds of induction is the ability to compare individual statements: using induction it is possible to synthesize individual statements into general laws (types I and II), but it is also possible to confirm or discount hypotheses (type III). Clearly I am referring here to the latter type of induction, which in my model is used as the process of reducing the uncertainty of established hypotheses by comparing their consequences with observed facts.

Deduction is an inference based on logical implication. Deduction may be distinguished from abduction and induction on the grounds that only in deduction is the truth of the conclusion guaranteed by the truth of the premises on which it is based.

All these distinctions need to be exemplified. To describe how the three inferences operate, it is useful to start with a very simple example dealing with diagnostic reasoning:

(1) If a patient is affected by a beta thalassemia,

his/her level of hemoglobin A2 is increased.

(2) John is affected by a beta thalassemia.

(3) John's level of hemoglobin A2 is increased.

By deduction we can infer (3) from (1) and (2); by induction we can go from a finite set of facts, like (2) and (3), to a universally quantified generalization, like the piece of hematologic knowledge represented by (1). Starting from knowing - selecting - (1) and observing (3), we can infer (2) by performing a selective abduction. Such an inference is not affected by uncertainty, since the manifestation (3) is pathognomonic for beta thalassemia. However, clinicians very often have to deal with manifestations which can be justified by different diagnostic hypotheses.

Thus, selective abduction is the making of a preliminary guess that introduces a set of plausible diagnostic hypotheses, followed by deduction to explore their consequences and by induction to test them with available patient data: (1) to increase the likelihood of a hypothesis by noting evidence explained by it rather than by competing hypotheses, or (2) to refute all but one (Figure 3).
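To make the cycle concrete, here is a minimal sketch in Python of the select-and-test loop just described, applied to the example above; the tiny knowledge base and the extra findings ("microcytosis", "low ferritin") and the "iron deficiency" hypothesis are invented placeholders, not actual medical knowledge beyond the beta thalassemia example in the text.

```python
# Minimal sketch of the abduction-deduction-induction cycle described above.
# The knowledge base and patient data are invented placeholders.

KNOWLEDGE = {
    # disease -> findings expected if the disease is present (used by deduction)
    "beta thalassemia": {"increased hemoglobin A2", "microcytosis"},
    "iron deficiency": {"microcytosis", "low ferritin"},
}

def selective_abduction(present):
    """Guess: select every pre-stored hypothesis explaining at least one finding."""
    return {d for d, expected in KNOWLEDGE.items() if expected & present}

def deduction(hypothesis):
    """Predict the findings expected if the hypothesis were true."""
    return KNOWLEDGE[hypothesis]

def induction(hypotheses, absent):
    """Corroborate hypotheses consistent with the data; refute those whose
    predictions include findings that the data rule out."""
    return {h for h in hypotheses if not (deduction(h) & absent)}

# One cycle on invented patient data:
present = {"increased hemoglobin A2", "microcytosis"}
absent = {"low ferritin"}
candidates = selective_abduction(present)   # both hypotheses are initially plausible
survivors = induction(candidates, absent)   # {'beta thalassemia'}
print(survivors)
```

If a later finding refuted every surviving hypothesis, the guess would simply be repeated on the enlarged data, which is the nonmonotonic restart discussed in section 4.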

If during this first cycle new information emerges, hypotheses not previously considered can be suggested and a new cycle takes place: in this case the nonmonotonic character of abductive reasoning is clear (see section 4).

There are two main epistemological meanings (Figure 4) of the word abduction: (1) abduction that only generates plausible hypotheses (selective or creative) - and this is the meaning of abduction accepted in my epistemological model - and (2) abduction considered as inference to the best explanation, which also evaluates hypotheses (on this subject also see section 5 below). In the latter sense the classical meaning of abduction as inference to the best explanation (for instance in medicine, to the best diagnosis) is described in my epistemological model by the complete abduction-deduction-induction cycle. All we can expect of my "selective" abduction is that it tends to produce hypotheses that have some chance of turning out to be the best explanation. Selective abduction will always produce hypotheses that give at least a partial explanation and therefore have a small amount of initial plausibility. In this respect abduction is more efficacious than the blind generation of hypotheses.

To achieve the best explanation (for instance the best diagnosis), it is necessary to have a set of criteria for evaluating the competing explanatory hypotheses reached by selective abduction. Evaluation has a multi-dimensional character. Consilience (Thagard, 1988) can measure how much a hypothesis explains, so it can be used to determine whether one hypothesis explains more of the evidence (for instance, patient data) than another: thus, it deals with a form of corroboration. In this sense a hypothesis is considered more consilient than another if it explains more "important" (as opposed to "trivial") data than the others do. In inferring the best explanation, the aim is not the sheer amount of data explained, but its relative significance. The assessment of relative importance presupposes that an inquirer has rich background knowledge about the kinds of criteria that concern the data. In the case of medical KBSs the deductive-inductive cycle requires a detailed conceptual organization of this type of background knowledge (see section 6 below). Simplicity too can be highly relevant when discriminating between competing explanatory hypotheses; it deals with the level of conceptual complexity of hypotheses when their consiliences are equal. Explanatory criteria are needed because the rejection of a hypothesis requires demonstrating that a competing hypothesis provides a better explanation. Clearly, in some cases conclusions are reached according to rational criteria such as consilience or simplicity. Nevertheless, in reasoning to the best explanation, motivational, ethical or pragmatic criteria cannot be discounted. Indeed the context suggests that they are unavoidable: this is especially true in medical reasoning (for instance, in therapy planning), but scientists who must discriminate between competing scientific hypotheses or theories are also sometimes subject to motivational biases in their inferences to the best explanation.
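As a rough illustration, the sketch below scores competing hypotheses by a weighted count of the data they explain (a crude stand-in for consilience, weighting important findings over trivial ones) and falls back on conceptual simplicity when the scores tie. The hypotheses, findings and weights are invented for illustration and are not taken from Thagard.

```python
# Illustrative multi-dimensional evaluation of competing explanatory hypotheses.
# Weights, findings and hypotheses are invented placeholders.

def consilience(explained, importance):
    """How much a hypothesis explains, weighting important data over trivial data."""
    return sum(importance[d] for d in explained)

def better(h1, h2, importance):
    """Prefer the more consilient hypothesis; when consiliences are equal,
    prefer the simpler (conceptually less complex) one."""
    c1 = consilience(h1["explains"], importance)
    c2 = consilience(h2["explains"], importance)
    if c1 != c2:
        return h1 if c1 > c2 else h2
    return h1 if h1["complexity"] <= h2["complexity"] else h2

importance = {"finding A": 3, "finding B": 3, "finding C": 1}   # important vs trivial
h1 = {"name": "hypothesis 1", "explains": {"finding A", "finding B"}, "complexity": 2}
h2 = {"name": "hypothesis 2", "explains": {"finding A", "finding C"}, "complexity": 1}
print(better(h1, h2, importance)["name"])   # hypothesis 1: explains more important data
```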

Indeed, my epistemological model can also be considered as an illustration of scientific theory change: in that case selective abduction is replaced by creative abduction and there is a set of competing theories instead of diagnostic hypotheses. Furthermore, the language of background scientific knowledge is then to be regarded as open: in the case of competing theories, as studied by the epistemology of theory change, we cannot - contrary to Popper's point of view - reject a theory merely because it fails occasionally. If it is simpler and explains more significant data than its competitors, a theory can be acceptable as the best explanation.

Nevertheless, if we consider the epistemological model as an illustration of medical diagnostic reasoning, the modus tollens is very efficacious because of the fixedness of language that expresses the background medical knowledge: a hypothesis that fails can nearly always be rejected immediately.

When Buchanan illustrates the old epistemological method of induction by elimination - first advanced by Bacon and Hooke and developed later by J. Stuart Mill - and its computational meaning as a model of "heuristic search", he is referring implicitly to my epistemological framework in terms of abduction, deduction and induction, as illustrative of medical diagnostic reasoning: "The method of systematic exploration is [...] very like the old method of induction by elimination. Solutions to problems can be found and proved correct, in this view, by enumerating possible solutions and refuting all but one. Obviously the method is used frequently in contemporary science and medicine, and is as powerful as the generator of possibilities. According to Laudan, however, the method of proof by eliminative induction, advanced by Bacon and Hooke, was dropped after Condillac, Newton, and LeSage argued successfully that it is impossible to enumerate exhaustively all the hypotheses that could conceivably explain a set of events. The force of the refutation lies in the open-endedness of the language of science. Within a fixed language the method reduces to modus tollens [...]. The computational method known as heuristic search is in some sense a revival of those old ideas of induction by elimination, but with machine methods of generation and search substituted for exhaustive enumeration. Instead of enumerating all sentences in the language of science and trying each one in turn, a computer program can use heuristics enabling it to discard large classes of hypotheses and search only a small number of remaining possibilities." (Buchanan, 1985, pp. 97-98).

Finally we should remember, as Peirce noted, that abduction plays a role even in relatively simple visual phenomena. Many visual stimuli are ambiguous, yet people are adept at imposing order on them: "We readily form such hypotheses as that an obscurely seen face belongs to a friend of ours, because we can thereby explain what has been observed" (Thagard, 1988, p. 53). Philosophically, perception is viewed by Peirce as a fast and uncontrolled knowledge-production procedure (Anderson, 1987). Perception, in fact, is a vehicle for the instantaneous retrieval of knowledge that was previously structured in our mind through inferential processes. By perception, knowledge constructions are so automatically and instantly reorganized that they become habitual and diffuse and do not need any further testing. As stated above, in my epistemological model perception is considered as a form of automatic abduction.

3. Is medical reasoning abductive?

In accordance with the epistemological model previously illustrated, medical reasoning may be broken down into two different phases: first, patient data is abstracted and used to select hypotheses, that is, hypothetical solutions of the patient's problem (selective abduction phase); second, these hypotheses provide the starting conditions for forecasts of expected consequences, which are compared with the patient's data in order to evaluate (corroborate or eliminate) the hypotheses from which they derive (deduction-induction cycle).

In the case of medical KBSs, the epistemological architecture which exploits the abduction-deduction-induction cycle (ST-MODEL) starts with an abstraction of the data which characterizes the problem to be solved (diagnosis, therapy, monitoring). An abstraction can be considered as a process of structuring incoming data into a smaller set of entities, according to the kind of medical knowledge available and the features of the problem at issue. The efficacy of such operations depends on accumulated expertise, which determines the organization of personal knowledge so that problems can be easily recognized and stated in a way that guarantees their solution by efficient use of available knowledge. Clancey's well-known distinction between definitional abstraction, qualitative abstraction and generalization, and certain other aspects of abstraction, are illustrated by Stefanelli and Ramoni (this volume) and Ramoni et al. (1990).

Patel, Evans and Kaufman (1989), characterizing physicians' performances in experimental research (in this case on diagnostic expertise), suggest a more pragmatic and active structure organized in a multi-level problem-oriented framework (previously developed in a broader model by Evans & Gadd, 1989). They identify appropriate abstractions as "units of knowledge" to code influentially doctor-patient interaction. So clinical knowledge is hierarchically organized from observations → findings → facets (diagnostic components) → diagnosis. Observations are units of information considered as potentially relevant according to the features of the problem-solving context. Findings are composed of sets of observations that are relevant in a diagnostic context. Facets are clusters of findings that are suggestive of diagnostic components. Specific combinations of facets lead to a diagnosis. The aim is to capture "how a clinician identifies problem-specific cues, concludes findings and derives meaning from higher-order relations in the data" (Patel & Groen, in press).

Selective abduction simply involves guessing a set of hypotheses, starting from the problem features identified by abstraction. Once hypotheses have been selected, they need to be ranked (Figure 2) so as to plan the evaluation phase, by first testing a certain preferred hypothesis. As already suggested by Peirce, there are multiple criteria for ranking; in the medical domain they may be parsimony, danger, cost, curability, and so on, chosen according to the specific knowledge context. The worth of a hypothesis to be tested first is, of course, connected to epistemic and pragmatic reasons that trace back to belief in its truth and to its general relevance for medical action.
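As a rough picture of this ranking step, the sketch below orders abduced hypotheses by a weighted combination of context-dependent criteria; the particular criteria, weights and scores are invented for illustration (the text only names parsimony, danger, cost and curability as possible criteria).

```python
# Illustrative ranking of abduced hypotheses to decide which to test first.
# Criteria, weights and scores are invented placeholders.

hypotheses = [
    {"name": "H1", "danger": 0.9, "curability": 0.8, "cost": 0.2},
    {"name": "H2", "danger": 0.3, "curability": 0.9, "cost": 0.1},
    {"name": "H3", "danger": 0.7, "curability": 0.2, "cost": 0.6},
]

# weights chosen for the clinical context at hand; cost counts against a hypothesis
weights = {"danger": 0.5, "curability": 0.3, "cost": -0.2}

def rank(hyps, weights):
    """Order hypotheses so that the one most worth testing first comes first."""
    score = lambda h: sum(w * h[c] for c, w in weights.items())
    return sorted(hyps, key=score, reverse=True)

for h in rank(hypotheses, weights):
    print(h["name"])   # H1, H2, H3 in this invented setting
```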

The deduction-induction phase deals with the actual process of hypothesis evaluation. Deduction is connected with prediction. Once a hypothesis about a patient is established (for example, a diagnosis or a type of therapy), certain predictions derived at a time T1 (the presence of a certain symptom, the development of certain consequences, estimates of a particular evolution) can be revised at T2: the conclusions are defeasible, that is, they may be retracted when new information establishes that the condition "all other things being equal" no longer holds. Thus deduction in medical reasoning is a kind of nonmonotonic deduction, of the sort studied in the field of nonmonotonic logics and belief revision (Doyle, 1989; Harman, 1986; Poole, 1989). Induction (not used here to mean an ampliative process of the generalization of knowledge) corroborates those hypotheses whose expected consequences turn out to be consistent with the patient's data and refutes those which fail this test. It is important to remember, as I observed in the previous section, that in an ideal situation it would be necessary to achieve the best explanation by evaluating the uneliminated competing hypotheses so as to test their explanatory power. Induction is the final testing of an abduced hypothesis: by completing the whole cycle of the epistemological model it produces the best explanation.

If new information suggests hypotheses not previously considered, a new cycle starts. The cyclic nature of the epistemological model stresses its nonmonotonic character, and this is even more the case for medical reasoning (see the following section).

Diagnosis, therapy planning and patient monitoring can be executed by an instance of the epistemological model described above, as shown in Figures 3 (diagnosis) and 5 (therapy planning). Of course the ontologies involved are different: there are diagnostic hypotheses, manifestations etc. in diagnostic reasoning; therapies, therapeutic problems and so on in therapy planning; alarms, critical conditions, emergency actions and so on in monitoring.

Diagnosis is the first task to be executed in medical reasoning. It starts from patient data that is abstracted into clinical features to be explained. Then, selective abduction generates plausible diagnostic hypotheses. Starting from the highest ranked hypothesis, deduction shows the findings that are expected if this hypothesis is true. Thus new laboratory or clinical examinations can be requested to verify unobserved expectations. Finally, induction establishes whether hypotheses can be confirmed or refuted, or whether they are worth testing further, depending on how closely the observed findings compare with expectations. Induction is the final testing of an abduced diagnostic hypothesis: by completing the whole cycle of the epistemological model it produces the best diagnostic explanation.

Therapy planning starts with the observational data and, when available, a diagnosis; selective abduction then generates plausible (i.e. potentially useful) treatments. The evaluation (deductive-inductive) phase focuses on each treatment in order to check its general relevance for the patient involved and to make predictions about the possible consequences of the treatment on his/her clinical course. On the basis of these predictions, some therapies may be excluded from the initially abduced set (Quaglini et al., 1990). Both in diagnosis and in therapy planning, if new information suggests hypotheses (diagnostic or therapeutic) not yet considered, a new cycle takes place.

Monitoring, that is, observing and controlling the course of a patient's condition, is the best strategy for testing whether the planned action proves to be properly effective. To this end, a medical KBS should be able to predict the course of a patient's condition under the interaction of the diagnosed disorders and the therapeutic plan adopted. How monitoring constitutes another instance of the epistemological model (ST-MODEL) is illustrated in Ramoni et al. (1990) and Stefanelli and Ramoni (this volume).

4. The nonmonotonic character of abduction

The nonmonotonic character of abductive reasoning is clearly described by logical models developed alongside the computational research. In the epistemological cycle abduction-deduction-induction the hypotheses can be withdrawn whenever new information becomes available and a new cycle takes place. What, then, is the logical meaning of inference to the best hypothesis, when achieved by restarting the cycle?

Abduction seems to be computationally involved in several intelligent tasks, such as diagnostic reasoning, planning, natural language understanding, learning, and image recognition (Symposium on Automated Abduction, 1990, Working Notes, Stanford University). This suspicion alone has prompted many recent efforts to understand the logical structure of abduction, such as Poole's (Poole, 1989) Framework for Default Reasoning, Jackson's (Jackson, 1989) Propositional Abductive Logic, Levesque's (Levesque, 1989) belief-based Knowledge Level Account of Abduction, Konolige's (Konolige, 1990) General Theory of Abduction, and, to a certain extent, Reiter's Theory of Diagnosis from First Principles (Reiter, 1987).

Although these approaches arise from different perspectives, they agree on certain characteristics of abduction:

(1) abduction is a form of nonmonotonic reasoning. It draws defeasible conclusions from incomplete information;

(2) abduction is a two-step inference (i.e., abduction is considered here as inference to the best hypothesis): it generates all possible conjectures and then chooses the best one using certain preference criteria, such as, for example, the minimality of conjectures.

There is no agreement about which preference criteria to adopt. Poole and Rowen (Poole and Rowen, 1990) list several criteria that have been proposed in the literature, and it can be shown that some of these preference criteria conflict, i.e. in the same situation they favour different conjectures. Moreover, none of the proposed criteria works in all situations. For instance, the (syntactic) criterion of minimality (also called "Occam's razor") is useless when the conjecture at hand is (syntactically) as simple as the conflicting conjectures.
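A toy illustration of such a conflict: given two invented conjectures for the same observations, a minimality criterion and a coverage criterion pick different winners. Nothing in the sketch is drawn from Poole and Rowen; it only shows how two reasonable criteria can disagree.

```python
# Two preference criteria disagreeing over the same pair of conjectures.
# Observations, conjectures and criteria are invented placeholders.

observations = {"f1", "f2"}
# a conjecture = a set of assumed causes plus the observations it explains
c1 = {"assumptions": {"d1"}, "explains": {"f1"}}                # minimal but partial
c2 = {"assumptions": {"d2", "d3"}, "explains": {"f1", "f2"}}    # larger but complete

def prefer_minimal(a, b):
    """Occam's razor: prefer the conjecture with fewer assumptions."""
    return a if len(a["assumptions"]) <= len(b["assumptions"]) else b

def prefer_coverage(a, b):
    """Prefer the conjecture explaining more of the observations."""
    cov = lambda x: len(x["explains"] & observations)
    return a if cov(a) >= cov(b) else b

print(prefer_minimal(c1, c2) is c1)    # True: minimality favours c1
print(prefer_coverage(c1, c2) is c2)   # True: coverage favours c2
```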

Since Peirce's landmark definition (and also in my epistemological model), inference to the best explanation has been convincingly modelled as a process of generating and testing, more than as a process of generating and choosing. In this process, as described above, abduction underlies the testing-evaluating phase.

We are looking for a logical system containing something similar to the inference rule:

    φ → γ
    γ
    ________    (1)
    φ

where φ and γ are propositional variables. Unfortunately, this inference rule corresponds to the well-known fallacy called affirming the consequent. We need to abandon the inference rule in (1) and look for another solution. This is the aim of research in the logical field (see Ramoni & Magnani, 1989).

The nonmonotonic character of abduction arises from the logical unsoundness of this inference rule. All recent logical accounts of abduction have pointed out that it is a form of nonmonotonic reasoning, i.e. reasoning that draws defeasible conclusions from incomplete information. It is important to allow the guessing of explanations for a situation, and to discount and abandon old hypotheses in favour of the tentative adoption of new ones, when new information about the situation makes the old hypotheses no longer the best. Doyle (1988) pointed out that, because of its unsoundness, this guessing does not exhibit the truth-preserving behavior of ideal rationality which characterizes the incremental deduction of classical logic, but rather the nonmonotonic behavior of the limited rationality of commonsense reasoning.
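The following sketch shows this behavior in miniature: a conjecture tentatively adopted at one stage is withdrawn as soon as new findings make a rival the better explanation. The conjectures and findings are invented placeholders.

```python
# Sketch of nonmonotonic adoption and retraction of an abduced conjecture.
# H1, H2 and the findings f1, f2 are invented placeholders.

EXPLAINS = {
    "H1": {"f1"},
    "H2": {"f1", "f2"},
}

def best_explanation(findings):
    """Tentatively adopt the conjecture explaining most of the current findings,
    breaking ties in favour of the smaller (more minimal) conjecture."""
    return max(EXPLAINS, key=lambda h: (len(EXPLAINS[h] & findings), -len(EXPLAINS[h])))

print(best_explanation({"f1"}))         # H1: minimal and sufficient at this stage
print(best_explanation({"f1", "f2"}))   # H2: new information defeats the old conclusion
```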

Modern logic allows us to account for this dynamic behavior of abduction through the concept of belief revision. Belief revision is a dynamic notion dealing with the current stage of reasoning: at each stage, a belief is held insofar as it is justified by the reasoning carried out up to that stage, even if subsequent stages dictate its retraction. A logic of belief for abduction has been proposed by Levesque (Levesque, 1989), and the role of belief revision functions in abduction has already been studied by Jackson (Jackson, 1990).

5. The relations between the epistemological framework of abductive reasoning in medicine and certain cognitive models

AI research has developed many computational tools for describing the representation and processing of information. Cognitive psychologists have found these tools valuable for developing theories about human thinking and for their experimental research. Notwithstanding this, the study of methods of inquiry falls primarily within the province of philosophers of science rather than of scientists themselves, principally because these issues are normative rather than descriptive. To escape relativism, epistemology is usually considered as the normative theory of objective knowledge, and thus does not need to take into account what psychology determines as the nature of individuals' belief systems. Logic and epistemology are concerned with how people ought to reason, whereas psychology is supposed to describe how people do think.

Empirical studies of cognitive psychology are descriptive: they are dedicated to the investigation of mental processes and are concerned with normative issues only in order to characterize people's behavior relative to assumed norms. AI, when examined as cognitive modeling, is normally descriptive: only when it is concerned with improving on people's performances does it become involved with what is normative.

Epistemology, AI and cognitive psychology can be used together to develop models that explain how humans think. Thagard's ideas are pertinent: "A psychological model should be more than internally coherent [as is the case with computer simulation]: we want it to account for experimental data about how people think. But sometimes, if a model is complex, it is not easy to see what its consequences are. Cognitive models, like many models in the social sciences, often postulate many interacting processes. The computer program enables a researcher to see whether the model has all and only the consequences that it was expected to have. Comparison of these consequences against experimental observations provides the means of validating the model in much greater detail than pencil-and-paper calculations might allow" (Thagard, 1988, p. 6).

I would like to illustrate the relationships and compatibility between my epistemological framework of medical reasoning and certain cognitive models of physicians' reasoning.

As we have seen in section 2, if abduction is considered as inference to the best explanation, it is epistemologically classified not only as a mechanism for selection (or for discovery), but for justification too. In the latter sense the classical meaning of abduction as inference to the best explanation (for instance in medicine, to the best diagnosis or the best therapy) is described in my epistemological model by the complete abduction-deduction-induction cycle (Josephson et al., 1986). Nevertheless, as we have seen, abduction can be considered simply as a mechanism for the production of plausible hypotheses, and this is the case in my epistemological model. As Thagard has pointed out (Thagard, 1988, p. 53), the question was controversial in Peirce's writings too. Before the 1890s, Peirce discussed the hypothesis as follows: "Hypothesis is where we find some very curious circumstance which would be explained by the supposition that it was a case of a certain general rule, and thereupon adopt that supposition" (Peirce, 1931-1958, vol. 2, para. 624). When Peirce replaced hypothesis with abduction he said that it "furnishes the reasoner with the problematic theory which induction verifies" (ibid., para. 776). Thagard ascribes to the editors of Peirce's work the responsibility for having clouded this change in his thinking by including discussions of hypothesis under the heading of `Abduction', thus "obscuring his shift from the belief that inference to an explanatory hypothesis can be a kind of justification to the weaker view that it is only a form of discovery" (Thagard, ibid.). The need for a methodological criterion of justification arises because an abduced hypothesis that explains a certain puzzling fact cannot be accepted outright, given the possibility of other explanations. Having a hypothesis that explains a certain number of facts is far from a guarantee that it is true.

I think this controversial status of abduction is related to a confusion between the epistemological and cognitive levels, and to a lack of explanation as to why people sometimes deviate from normative epistemological principles. An analysis of the differences between epistemological and cognitive levels would help to clarify the issue.

From an epistemological point of view, abduction as inference to the best explanation involves the deduction-induction cycle of testing by means of multi-dimensional criteria of evaluation. Abduction as inference that provides a possible explanation of some puzzling phenomenon is only a mechanism of discovery (or, in medical diagnosis, of selection). In this latter sense abduction is the `wild hunch' that may either be a brilliant breakthrough or a dead end: nevertheless it implies uncertainty, which can be removed or reduced only by testing the implications of the selected diagnostic hypotheses against the available data.

From an empirical point of view, for instance in the case of experimental research on the behavior of physicians, there is an external criterion of truth: the correct diagnostic conclusion (the best diagnosis) for a particular condition is already known, and it is compared with observations of a physician's performance. Many diagnostic performances are found: physicians make correct (best) or wrong diagnoses both by an abduction/deduction-induction cycle of testing (abduction considered as inference to the best explanation according to the complete cycle of my epistemological model) and by selective abduction alone (without the testing cycle).

The empirical regularities established by Patel and Groen (in press) from research on expert-novice comparisons illustrate, among other things, the role of forward reasoning and backward reasoning in medical diagnosis. Because of the revealed independence of recall phenomena from diagnostic accuracy (diagnostic accuracy is developmentally monotonic whereas recall is nonmonotonic; the development of expertise is not related to the development of increasingly better representations), the main results of this research lead to a rejection of (1) the theory of medical diagnosis as pattern recognition and (2) the theory of diagnostic expertise based on a set of production rules.

Here is a résumé of certain important results from this empirical research into the various kinds of diagnostic reasoning. The first kind of reasoning - forward - is a strong problem-solving method that requires a great deal of relevant knowledge, and is error-prone in the absence of adequate domain knowledge. The second - backward - is a weak method, used when domain knowledge is inadequate or when relevant prior knowledge is lacking. In my epistemological model, forward reasoning (knowledge-based heuristic search - Hunt, 1989) is consistent with selective abduction, while backward reasoning (goal-based heuristic search - ibid.) is consistent with the deduction-induction cycle.

The research relates to the finding that, in solving routine problems in their domains, expert physicians tend to work forward from the available information to hypotheses. On the contrary, intermediate and novice physicians work from a hypothesis regarding the unknown back to the given information. A strong relationship between diagnostic accuracy and the existence of forward reasoning has been established. (In the standard experimental procedure, subjects are shown a written description of a clinical case; each subject is asked to read the clinical text for a specific period of time, after which it is removed. The subjects are asked to write down as much of the text as they can remember, and then to describe the underlying pathophysiology of the case. Finally, they are asked to provide a diagnosis - Patel and Groen, in press; see also Groen and Patel, 1988.)

All expert physicians with completely accurate diagnoses revealed the use of pure forward reasoning, followed by evaluation in order to confirm and refine the diagnosis by explaining the patient's cues (Patel, Evans & Kaufman, 1989). When experts do not provide complete diagnoses, they use a mixture of forward and backward reasoning, that is, the generation of alternative possibilities (plausible hypotheses), followed by an evaluation phase in which the alternative diagnoses can be discriminated. The difference between accurate and inaccurate diagnoses is the presence of `loose ends'. This is also the case for intermediates who do not seem to be able to filter out irrelevant information: this causes the production of loose ends, that is the activation of irrelevant searches. On the contrary, the efficacious use of pure forward reasoning expresses the idea that "a distinguishing trait of experts [...] is a knowledge of what not to do" (Patel & Groen, in press).

In the case of doctor-patient interactive dialogues, analyzed using linguistic pragmatics methods, these authors argue that "it is expected that physicians initially adopt a data-driven strategy and later shift to a predictive reasoning strategy when they have a working hypothesis [...] the directionality of reasoning is in forward direction until some loose ends are generated, when the reasoning shifts to the backward direction to account for the loose data" (ibid.). In this case also, experts arrive at accurate diagnoses because their initial hypotheses are generally accurate, which results in the accurate prediction of subsequent findings. Forward reasoning remains associated with accurate diagnosis; during this reasoning process scientific biomedical information is not used, whereas during predictive reasoning it is used. On the contrary, residents collect a number of alternative possible diagnoses, and thereby a number of loose ends, which produce diagnostic inaccuracy.

In my opinion, the cognitive concept of forward reasoning is consistent with the selective abduction of my model, because both deal with an inference from data to hypotheses. Likewise, as previously mentioned, backward reasoning is consistent with the deduction-induction cycle, because both deal with an inference from hypotheses to data. Nevertheless, in order to avoid any misunderstanding, it is necessary to illustrate various differences:

(1) epistemologically, selective abduction always implies uncertainty, although it tends to produce hypotheses that have some chance of turning out to be the best explanation; at this stage it is not known which hypothesis is the best, and this type of reasoning does not possess the resources to answer the question; on the contrary, from an empirical cognitive point of view, forward reasoning characterizes an expert's diagnostic accuracy, that is, the diagnostic reasoning that is immediately successful and that establishes the best explanation. Selectivity, considered as the guessing of plausible hypotheses, is not relevant here; rather, forward reasoning seems to be consistent with the philosophical concept of automatic abduction described above;

(2) epistemologically, the deduction-induction cycle illustrates inference to the "best" explanation involving some multi-dimensional criteria of evaluation and of the elimination of hypotheses; on the contrary, the empirical cognitive results show that this kind of reasoning is typical of intermediates' diagnostic "inaccuracy" - although they recall better than experts and novices - because of the effect of the failure of forward reasoning, and of the consequent production of unnecessary searches (clearly judged "unnecessary" post hoc).

A similar problem was considered and analyzed by Simon in 1966 (Simon, 1966), in terms of the classical concepts of "problem solving" and "selective trial and error search". When Simon observed that an important generalization referring specifically to "the kinds of thinking activities called `problem-solving'" was that "Problem solving involves a highly selective `trial and error' search of solution possibilities" (Simon, 1977, p. 277), he described something analogous to my complete abduction-deduction-induction cycle. He continued: "Problem solving searches require trial and error in that they generally do not go directly to the solution without traversing and retracing some blind alleys - sometimes many, sometimes few. When a person solves a problem without any backtracking whatsoever, we are apt to deny that he needed to think at all. We say, `He knew the answer', or `he didn't have to think; he did it by rote'." (Ibid.)

From a psychological point of view, in the first case, when a person is required to think by "trial and error", we have the empirical-cognitive side of my epistemological and normative complete abduction-deduction-induction cycle: reaching the correct solution (i.e., abduction as the best explanation) expresses abduction as involving both the generating and the deductive-inductive testing phases. In the second case, when a person solves a problem directly, the deductive-inductive phase of my complete cycle is missed out and abduction as inference to the best explanation is accomplished without any testing phase. Moreover, in order to interpret correctly the notion of "best explanation", it is necessary to emphasize, as Simon does, "that human problem solvers and computer programs that simulate them do not search for the `best' solution, but for a solution that is `good enough' by some criterion" (ibid., pp. 280-1). The best solution always has to satisfy contextual criteria.

6. The need for an epistemological architecture of medical KBSs

Abduction is becoming an increasingly popular term in AI (Pople, 1973, 1977, 1982; Reggia et al., 1983, 1984; Peng & Reggia, 1987a, 1987b; Sowa, 1984; Charniak & McDermott, 1985; Josephson et al., 1986; Johnson & Keravnou, 1988; Finin & Morris, 1989) especially in the field of medical KBSs.

Since the mid-'70s there has been widespread agreement among cognitive scientists that models of a problem-solving agent should incorporate knowledge about the world (ontological commitment) and some sort of abstract procedure (inferential commitment) for interpreting this knowledge in order to construct plans and take action. Going beyond the level of formalisms and programming tools, the concept of heuristic classification (Clancey, 1985), the distinction between deep and surface models (Steels, 1984), and the idea of generic tasks and task-specific architectures (Chandrasekaran, 1983) took advantage of the increasing epistemological interest in KBSs to elaborate some basic issues concerning reasoning.

This attitude prompted the exploitation of the epistemological model described above in order to design the general inferential behavior of a medical KBS (Stefanelli et al., 1988). In this case the designer interprets epistemologically the machine's "general" interaction with its own environment: if "viewing knowledge as relative to an observer/designer's perceptions of dynamic indexical-functional relations between an agent and its environment is indeed a major theoretical reconceptualization of the process of constructing intelligent agents" (Clancey, 1989, p. 21), the epistemic resources of the model are the best way of building an architecture having the relative steadiness of a shared, rational, and objective inferential method. Moreover, the model ought not to be judged by how faithfully it represents human processing unless that is its very point (Glymour, 1989, p. 365), but this does not mean that the way people seem to reason is not a matter of consideration when designing KBSs (Evans & Gadd, 1989).

The epistemological model needed to be made more complex with an ontological level dealing with the entities and relationships comprising the domain knowledge of the KBS. Different ontologies express diagnosis, therapy planning and monitoring, but to solve problems the three tasks can be carried out by a single inference procedure in terms of abduction, deduction and induction (see section 3, above). The KBS ontology that adequately and "deeply" represents knowledge, as it is organized in scientific medical theories (causal or taxonomic) (Simon, 1985; Kuipers, 1987; Milne, 1989), goes beyond first generation "shallow" KBSs that only mapped knowledge into pragmatic constructs derived from human experts - in the latter case the ontology was compiled in conjunction with the inference procedure, thereby becoming implicit (Chandrasekaran & Mittal, 1982). In this sense the new architectures combine a more principled knowledge of the domain with the simple heuristic knowledge that was the main type of knowledge exploited in first generation KBSs.

The need to represent various kinds of medical knowledge in a working KBS has specific consequences at the level of implementation. Ideally, the choice of a convenient methodological medium originating from various disciplines (AI, mathematics, statistics, probability theory, decision theory, logic, and so on) for representing ontologically real medical knowledge (and for creating the inference process) must be connected to the nature of that knowledge and to the cognitive task at hand. No single method or formalism is able to interpret all types of knowledge efficiently. Indeed, an ontology is typically embodied in intelligent computation with a certain inferential purpose in mind, and good inference under computational and environmental constraints often requires various methods of representation, many formalisms, and mixtures of both. Exploiting these methods may actually improve performance and allow the physician to explore the implications of certain observations or hypotheses, to predict the effects of actions, or to anticipate the effects of future data on his/her behavior.

The architecture of NEOANEMIA (for details on how the epistemological model works in the context of this KBS see Lanzola et al., 1990), a working diagnostic system for managing disorders causing anemia, follows the epistemological abduction-deduction-induction model (ST-MODEL) as a shared framework permitting the knowledge engineer and the physician to cooperate in the development of the system, thus overcoming the esoteric character of the underlying implementation. NEOANEMIA employs different representation formalisms and methods to achieve selective abduction and the deduction-induction cycle. Moreover, the inference procedure (control knowledge) and the ontology (domain knowledge) are explicitly and separately represented. While in the hypothesis generation phase NEOANEMIA exploits compiled heuristic pathways specified by an expert, a separate and explicit representation of the causal and taxonomic ontology is used in the deduction-induction phase. This ontology has been represented using a simple two-layer network (i.e. representing what clinical evidence may be expected for each disease) and QSIM (Kuipers, 1986, 1987; Ironi et al., in press) for representing available knowledge on pathophysiological system dynamics. A Therapy Advisor (TA) has been developed to plan therapies depending on the etiopathologic mechanisms causing a patient's anemia, as diagnosed by NEOANEMIA (for details see Quaglini et al., 1989 and 1990).
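The "simple two-layer network" mentioned above can be pictured as a bipartite mapping from diseases to the clinical evidence they may produce, which the deduction-induction phase consults to decide what still has to be verified. The sketch below only illustrates that idea; the disease and finding names are invented placeholders and do not reproduce NEOANEMIA's actual knowledge base.

```python
# Illustrative two-layer (disease -> expected clinical evidence) mapping.
# Entries are invented placeholders, not NEOANEMIA's knowledge base.

expected_evidence = {
    "disease A": {"finding 1", "finding 2"},
    "disease B": {"finding 2", "finding 3"},
}

def expectations(hypothesis):
    """Deduction phase: the clinical evidence expected if the disease is present."""
    return expected_evidence[hypothesis]

def unverified(hypothesis, observed):
    """Expected but not yet observed findings: candidates for new examinations."""
    return expectations(hypothesis) - observed

print(unverified("disease A", {"finding 1"}))   # {'finding 2'} remains to be checked
```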

It is important to remember the final considerations of section 5. In a diagnostic KBS selective abduction does not always imply uncertainty: sometimes in medical KBSs the selective abduction phase immediately provides the best hypothesis, that is the best explanation, because the selection has been very successful, to a certain degree simulating the efficacy of experts' forward reasoning. In this case the so-called evaluation-testing phase (deduction-induction) only provides the opportunity for an "explanation" of the abduced hypothesis, without performing any unnecessary discriminating moves. The explanation can thus exploit the basic medical knowledge (causal or taxonomic) in the KBS ontological models to make, for instance, communication between physicians or the processes of teaching and learning easier. In other cases, a medical KBS makes the transition to the deduction-induction phase in order to exploit this cycle and to reduce the set of previously abduced hypotheses, so as to reach the correct diagnostic conclusion. This does not reflect, as it does in the experimental research on expertise (see section 5), the weakness (i.e. inaccuracy) of low-level diagnostic performances (backward reasoning): instead the cycle reflects the application of a powerful knowledge base. Diagnostic KBS behavior in this case is exactly that of the epistemological model: it expresses inference to the best explanation involving some multi-dimensional criteria of evaluation and elimination of hypotheses; these criteria are produced at the computational level by suitable methods of representing the ontological and inferential commitments of the deduction-induction phase. As stated above, medical KBSs of this kind ought not to be judged by how faithfully they represent human processing, because that is not their point. According to my colleagues Stefanelli and Ramoni (this volume), these KBSs may be considered as "mental prostheses [I would add `rational and objective'] that help physicians with different skills and expertise in the management of patients. Just as telescopes are designed to extend the sensory capacity of humans, KBSs are designed to extend their cognitive capacity".

Finally it is important to note that there is a fruitful exchange between AI and epistemology. On the one hand, as described above, there is the need for an abductive epistemological architecture of medical KBSs; on the other hand, there are well-known AI systems that perform explicitly epistemological tasks, such as BACON (Langley et al., 1987), TETRAD (Glymour et al., 1987), and PI (Holland et al., 1987). For instance Thagard (1988), in his interesting book, illustrates four kinds of abduction that have been implemented in PI, a system devoted to explaining in computational terms the main problems of the traditional philosophy of science, such as scientific discovery, explanation, evaluation, etc. He distinguishes between simple, existential, rule-forming, and analogical abduction. Simple abduction generates hypotheses about individual objects, and is similar to my selective abduction. Existential abduction postulates the existence of previously unknown objects, such as new planets, as in Peirce's example about Kepler. Rule-forming abduction generates rules that explain laws. Analogical abduction uses past cases of hypothesis formation to construct hypotheses similar to existing ones. If the pure philosophical task is to state correct rules of reasoning in an abstract and objective way, the use of computer modeling may be a rare tool in these investigations because of its rational correctness. The increase in knowledge provided by this intellectual interaction is manifest.

This cooperation should prove very fruitful from an educational perspective too: reciprocally clarifying both philosophical and AI theories of reasoning will provide new and very interesting didactic tools.

7. Basic science reasoning and clinical reasoning intertwined: consequences for medical education

From an epistemological point of view (Schaffner, 1986), the biomedical sciences can be considered as a set of partially overlapping models (sometimes built at the cross-roads of several disciplines) of semi-independent phenomena dealing with prototypical cases. The role of generalizations here is to make explicit use of exemplars - identified by Kuhn as the accepted, prototypical problems that can be encountered both when learning a discipline and in discussions of its contemporary research - and to capture causal relations between them, whereas the role of generalizations in the physical sciences is to give abstract laws relating to several exemplars. In the clinical biomedical sciences exemplars also concern an individual's abnormalities: "This implies that an important, perhaps implicit, component of medical theory involves models of normative biomedical behavior. Since that, too, may be based on sets of exemplars, we see the possibility that clinical medicine, if a scientific theory, is a theory based on models of models - clearly not a straightforward product of axioms of biology" (Patel, Evans & Groen, 1989a, p. 56).

The Kuhnian concept of exemplars refers to the field of the growth of scientific knowledge and in this sense is related to the "anti-theoretical" emphasis on problem-solving performance: "Philosophers of science have not ordinarily discussed the problem encountered by a student in laboratories or in science texts [...] at the start and for some time more, doing problems is learning consequential things about nature. In the absence of such exemplars, the laws and theories he has previously learned would have little empirical content." (Kuhn, 1970, pp. 187-8).

In cognitive science this and similar types of postpositivistic objection to the formalistic excesses of the neopositivistic tradition in the philosophy of science are exploited to stress the relevance of the distinction between theories and their domains of application. The objection is used to emphasize the difference between established bodies of scientific knowledge and their processes of discovery and/or application and, in medical knowledge, between clinical reasoning (situated, concerned with attributes of people) and basic science reasoning (unsituated, concerned with attributes of entities such as organs, bacteria, or viruses).

There have been many experimental studies in cognitive psychology to elucidate the precise role of basic science in medical problem-solving, in order to determine "(1) to what extent basic science and clinical knowledge are complementary; (2) what basic science contributes to medical problem solving; and (3) whether basic science knowledge contributes to medical expertise" (Patel, Evans & Groen, 1989a, p. 62; see also Patel, Evans & Kaufman, in press; Groen & Patel, 1988). The AI ways of exploiting basic science resources in the ontological levels involved in the deduction-induction cycle of second generation medical KBSs are described above (section 6). This is the case with NEOANEMIA, but applies equally, though differently, to earlier medical KBSs such as CASNET, CADUCEUS (Pople, 1985) and ABEL (Patil, 1981).

The distinction between basic medical science (and reasoning) and clinical science (and reasoning) is also part of the general problem of medical education. It is interesting that conventional curricula (CC), where basic science courses are taught before clinical training, and problem-based learning curricula (PBL), where basic science is taught in the context of clinical problems and general heuristics are specifically taught, lead students, when they generate explanations, respectively to develop automatic abductions (forward reasoning) or to perform the whole deduction-induction cycle using relevant biomedical information (backward reasoning). The results of this cognitive research can be found in Patel, Groen and Norman, 1990; see also Patel, Evans and Groen, 1989b.

The aim here is to outline some basic philosophical issues that may help to clarify the problem of medical education, at least from a theoretical point of view. The problem of "teaching" science is a very old topic of philosophical reflection. Plato's Meno is a dialogue about whether virtue can be taught (Turner, 1989). The problem is related to the Meno paradox, stated by Plato in the dialogue and discussed by Simon in 1976 (Simon, 1977, pp. 338-341), and to the issue of "tacit knowledge" introduced by Polanyi (Polanyi, 1966). The slave boy in the dialogue is brought in to make a related point: "Socrates establishes a) that the boy cannot correctly answer the question (`cannot tell', in Polanyi's language), of how much larger the sides of a square with double the area of another square will be, and b) that the boy thinks he knows that if a square has twice the area the sides will also be doubled. He then leads the boy through a series of inferences, each of which the boy could `tell', at least could assent in response to Socrates' `questions' formulating those inferential steps, and that he could correctly multiply and add when asked" (Turner, 1989, p. 85). These queries lead the boy to the correct answer.

The story of Meno's slave can be looked at from the point of view of an epistemological argument about the paradoxical concept of "problem solving". Polanyi thinks the Meno story shows that if all knowledge is explicit, i.e., capable of being clearly stated, then we cannot know a problem or look for its solution. It also shows that if problems nevertheless exist, and discoveries can be made by solving them, we can know things that we cannot express: "to search for the solution of a problem is an absurdity; for either you know what you are looking for, and then there is no problem; or you do not know what you are looking for, and then you cannot expect to find anything" (Polanyi, 1966, p. 22).

Plato's solution of this epistemological impasse is the classic philosophical scenario of the doctrine of reminiscence: Socrates' teaching is in reality leading the slave to discover the knowledge he already possesses in his spirit. Simon provides a computational solution of the paradox in modern problem-solving terms: "our ability to know what we are looking for does not depend upon our having an effective procedure for finding it: we need only an effective procedure for testing candidates" (Simon, 1977, p. 339). If it is possible to have an effective procedure for testing, and an effective procedure for generating candidates, we will have a "problem", i.e. an unsolved problem, where we nevertheless "know what we are looking for" without actually possessing it. As Turner states, "In the case of Goldbach's conjecture, we can set up the following procedures: generate even numbers, generate numbers named by their prime factors, and make judgements of equality. The problem then can be defined as follows: `find a number k generated by the first procedure that does not belong to the numbers generated by the second procedure'. Thus the example fits the `general scheme for defining problem solutions prior to finding them'" (Turner, 1989, p. 86).
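
Simon's point can be made concrete with a short sketch. The following Python fragment is an illustration added here, not part of Simon's or Turner's texts, and all function names are hypothetical: the Goldbach problem is defined by an effective generating procedure (even numbers) and an effective testing procedure (decomposability into two primes), so the problem is fully specified even though nobody possesses its solution.

from itertools import count

def is_prime(n):
    """Effective, if slow, trial-division test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(n):
    """Effective test: can n be written as p + q with p and q prime?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample(limit):
    """Generate-and-test: look for an even k > 2 that fails the test.
    We 'know what we are looking for' without possessing it."""
    for k in count(4, 2):                 # procedure generating even numbers
        if k > limit:
            return None                   # none found up to the limit
        if not is_sum_of_two_primes(k):
            return k                      # a solution to the (open) problem

print(search_for_counterexample(10_000))  # prints None, as expected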

I digressed in order to illustrate a prototypical "cognitive" story, from its philosophical origins to its knowledge-engineering outcomes. Socrates teaches the slave some geometric matters in a problem-oriented fashion, not a theorematic one (but this is before Euclid's Elements). He shows the slave some inferential routines and subroutines (for recognizing numerical inconsistency or for calculating area, for instance) for generating and testing (in Simon's terms) that enable him to self-program (or "learn") and solve the problem, thus coming to know new geometric notions.
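
By way of illustration (this rendering is mine, not in the dialogue or in Turner's discussion, and its names are hypothetical), the slave boy's task can itself be cast in generate-and-test form: an area subroutine tests candidate sides, rejecting the naive guess that doubling the area doubles the side, and a simple search converges on the diagonal.

def area(side):
    """Subroutine used for testing candidate sides."""
    return side * side

def side_of_doubled_square(side, tolerance=1e-9):
    """Generate-and-test by bisection: find the side whose square has twice
    the area of the given square (the boy's naive guess, 2 * side, overshoots)."""
    target = 2 * area(side)
    low, high = side, 2 * side
    while high - low > tolerance:
        mid = (low + high) / 2            # generate a candidate
        if area(mid) < target:            # test it against the target area
            low = mid
        else:
            high = mid
    return (low + high) / 2

print(side_of_doubled_square(2.0))   # about 2.828..., the length of the diagonal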

These observations delineate the centrality of the concept of problem solving in teaching and learning. There is no longer room for a philosophical doctrine of reminiscence: the new developments consist in benefiting from the recent rational clarifications of problem solving and of problem-oriented knowledge due to AI and cognitive science. The philosophical story above thus introduces the main methodological issues in medical education. In medical training the following ideas need to be emphasized and added to conventional curricula:

(1) the relevance of problem-oriented teaching and learning, as distinct from conventional basic-science-centred education, and the relations and interactions between the two, both in education itself and in reasoning performance;

(2) the need for an epistemological and logical (didactic) awareness of the main methodological topics (for instance, abduction) incorporated in reasoning for diagnosis, therapy planning and monitoring;

(3) the role of KBSs, tutoring systems (Clancey, 1986) and other technological products in allowing, for instance, students to browse ontologies that express stored basic medical knowledge and to see reasoning processes displayed separately and explicitly during computational problem-solving.

8. Conclusion

The concept of abduction is philosophically very powerful: I have sought to show in the previous pages its efficacy in unifying many intellectual areas devoted to the clarification of problem solving processes and medical reasoning. In my opinion these abductive schemes may form a forceful language capable of establishing a relatively solid and objective framework that increases the intelligibility of many cognitive phenomena.

Peirce's insight about the "inferential" virtues of abduction has proved very far-sighted. AI, logical and cognitive studies of problem solving processes have secured the philosophical centrality of abduction in present-day cultural, scientific and technological developments. Simon's observation that abduction "is the main subject of the theory of problem solving" has been borne out. This centrality attracts a cluster of related topics, from the logic of discovery to evaluation skills, from nonmonotonic logic to medical reasoning, from AI systems to the detective outlook in narrative contexts (Eco & Sebeok, 1983).

Medical reasoning is unified by the related notion of selective abduction, because this kind of reasoning explains and executes the three generic tasks of diagnosis, therapy planning, and monitoring, correctly establishing the level of the evaluation procedures and of the ontological medical complexity involved. The relevance of abduction ensures it a prominent role in the methodological aspects of medical education and practice. Moreover, abductive reasoning might serve as a flexible epistemological interface between other related notions (induction and deduction, best explanation, perception, forward and backward reasoning, defeasibility, discovery, and so on), all of which are involved in medical reasoning and, at the same time, are of great theoretical interest in general.

 

References

Anderson, D.R. (1987). Creativity and the Philosophy of Charles Sanders Peirce. Oxford: Clarendon Press.

Buchanan, B.G. (1985). Steps Toward Mechanizing Discovery. In K.F. Schaffner (ed.), Logic of Discovery and Diagnosis in Medicine (pp. 94-114). Berkeley and Los Angeles, CA: University of California Press.

Chandrasekaran, B. (1983). Towards a Taxonomy of Problem Solving Types. AI Magazine, 4, 9-17.

Chandrasekaran, B. & Mittal, S. (1982). Deep Versus Compiled Knowledge in Diagnostic Problem Solving. In Proceedings of the National Conference on Artificial Intelligence (pp. 349-354).

Charniak, E. & McDermott, D. (1985). Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley.

Clancey, W.J. (1985). Heuristic Classification. Artificial Intelligence, 27, 289-350.

Clancey, W.J. (1986). From GUIDON to NEOMYCIN and HERACLES in Twenty Short Lessons. (ONR Final Report 1979-1985). AI Magazine, 7(3), 40-60.

Clancey, W.J. (1989). The Frame of Reference Problem in the Design of Intelligent Machines. Draft. To appear in K. VanLehn & A. Newell (eds.), Architectures for Intelligence: The Twenty-Second Carnegie Symposium on Cognition. Hillsdale, NJ: Lawrence Erlbaum.

Doyle, J. (1988). Artificial Intelligence and Rational Self-Government. Technical Report No. CMU-CS-88-124, Computer Science Department, Carnegie Mellon University, Pittsburgh.

Doyle, J. (1989). Constructive Belief and Rational Representation. Computational Intelligence, 5, 1-11.

Eco, U. & Sebeok, T.A. (1983). The Sign of Three. Holmes, Dupin, Peirce. Bloomington, IN: Indiana University Press.

Evans, D.A. & Gadd, C.S. (1989). Managing Coherence and Context in Medical Problem-Solving Discourse. In D.A. Evans & V.L. Patel (eds.), Cognitive Science in Medicine. Biomedical Modeling (pp. 211-255). Cambridge, MA: The MIT Press.

Fetzer, J.H. (1990). Artificial Intelligence: Its Scope and Limits. Dordrecht: Kluwer Academic Publishers.

Finin, T. & Morris, G. (1989). Abductive Reasoning in Multiple Faults Diagnosis. Artificial Intelligence Review, 3, 129-158.

Glymour, C., Scheines, R., Spirtes, P. & Kelly, K. (1987). Discovering Causal Structure. San Diego, CA: Academic Press.

Glymour, C. (1989). When Less Is More. In D.A. Evans & V.L. Patel (eds.), Cognitive Science in Medicine. Biomedical Modeling. Cambridge, MA: The MIT Press.

Groen, G.J. & Patel, V.L. (1988). The Relationship Between Comprehension and Reasoning in Medical Expertise. In M.T.H. Chi, R. Glaser & M.J. Farr (eds.), The Nature of Expertise (pp. 287-310). Hillsdale, NJ: Lawrence Erlbaum.

Hanson, N.R. (1958). Patterns of Discovery. An Inquiry into the Conceptual Foundations of Science. Cambridge: Cambridge University Press.

Harman, G. (1986). Change in View. Principles of Reasoning. Cambridge, MA: The MIT Press.

Holland, J.H., Holyoak, K.J., Nisbett, R.E. & Thagard, P.R. (1987). Induction. Processes of Inference, Learning, and Discovery. Cambridge, MA: The MIT Press.

Hunt, E. (1989). Cognitive Science: Definition, Status, and Questions. Annual Review of Psychology, 40, 603-629.

Ironi, L., Stefanelli, M. & Lanzola, G. (in press). Qualitative Reasoning in Medical Diagnosis. Artificial Intelligence in Medicine.

Jackson, P. (1989). Propositional Abductive Logic. In Proceedings of the Seventh AISB (pp. 89-94).

Jackson, P. (1990). Abduction and Counterfactuals. In Working Notes: AAAI Spring Symposium on Automated Abduction (pp. 77-81). Stanford University.

Johnson, L. & Keravnou, E.T. (1988). Expert Systems Architectures. London: Kogan Page.

Josephson, J.R., Chandrasekaran, B., Smith, J.W. Jr & Tanner, M.C. (1986). Abduction by Classification and Assembly. In PSA 1986, vol. 1 (pp. 458-470). Philosophy of Science Association.

Konolige, K. (1990). Towards a General Theory of Abduction. In Working Notes: AAAI Spring Symposium on Automated Abduction (pp. 62-66). Stanford University.

Kuhn, T.S. (1970). The Structure of Scientific Revolutions (2nd Edition). Chicago, IL: University of Chicago Press.

Kuipers, B.J. (1986). Qualitative Simulation. Artificial Intelligence, 29, 280-338.

Kuipers, B.J. (1987). Qualitative Simulation as Causal Explanation. IEEE Transactions on Systems, Man, and Cybernetics, 17, 432-444.

Langley, P., Simon, H.A., Bradshaw, G.L. & Zytkow, J.M. (1987). Scientific Discovery. Computational Explorations of the Creative Processes. Cambridge, MA: The MIT Press.

Lanzola, G., Stefanelli, M., Barosi, G. & Magnani, L. (1990). NEOANEMIA: A Knowledge-Based System Emulating Diagnostic Reasoning. Computers and Biomedical Research, 23, 560-582.

Levesque, H.J. (1989). A Knowledge-Level Account of Abduction. In Proceedings of the Eleventh IJCAI (pp. 1061-1067). Los Altos, CA: Morgan Kaufmann.

Lukasiewicz, J. (1970). Creative Elements in Science [1912]. In J. Lukasiewicz, Selected Works (pp. 12-44). Amsterdam: North Holland.

Magnani, L. (1988). Epistémologie de l'invention scientifique. Communication & Cognition, 21, 273-291.

Milne, R. (1987). Strategies for Diagnosis. IEEE Transactions on Systems, Man, and Cybernetics, 17, 333-339.

Patel, V.L., Evans, D.A. & Groen, G.J. (1989a). Biomedical Knowledge and Clinical Reasoning. In D.A. Evans & V.L. Patel (eds.), Cognitive Science in Medicine. Biomedical Modeling (pp. 53-112). Cambridge, MA: The MIT Press.

Patel, V.L., Evans, D.A. & Groen, G.J. (1989b). Reconciling Basic Science and Clinical Reasoning. Teaching and Learning in Medicine, 1(3), 116-121.

Patel, V.L., Evans, D.A. & Kaufman, D.R. (1989). A Cognitive Framework for Doctor-Patient Interaction. In D.A. Evans & V.L. Patel (eds.), Cognitive Science in Medicine. Biomedical Modeling (pp. 257-312). Cambridge, MA: The MIT Press.

Patel, V.L., Evans, D.A. & Kaufman, D.R. (in press). Reasoning Strategies and the Use of Biomedical Knowledge by Medical Students. Medical Education.

Patel, V.L. & Groen, G.J. (in press). The General and Specific Nature of Medical Expertise: A Critical Look. In A. Ericsson & J. Smith (eds.), Study of Expertise: Prospects and Limits. Cambridge: Cambridge University Press.

Patel, V.L., Groen, G.J. & Norman, G.R. (1990). Reasoning and Instruction in Medical Curricula. Technical Report No. CME90-CS9, Centre for Medical Education, McGill University, Montreal.

Patil, R.S. (1981). Causal Understanding of Patient Illness for Electrolyte and Acid-Base Diagnosis. Technical Report MIT-CSL-TR-267, Computer Science Laboratory, Massachusetts Institute of Technology, Cambridge, MA.

Peirce, C.S. (1931-1958). Collected Papers, 8 vols. Edited by C. Hartshorne, P. Weiss & A. Burks. Cambridge, MA: Harvard University Press.

Peirce, C.S. (1955). Abduction and Induction. In Philosophical Writings of Peirce (pp. 150-156). New York, NY: Dover.

Peng, Y. & Reggia, J.A. (1987a). A Probabilistic Causal Model for Diagnostic Problem Solving I: Integrating Symbolic Causal Inference with Numeric Probabilistic Inference. IEEE Transactions on Systems, Man, and Cybernetics, 17, 146-162.

Peng, Y. & Reggia, J.A. (1987b). A Probabilistic Causal Model for Diagnostic Problem Solving II: Diagnostic Strategy. IEEE Transactions on Systems, Man, and Cybernetics, 17, 395-406.

Polanyi, M. (1966). The Tacit Dimension. Garden City, NY: Doubleday.

Poole, D. (1989). Explanation and Prediction: An Architecture for Default and Abductive Reasoning. Computational Intelligence, 5, 97-110.

Poole, D. & Rowen, G.M. (1990). What Is an Optimal Diagnosis? In Proceedings of the Sixth Conference on Uncertainty in AI (pp. 46-53).

Pople, H.E. (1973). On the Mechanization of Abductive Logic. In Proceedings of the International Joint Conference on Artificial Intelligence 8 (pp. 147-152).

Pople, H.E. (1977). The Formation of Composite Hypotheses in Diagnostic Problem Solving. In Proceedings of the Fifth IJCAI (pp. 1030-1037). Los Altos, CA: Morgan Kaufmann.

Pople, H.E. (1982). Heuristic Methods for Imposing Structure on Ill-Structured Problems: The Structuring of Medical Diagnostics. In P. Szolovits (ed.), Artificial Intelligence in Medicine (pp. 119-190). Boulder, CO: Westview Press.

Pople, H.E. (1985). Evolution of an Expert System: From INTERNIST to CADUCEUS. In I. De Lotto & M. Stefanelli (eds.), Artificial Intelligence in Medicine (pp. 179-208). Amsterdam: Elsevier Science Publishers.

Quaglini, S., Berzuini, C., Bellazzi, R., Stefanelli, M. & Barosi, G. (1989). Therapy Planning by Combining AI and Decision Theoretic Techniques. In Proceedings of the European Conference on Artificial Intelligence in Medicine (pp. 125-134).

Quaglini, S., Bellazzi, R., Berzuini, C., Stefanelli, M. & Barosi, G. (1990). A Control Blackboard Architecture for Therapy Planning. Technical Report No. RIDIS-53-90, Dipartimento di Informatica e Sistemistica, Università di Pavia, Pavia, Italy.

Ramoni, M. & Magnani, L. (1989). Una teoria formale del ragionamento diagnostico. In Atti del Primo Congresso della Associazione Italiana per l'Intelligenza Artificiale (pp. 267-273). Genoa, Italy: Cenfor.

Ramoni, M., Stefanelli, M., Magnani, L. & Barosi, G. (1990). An Epistemological Framework for Medical Knowledge Based Systems. Technical Report No. RIDIS-41-90, Dipartimento di Informatica e Sistemistica, Università di Pavia, Pavia, Italy.

Reggia, J.A., Nau, D.S. & Wang, P.Y. (1983). Expert Systems Based on Set Covering Model. International Journal of Man-Machine Studies, 19, 443-460.

Reggia, J.A. & Nau, D.S. (1984). An Abductive Non Monotonic Logic. In Proceedings of the Workshop on Non-Monotonic Reasoning (pp. 385-385).

Reiter, R. (1987). A Theory of Diagnosis from First Principles. Artificial Intelligence, 32, 57-95.

Schaffner, K.F. (1986). Exemplar Reasoning about Biological Models and Diseases: A Relation between the Philosophy of Medicine and Philosophy of Science. Journal of Medicine & Philosophy, 11, 63-80.

Simon, H.A. (1965). The Logic of Rational Decision. British Journal for the Philosophy of Science, 16, 169-186. Reprinted in H.A. Simon (1977) (pp. 137-153).

Simon, H.A. (1966). Thinking by Computers. In R. Colodny (ed.), Mind and Cosmos (pp. 2-21). Pittsburgh, PA: Pittsburgh University Press. Reprinted in H.A. Simon (1977) (pp. 268-285).

Simon, H.A. (1976). The Meno Paradox. Philosophy of Science, 43, 147-151. Reprinted in H.A. Simon (1977) (pp. 338-341).

Simon, H.A. (1977). Models of Discovery and Other Topics in the Methods of Science. Dordrecht: Reidel.

Simon, H.A. (1985). Artificial-Intelligence Approaches to Problem Solving and Clinical Diagnosis. In K.F. Schaffner (ed.), Logic of Discovery and Diagnosis in Medicine (pp. 72-93). Berkeley and Los Angeles, CA: University of California Press.

Sowa, J.F. (1984). Conceptual Structures. Information Processing in Mind and Machine. Reading, MA: Addison-Wesley.

Steels, L. (1984). Second Generation Expert Systems. Journal on Future Generation Computer, 1, 213-237.

Stefanelli, M., Lanzola, G., Barosi, G. & Magnani, L. (1988). Modelling of Diagnostic Reasoning. In C. Cobelli & L. Mariani (eds.), Modeling and Control in Biomedical Systems (pp. 163-174). Oxford: Pergamon. Preprints.

Stefanelli, M. & Ramoni, M. (this volume). Epistemological Constraints on Medical Knowledge Based Systems.

Thagard, P. (1988). Computational Philosophy of Science. Cambridge, MA: The MIT Press.

Turner, S.P. (1989). Tacit Knowledge and the Problem of Computer Modelling Cognitive Processes in Science. In S. Fuller, M. De Mey, T. Shinn & S. Woolgar (eds.), The Cognitive Turn. Sociological and Psychological Perspectives on Science (pp. 83-94). Dordrecht: Kluwer Academic Publishers.