INCONSISTENCIES AND CREATIVE ABDUCTION IN SCIENCE

Lorenzo Magnani

Department of Philosophy; University of Pavia; I - 27100 Pavia; Italy

Philosophy, Science, and Technology Program; Georgia Institute of Technology; Atlanta; USA

lorenzo@philos.unipv.it

Abstract

The paper illustrates abductive reasoning and its formal models in order to classify and analyze the different roles played by inconsistencies in different reasoning tasks and in scientific discovery. The aim is to identify aspects of inconsistencies not covered by certain formalisms and to suggest extensions to present thinking, but also to delineate the first features of a broader constructive formal framework able to include abduction and to provide constructive solutions to some of the limitations of its formal models. Many ways of "governing" inconsistencies are considered: from the methods activated in diagnostic settings and consistency-based models to the typical ones embedded in some forms of creative reasoning, from the interpretations in terms of conflicts and competitions to the actions performed on empirical and conceptual anomalies, from the question of generating inconsistencies by radical innovation to the connectionist treatment of coherence. The conclusions presented here aim to represent a step forward in the understanding of the use of inconsistencies in scientific creativity.

1 Abduction: creative, selective, model-based, to the best explanation

This section illustrates abductive reasoning in order to summarize some basic epistemological and cognitive results I have analyzed elsewhere (Magnani, 1988, 1992, 1999a; Ramoni et al., 1992).

In different theoretical changes we witness different kinds of discovery processes operating. Discovery methods are data-driven (generalizations from observation and from experiments), explanation-driven (abductive), and coherence-driven (formed to overcome contradictions). Sometimes there is a mixture of such methods: for example, a hypothesis devised to overcome a contradiction is found by abduction. Therefore, contradiction and its reconciliation play an important role in philosophy, in scientific theories, and in all kinds of problem solving. Contradiction is the driving force underlying change (thesis, antithesis, and synthesis) in the Hegelian dialectic and the main tool for advancing knowledge (conjectures and refutations, proofs and counter-examples) in the Popperian philosophy of science and mathematics. When a contradiction arises, consistency can be restored by rejecting or modifying any assumption that contributes to the derivation of the contradiction: no hypothesis is immune from possible alteration. Hence, the derivation of an inconsistency contributes to the search for alternative, and possibly new, hypotheses.

Many reasoning processes that do not proceed in a deductive manner are abductive. Hence, abduction is the process of inferring certain facts and/or laws that render some sentences plausible, that explain some phenomenon or observation; it is the process of reasoning in which explanatory hypotheses are formed and evaluated. I have developed with others an epistemological model of medical reasoning, called the Select and Test Model (ST-MODEL), which can be described in terms of the classical notions of abduction, deduction, and induction: it describes the different roles played by such basic inference types in developing various kinds of medical reasoning (diagnosis, therapy planning, monitoring), but it can also be regarded as an illustration of scientific theory change. There are two main epistemological meanings of the word abduction: 1) abduction that only generates plausible hypotheses (selective or creative) and 2) abduction considered as inference to the best explanation, which also evaluates hypotheses. All we can expect of our "selective" abduction is that it tends to produce hypotheses that have some chance of turning out to be the best explanation. Selective abduction will always produce hypotheses that give at least a partial explanation and therefore have a small amount of initial plausibility. In the view of abduction as inference to the best explanation advocated by Peirce, one might require that the final chosen explanation be the most plausible.
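The two meanings can be contrasted in a toy computational sketch (hypothesis and evidence names are invented for illustration): selective abduction keeps every candidate that explains at least part of the evidence, while inference to the best explanation ranks the candidates and picks one.

```python
# Toy evidence and candidate hypotheses (hypothetical names), each
# hypothesis mapped to the subset of the evidence it would explain.
evidence = {"e1", "e2", "e3"}
candidates = {"h1": {"e1"}, "h2": {"e1", "e2"}, "h3": {"e1", "e2", "e3"}}

def selective(cands):
    """Selective abduction: keep every hypothesis that gives
    at least a partial explanation of the evidence."""
    return [h for h, ev in cands.items() if ev & evidence]

def best_explanation(cands):
    """Inference to the best explanation: rank hypotheses by
    how much of the evidence they cover and pick the top one."""
    return max(cands, key=lambda h: len(cands[h] & evidence))
```

Here all three hypotheses survive selection, since each explains at least part of the evidence, but only `h3` is returned as the best explanation; real plausibility judgments of course involve far more than mere coverage.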

We should remember, as Peirce noted, that abduction plays a role even in relatively simple visual phenomena. Visual abduction, a special form of abduction, occurs when hypotheses are instantly derived from a stored series of previous similar experiences. It covers a mental procedure that tapers into a non-inferential one, and falls into the category called "perception". Philosophically, perception is viewed by Peirce as a fast and uncontrolled knowledge-production procedure. Perception, in fact, is a vehicle for the instantaneous retrieval of knowledge that was previously structured in our mind through inferential processes. By perception, knowledge constructions are so instantly reorganized that they become habitual and diffuse and do not need any further testing. Many visual stimuli are ambiguous, yet people are adept at imposing order on them: "We readily form such hypotheses as that an obscurely seen face belongs to a friend of ours, because we can thereby explain what has been observed". This kind of image-based hypothesis formation can be considered as a form of visual abduction (Magnani et al., 1992).

We have to say that visual and analogical reasoning are productive in scientific concept formation too; scientific concepts do not pop out of heads, but are elaborated in a problem solving process that involves the application of various procedures: this process is a reasoned process. Visual abduction, but also many kinds of abductions involving analogies, diagrams, thought experimenting, etc., can be called model-based.

Following Nersessian, I use the term "model-based reasoning" to indicate the construction and manipulation of various kinds of representations, not necessarily sentential and/or formal. Obvious examples of model-based reasoning are constructing and manipulating visual representations, thought experiment, analogical reasoning, but also the so-called "tunnel effect" (Cornuéjols et al., in press), occurring when models are built at the intersection of some operational interpretation domain - with its interpretation capabilities - and a new ill-known domain.

In the following section many ways of "governing" inconsistencies are considered: from the methods activated in diagnostic settings and consistency-based models to the typical ones embedded in some forms of creative reasoning, from the interpretations in terms of conflicts and competitions to the actions performed on empirical and conceptual anomalies, from the question of generating inconsistencies by radical innovation to the connectionist treatment of coherence. The conclusions presented here aim to represent a step forward in the understanding of the use of inconsistencies in scientific creativity.

2 Inconsistencies and creative abduction in science

Many attempts have been made to model abduction by developing some formal tools in order to illustrate its computational properties and the relationships with the different forms of deductive reasoning (see, for example, Bylander et al., 1991). Some of the more recent formal models of abductive reasoning are based on the theory of the epistemic state of an agent (Boutilier and Becher, 1995), where the epistemic state of an individual is modeled as a consistent set of beliefs that can change by expansion and contraction (belief revision framework). I will discuss the nature of the kinds of inconsistencies captured by these formalisms and show how they do not adequately account for some roles played by anomalies, conflicts, and contradictions in many forms of explanatory reasoning.

The idea of consistency that underlies some of the more recent deductive consistency-based models of selective abduction (diagnostic reasoning) is the following: any inconsistency (anomalous observation) refers to an aberrant behavior that can usually be accounted for by finding some set of components of a system that, if behaving abnormally, will entail or justify the actual observation. The observation is anomalous because it contradicts the expectation that the system involved is working according to specification. These types of deductive models go beyond the mere treatment of selective abduction in terms of preferred explanations and include the role of those components whose abnormality makes the observation (no longer anomalous) consistent with the description of the system (Boutilier and Becher, 1995; Magnani, 1999a).
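This consistency-based idea can be sketched as follows (a brute-force toy with invented component behavior, not Reiter's full hitting-set algorithm): a set of components is a diagnosis when assuming them abnormal makes the observation consistent with the system description, and only subset-minimal such sets are kept.

```python
from itertools import combinations

COMPONENTS = ("c1", "c2", "c3")

def predict(abnormal, x):
    """Propagate input x through the components in series; each normal
    component doubles its input. An abnormal component's behavior is
    unconstrained, so no prediction (and hence no conflict) results."""
    for comp in COMPONENTS:
        if comp in abnormal:
            return None
        x = 2 * x
    return x

def minimal_diagnoses(x, observed):
    """Subset-minimal sets of components whose abnormality makes the
    observation consistent with the system description."""
    diagnoses = []
    for size in range(len(COMPONENTS) + 1):
        for ab in combinations(COMPONENTS, size):
            pred = predict(set(ab), x)
            consistent = pred is None or pred == observed
            # keep only subset-minimal candidate sets
            if consistent and not any(set(d) <= set(ab) for d in diagnoses):
                diagnoses.append(ab)
    return diagnoses
```

With input 1 and the expected output 8, the empty diagnosis suffices (nothing is abnormal); with an anomalous observation such as 10, each single component becomes a candidate culprit.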

The changes of state have to be epistemologically conservative: the agent maintains as many of its beliefs as possible when it adjusts its beliefs to the new information, thus following Quine’s idea of "minimum mutilation" (Quine, 1979).

We now have to note some limitations of the formal models in accounting for other kinds of inconsistencies embedded in many reasoning tasks: if we want to deal with the nomological and most interesting creative aspects of abduction, we are first of all compelled to consider the whole field of the growth of scientific knowledge cited above. We may also see belief change from the point of view of conceptual change, considering concepts either cognitively, as mental structures analogous to data structures in computers, or epistemologically, as abstractions or representations that presuppose questions of justification.

Let us consider concepts as composite structures akin to frames of the following sort:

CONCEPT:
    A kind of:
    Subkinds:
    A part of:
    Parts:
    Synonyms:
    Antonyms:
    Rules:
    Instances:

It is important to emphasize (1) kind and part-whole relations that institute hierarchies, and (2) rules that express factual information more complex than simple slots. To understand the cases of conceptual revolutions we need to illustrate how concepts can fit together into conceptual systems and what is involved in the replacement of such systems. Conceptual systems can be viewed as ordered into kind-hierarchies and linked to each other by rules. Belief revision is able to represent cases of conceptual change such as adding a new instance, adding a new weak rule, adding a new strong rule (Thagard, 1992), that is, cases of addition and deletion of beliefs, but fails to take into account cases such as adding a new part-relation, adding a new kind-relation, adding a new concept, collapsing part of a kind-hierarchy, reorganizing hierarchies by branch jumping and tree switching, in which there are reorganizations of concepts or redefinitions of the nature of a hierarchy.
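The frame above maps naturally onto a data structure. The following sketch (with invented slot contents) encodes a concept frame and illustrates why branch jumping is more than mere belief revision: it rewrites a kind-relation rather than adding or deleting a single belief.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A concept frame with the slots listed above."""
    name: str
    a_kind_of: list = field(default_factory=list)
    subkinds: list = field(default_factory=list)
    a_part_of: list = field(default_factory=list)
    parts: list = field(default_factory=list)
    synonyms: list = field(default_factory=list)
    antonyms: list = field(default_factory=list)
    rules: list = field(default_factory=list)
    instances: list = field(default_factory=list)

# Branch jumping (Copernican revolution): the earth is recategorized
# from a special central body to a kind of planet. The change replaces
# a kind-relation instead of merely adding or deleting a belief.
earth = Concept("earth", a_kind_of=["central body"])
earth.a_kind_of = ["planet"]
```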

Adding new part-relations occurs when in the part-hierarchy new parts are discovered: an example is given by the introduction of new molecules, atoms, and subatomic particles. Thomson’s discovery that the "indivisible" atom contains electrons was very sensational.

Adding new kind-relations occurs when a new superordinate kind is added that combines two or more things previously taken to be distinct. In the nineteenth century scientists recognized that electricity and magnetism were the same and constructed the new concept of electromagnetism. Another case is shown by differentiation, that is, the making of a new distinction that generates two kinds of things (heat and temperature were considered the same until Black’s intervention).

The last three types of conceptual change can be illustrated by the following examples. The Newtonian abandonment of the Aristotelian distinction between natural and unnatural motion exemplifies the collapse of part of the kind-hierarchy. Branch jumping occurred when the Copernican revolution involved the recategorization of the earth as a kind of planet, when previously it had been considered special, but also when Darwin reclassified humans as a kind of animal. Finally, we have to say that Darwin not only reclassified humans as animals, he modified the meaning of the classification itself. This is a case of hierarchical tree redefinition: "Whereas before Darwin kind was a notion primarily of similarity, his theory made it a historical notion: being of common descent becomes at least as important to being in the same kind as surface similarity. Einstein’s theory of relativity changed the nature of part-relations, by substituting ideas of space-time for everyday notions of space and time" (Thagard, 1992, p. 36).

These last cases are the most evident changes occurring in many kinds of creative reasoning, for example in science, when adopting a new conceptual system is more complex than mere belief revision. Related to some of these types of conceptual change are different varieties of inconsistencies.

2.1 Finding inconsistencies: empirical and conceptual anomalies

It may be said that logical accounts of abduction certainly illustrate much of what is important in abductive reasoning, especially the objective of selecting a set of hypotheses (diagnoses, causes) that are able to provide good (preferred) explanations of data (observations), but fail in accounting for many cases of explanation occurring in science or in everyday reasoning. For example, they do not capture: 1. the role of statistical explanations, where what is explained follows only probabilistically and not deductively from the laws and other tools that do the explaining; 2. the sufficient conditions for explanation; 3. the fact that sometimes the explanations consist of the application of schemas that fit a phenomenon into a pattern without constituting a deductive inference; 4. the idea of the existence of high-level kinds of creative abduction I cited above; 5. the existence of model-based abductions (for instance visual and diagrammatic); 6. the fact that explanations usually are not complete but only furnish partial accounts of the pertinent evidence.

Moreover, the logical accounts of abduction certainly elucidate many kinds of inconsistency government, which nevertheless reduce to the act of finding contradictions able to generate the withdrawal of some hypotheses, beliefs, reasons, etc.: these contradictions always emerge at the level of data (observations), and consistency is restored at the theoretical level. This view may distract from important aspects of other kinds of reasoning that involve intelligent abductive performances. Empirical anomalies result from data that cannot currently be fully explained by a theory. They often derive from predictions that fail, which implies some element of incorrectness in the theory. In many machine discovery programs failed predictions drive the mechanism which selects new experiments to guess new hypotheses (Zytkow, 1997). In general terms, many theoretical constituents may be involved in accounting for a given domain item (anomaly) and hence they are potential points for modification. The detection of these points involves defining which theoretical constituents are employed in the explanation of the anomaly. Thus, the problem is to investigate all the relationships in the explanatory area.

Empirical anomalies are not alone in generating impasses; there are also the so-called conceptual anomalies. Conceptual problems represent a particular form of anomaly. In addition, resolving conceptual problems may involve satisfactorily answering questions about the nature of theoretical entities. Nevertheless, such conceptual problems do not arise directly from data, but from the nature of the claims in the principles or in the hypotheses of the theory. It is far from simple to identify a conceptual problem that requires a resolution, since a conceptual problem may concern the adequacy or the ambiguity of a theory, as well as its incompleteness or lack of evidence.

The formal sciences are especially concerned with conceptual problems. Let’s consider an example deriving from the well-known case of the non-Euclidean revolution, which plays a remarkable role in illustrating some actual transformations in rational conceptual systems. The discovery of non-Euclidean geometries involves some interesting cases of visual abductive reasoning. It demonstrates a kind of visual abduction, as a strategy for anomaly resolution related to a form of explanatory and productive visual thinking.

Since ancient times the fifth postulate has been regarded as not evident. This "conceptual problem" (precisely an anomaly) has caused much suspicion about the reliability of the whole theory of parallels, consisting of the theorems that can only be derived with the help of the fifth postulate. The recognition of this anomaly was fundamental to the development of the great non-Euclidean revolution. Two thousand years of attempts to resolve the anomaly have generated many more-or-less fallacious demonstrations of the fifth postulate (for example, a typical attempt was that of trying to prove the fifth postulate from the others), until the discovery of non-Euclidean geometries (Greenberg, 1980).

I have presented elsewhere (Magnani, 1999b) some examples derived from the historical discovery of non-Euclidean geometries which illustrate the relationships between strategies for anomaly resolution and explanatory and productive visual thinking: I consider how Lobachevsky’s strategy for resolving the anomaly of the fifth postulate was to manipulate the symbols, rebuild the principles, and then derive new proofs and provide a new mathematical apparatus. The failure of the demonstrations of his predecessors induced Lobachevsky to believe that the difficulties that had to be overcome were due to causes other than those which had until then been focussed on. I have shown how some of the hypotheses created by Lobachevsky were mostly image-based, trying to demonstrate that visual abduction is relevant to hypothesis formation and scientific discovery.

The fact that inconsistencies may also occur at the theoretical level is further emphasized if we consider that in science or in legal reasoning, hypotheses are mainly layered, contrary to the case of diagnostic reasoning, where we have a set of data that can be explained by a given set of diseases (that is, with the explanation consisting of a mapping from the latter to the former). Hence, the organization of hypotheses is more complex than the one illustrated in formal models, and abduction is not only a matter of mapping from sets of hypotheses to a set of data.

In many abductive settings there are hypotheses that explain other hypotheses, so that the selection or creation of explanations is related to these relationships. In this case the plausibility of a hypothesis comes not only from what it explains, but also from its being explained in turn. The Darwinian hypothesis stating that "Species of organic beings have evolved" gains plausibility from the many pieces of evidence it helps to explain. Moreover, it receives plausibility from above, from being explained by the hypothesis of natural selection, in its turn explained by the hypothesis concerning the struggle for existence. The principle of special relativity and the principle of the constancy of the speed of light explain (in this case the explanatory relation is "deductive") the Lorentz transformation, which explains the negative result of the Michelson-Morley experiment; they also explain the convertibility of mass and energy, which explains the nuclear transmutations detected by Rutherford in 1919. Hence the two principles explain the two experiments above by means of the intermediate layered hypotheses of the Lorentz transformation and mass/energy conversion, but we also know the two principles directly explain the Fizeau experiment concerning the speed of light in a flowing fluid (Einstein, 1961).
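The layered structure of the relativity example can be rendered as a small explanatory graph (a sketch; the labels are shorthand for the hypotheses and experiments just mentioned):

```python
# Each hypothesis maps to the hypotheses or experimental results
# it directly explains, reproducing the layering described above.
explains = {
    "two principles": ["Lorentz transformation",
                       "mass/energy convertibility",
                       "Fizeau experiment"],
    "Lorentz transformation": ["Michelson-Morley result"],
    "mass/energy convertibility": ["Rutherford transmutations"],
}

def explained_by(h):
    """Everything h explains, directly or through intermediate layers."""
    out = []
    for e in explains.get(h, []):
        out.append(e)
        out.extend(explained_by(e))
    return out
```

The two principles thus explain the Michelson-Morley and Rutherford results only through the intermediate hypotheses, while they explain the Fizeau experiment directly; a flat mapping from hypotheses to data would lose this layering.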

In some machine discovery programs the question of layered hypotheses could be related to the one of postulating hidden structures where some hidden hypotheses can trigger discovery of other hypotheses at a higher level.

2.2 Generating inconsistencies by radical innovation

Cases of conceptual change such as adding a new part-relation, adding a new kind-relation, adding a new concept, collapsing part of a kind-hierarchy, and reorganizing hierarchies by branch jumping and tree switching, in which there are reorganizations of concepts or redefinitions of the nature of a hierarchy, are the most evident changes occurring in many kinds of creative reasoning, for instance in the growth of scientific knowledge.

When a scientist introduces a new hypothesis, especially in the field of natural sciences, he is interested in the potential rejection of an old theory or of an old knowledge domain. Consistency requirements in the framework of deductive models, governing hypothesis withdrawal in various ways, would arrest further developments of the new abduced hypothesis. In the scientist’s case there is not the deletion of the old concepts, but rather the coexistence of two rival and competing views.

Consequently we have to consider this competition as a form of epistemological, and not logical, inconsistency. For instance, two scientific theories are conflicting because they compete in explaining shared evidence.

The problem has been studied in Bayesian terms but also in connectionist ones, using the so-called theory of explanatory coherence (Thagard, 1992), which deals with the epistemological (but sometimes pragmatic) reasons for accepting a whole set of explanatory hypotheses conflicting with another one. In some cognitive settings, such as the task of comparing a set of hypotheses and beliefs incorporated in a scientific theory with that of a competing theory, we have to consider a very complex set of criteria (to ascertain which constitutes the best explanation) that goes beyond mere simplicity or explanatory power. The minimality criteria included in some of the formal accounts of abduction, or the idea of the choice among preferred models cited above, are not sufficient to illustrate more complicated cognitive situations.

2.3 Maintaining inconsistencies

As noted above, when we create or produce a new concept or belief that competes with another one, we are compelled to maintain the derived inconsistency until the possibility of rejecting one of the two becomes feasible. We cannot simply eliminate a hypothesis and then substitute it with one inconsistent with it, because until the new hypothesis comes into competition with the old one, there is no reason to eliminate the old one. Other cognitive and epistemological situations present a sort of paraconsistent behavior: a typical kind of inconsistency maintenance is the well-known case of scientific theories that face anomalies. As noted above, explanations are usually not complete but only furnish partial accounts of the pertinent evidence: not everything has to be explained.

Newtonian mechanics is forced to cohabit with the anomaly of the perihelion of Mercury until the development of the theory of relativity, but it also has to coexist with its false prediction about the motion of Uranus. In diagnostic reasoning too, it is necessary to make a diagnosis even if many symptoms are not explained or remain mysterious. In this situation we again find the similarity, already stressed, between reasoning in the presence of inconsistencies and reasoning with incomplete information. Sometimes scientists may generate so-called auxiliary hypotheses (Lakatos, 1970), justified by the necessity of overcoming these kinds of inconsistencies: it is well known that auxiliary hypotheses are more acceptable if they are able to predict or explain something new (the hypothesis of the existence of another planet, Neptune, was a successful way - not an ad hoc manoeuvre - of eliminating the anomaly of the cited false prediction).

To delineate the first features of a constructive formal framework that can handle the coexistence of inconsistent theories (and unify many of the themes concerning the limitations of formal models of abduction previously illustrated), we first of all have to be able to deal with the treatment of non-verbal representations (that is, model-based representations; see above, Sect. 1).

Moreover, I think that the problem of the coexistence of inconsistent scientific theories and of reasoning from inconsistencies in scientific creative processes leads us to analyze the character of what I call the best possible information of a situation. It is also necessary to distinguish between the dynamic and the static sides of the best possible information. If we stress the sequential (dynamic) aspects, we are more oriented to analyze anomalies as triggers for hypotheses: as illustrated by the traditional deductive models of abduction (see above), the problem concerns the abductive steps of the sequential comprehension and integration of data into a hypothetical structure that represents the best explanation for them. If, on the contrary, we consider the holistic (static) aspects, we are more interested in the coexistence of inconsistencies as potential sources of different reasoned creative processes. In this last case we have to deal with model-based abduction and its possible formal treatment; I plan to derive some suggestions from the area of paraconsistent and adaptive logic (Meheus, in press), for instance handling hierarchies of inconsistent models of a given representation.

When the holistic representation concerns the relationship between two competing theories containing some inconsistencies, a formal framework can be given by the connectionist tradition using a computational reconstruction of the epistemological concept of coherence, as already stated (see also the following section).

2.4 Contradicting, conflicting, failing

Considering the coherence of a conceptual system as a matter of the simultaneous satisfaction of a set of positive and negative constraints leads to the connectionist models (also in computational terms) of coherence. In this light logical inconsistency becomes a relation that furnishes a negative constraint and entailment becomes a relation that provides a positive constraint. For example, as already noted, some hypotheses are inconsistent when they simply compete, when there are some pragmatic incompatibility relations, when there are incompatible ways of combining images, etc. (Thagard and Shelley, 1997; Thagard and Verbeurgt, 1998).
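A minimal brute-force sketch of this view of coherence follows (after Thagard and Verbeurgt's formulation, though their algorithms are connectionist or approximate rather than exhaustive): accept or reject each element so as to satisfy as many constraints as possible, where a positive constraint wants both elements to share a status and a negative constraint wants them to differ.

```python
from itertools import product

def maximize_coherence(elements, positive, negative):
    """Exhaustively search accept/reject assignments and return the one
    satisfying the most constraints (feasible only for small sets)."""
    best, best_score = None, -1
    for bits in product([True, False], repeat=len(elements)):
        status = dict(zip(elements, bits))
        score = (sum(status[i] == status[j] for i, j in positive)
                 + sum(status[i] != status[j] for i, j in negative))
        if score > best_score:
            best, best_score = status, score
    return best, best_score

# Two rival hypotheses competing over shared evidence E: each would
# explain E (positive constraints) but they are incompatible (negative).
status, score = maximize_coherence(
    ["H1", "H2", "E"],
    positive=[("H1", "E"), ("H2", "E")],
    negative=[("H1", "H2")],
)
```

In this toy example at most two of the three constraints can be satisfied at once, so even the maximally coherent assignment leaves one constraint violated.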

The connectionist model of coherence naturally accommodates situations in which a set of accepted concepts contains an inconsistency, for example in the case of anomalies: the system at hand may nonetheless have a maximized coherence when compared to another system. Moreover, an interesting case is the relation between quantum theory and general relativity, two theories which individually have enormous explanatory coherence. The basic problem in modern physics is that these two theories are incompatible, but it would be folly, given their independent evidential support, to suppose that one must be rejected.

Contradiction favors strong hypotheses, which are more easily falsified than weak ones; moreover, strong hypotheses may be more easily weakened than weak ones, which prove difficult subsequently to strengthen. But hypotheses may be unfalsifiable. In this case it is impossible to find a contradiction from the empirical point of view, and also from the theoretical point of view, in some area of the conceptual systems in which they are incorporated. Notwithstanding this fact, it is sometimes necessary to construct ways of rejecting the unfalsifiable hypothesis at hand by resorting to some external forms of negation, external because we want to avoid any arbitrary and subjective elimination, which would be rationally or epistemologically unjustified. Elsewhere (Magnani, 1999a) I have considered a kind of "weak" hypothesis that is hard to negate and the ways for making it easy to negate. In these cases the subject can rationally decide to withdraw his hypotheses even in contexts where it is impossible to find "explicit" contradictions; moreover, thanks to the new information reached simply by finding this kind of negation, the subject is free to abduce new hypotheses. I have explored whether negation as failure can be employed to model hypothesis withdrawal in Freudian analytic reasoning and in Poincaré’s conventionalism of the principles of physics, showing how conventions can be rationally abandoned.
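Negation as failure can be sketched in a few lines (a toy meta-interpreter over invented facts and rules, in the spirit of logic programming): a hypothesis counts as negated when every attempt to derive it fails, not because its falsity is explicitly provable.

```python
# Invented knowledge base: a set of facts plus one rule whose body
# must be fully provable for its head to be derived.
facts = {"anomaly_explained", "prediction_confirmed"}
rules = {"hypothesis_supported": [{"anomaly_explained", "novel_prediction"}]}

def provable(goal, depth=5):
    """Try to derive goal from the facts and rules (depth-bounded)."""
    if depth == 0:
        return False
    if goal in facts:
        return True
    return any(all(provable(g, depth - 1) for g in body)
               for body in rules.get(goal, []))

def naf(goal):
    """Negation as failure: not(goal) succeeds iff goal is unprovable."""
    return not provable(goal)
```

Since `novel_prediction` cannot be derived, `hypothesis_supported` fails and `naf` licenses its withdrawal: an external negation in the sense above, reached without finding any explicit contradiction.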

A situation that mirrors inconsistency maintenance (see Sect. 2.3) arises when two theories are not intertranslatable yet are observationally equivalent, as illustrated by the epistemology of the conventionalist tradition. In these cases they are untouched by inconsistencies (and therefore by crucial experiments: they are unfalsifiable) but have to be seen as rivals. The role of observational and formal-structural invariants in providing comparability is central: it is impossible to find a contradiction in some area of the conceptual systems they express. I think that it is necessary to study in general the reasons able to model the demise of such observationally equivalent "conventional" theories, showing how they can be motivationally abandoned. This problem has been frequently stressed in the area of automated discovery: if many hypothetical patterns are discovered, all justified by their observational consequences, what are the reasons to claim that one of them is the best? (Zytkow and Fischer, 1996).

Acknowledgements

Many thanks to Nancy J. Nersessian, Paul Thagard, and the anonymous referees for their helpful suggestions.

References

C. Boutilier and V. Becher. Abduction as belief revision. Artificial Intelligence, 77:43-94, 1995.

T. Bylander, D. Allemang, M. C. Tanner and J. R. Josephson. The computational complexity of abduction. Artificial Intelligence, 49:25-60, 1991.

A. Cornuéjols, A. Tiberghien and G. Collet. A new mechanism for transfer between conceptual domains in scientific discovery and education. In Philosophica, Special Issue "Model-based Reasoning in Scientific Discovery", ed. by L. Magnani et al., in press.

A. Einstein. Relativity: the Special and the General Theory. Crown, New York, 1961.

M. J. Greenberg. Euclidean and Non-Euclidean Geometries. Freeman and Company, New York, 1980.

I. Lakatos. Falsification and the methodology of scientific research programs. In I. Lakatos and A. Musgrave (eds.). Criticism and the Growth of Knowledge. Cambridge University Press, Cambridge, 1970, pp. 91-195.

L. Magnani. Epistémologie de l’invention scientifique. Communication & Cognition, 21:273-291, 1988.

L. Magnani. Abductive reasoning: philosophical and educational perspectives in medicine. In D. A. Evans and V. L. Patel (eds.). Advanced Models of Cognition for Medical Training and Practice. Springer, Berlin, 1992, pp. 21-41.

L. Magnani. Creative abduction and hypothesis withdrawal in science. In J. Meheus (ed.). Proceedings of International Congress on Discovery and Creativity (ICDC). Kluwer, Dordrecht, 1999a, in press.

L. Magnani. Visual abduction in mathematical discovery, Technical Report, Department of Philosophy, University of Pavia, 1999b.

L. Magnani, S. Civita and G. Previde Massara. Visual cognition and cognitive modeling. In V. Cantoni (ed.). Human and Machine Vision: Analogies and Divergences. Plenum Press, New York, pp. 229-243.

L. Magnani, N. J. Nersessian and P. Thagard (eds.). Model-based Reasoning in Scientific Discovery. Plenum Press, New York, in press.

J. Meheus. Model-based reasoning in creative processes. In L. Magnani et al. (eds.), in press.

N. J. Nersessian. Kuhn and the cognitive revolution. Configurations, 6:87-120, 1998.

N. J. Nersessian, T. W. Griffith and A. Goel. Constructive modeling in scientific discovery. Technical Report, Georgia Institute of Technology, Atlanta, GA, 1997.

W. V. O. Quine. Philosophy of Logic. Prentice-Hall, Englewood Cliffs, NJ, 1979.

M. Ramoni, M. Stefanelli, L. Magnani and G. Barosi. An epistemological framework for medical knowledge-based systems. IEEE Transactions on Systems, Man, and Cybernetics, 22(6):1361-1375, 1992.

P. Thagard. Conceptual Revolutions. Princeton University Press, Princeton, 1992.

P. Thagard and C. P. Shelley. Abductive reasoning: logic, visual thinking, and coherence. In M. L. Dalla Chiara, K. Doets, D. Mundici, J. van Benthem (eds.). Logic and Scientific Methods. Kluwer, Dordrecht, 1997, pp. 413-427.

P. Thagard and K. Verbeurgt. Coherence as constraint satisfaction. Cognitive Science, 22(1):1-24, 1998.

J. Zytkow. Creating a discoverer: autonomous knowledge seeking agent. In J. Zytkow, Machine Discovery. Kluwer, Dordrecht, 1997, pp. 253-283. Reprinted from Foundations of Science, 2:253-283, 1995/96.

J. Zytkow and P. Fischer. Incremental discovery of hidden structure: applications in theory of elementary particles. In Proceedings of AAAI-96. AAAI-Press, 1996, pp. 150-156.