7. Foundationalism versus Coherentism
- State University of New York Press
SYNOPSIS

• The most prominent and historically most influential model of cognitive systematization is the Euclidean model of a linear, deductive exfoliation from basic axioms.
• Such an approach leads to foundationalism.
• But the network model of cyclic systematization affords a prime alternative to this traditional axiomatic approach.
• These two different models of cognitive systematization give rise to two rival and substantially divergent epistemic programs for the authentication of knowledge, namely foundationalism and coherentism.
• The inherent difficulties and limitations of the foundationalist program indicate the advisability of a closer look at the coherentist approach.

HIERARCHICAL SYSTEMATIZATION: THE EUCLIDEAN MODEL OF KNOWLEDGE

The model of knowledge canonized by Aristotle in the Posterior Analytics saw Euclidean geometry as the most fitting pattern for the organization of anything deserving the name of a science (to put it anachronistically, since Euclid himself postdates Aristotle). Such a conception of knowledge in terms of the geometric paradigm views the organization of knowledge in the following terms. Certain theses are to be basic or foundational: like the axioms of geometry, they are used for the justification of other theses without themselves needing or receiving any intrasystematic justification. Apart from these fundamental postulates, however, every other thesis of the system is to receive justification of a rather definite sort. For every nonbasic thesis is to receive its justification along an essentially linear route of derivation or inference from the basic (axiomatic, unjustified) theses. There is a step-by-step recursive process, first of establishing certain theses by immediate derivation from the basic theses, and then of establishing further theses by sequential derivation from already established theses.
In the setting of the Euclidean model, every (nonbasic) established thesis is extracted from certain basic theses by a linear chain of sequential inferences. On this approach to cognitive systematization, one would, with J. H. Lambert, construe such a system on analogy with a building whose stones are laid, tier by successive tier, on the ultimate support of a secure foundation.1 Accordingly, the whole body of knowledge obtains a layered makeup reminiscent of geological stratification: a bedrock of basic theses surmounted by layer after layer of derived theses, some closer to and some further removed from the bedrock, depending on the length of the (smallest) chain of derivation that links each thesis to the basic ones. In this way, the system receives what may be characterized as its foundationalist aspect: it is analogous to a brick wall, with a solid foundation supporting layer after successive layer. A prominent role must inevitably be allocated here to the idea of “relative fundamentality” in the systematic order—and hence also in the explanatory order of things the systematization reflects.2 In the setting of this Euclidean model of cognitive systematization, as we shall call it, every (nonbasic) established thesis is ultimately connected to certain basic theses by a linear chain of sequential inferences. These axiomatic theses are the foundation on which rests the apex of the vast inverted pyramid that represents the total body of knowledge.
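The recursive, layer-by-layer structure just described can be made concrete with a small computational sketch. The following Python fragment is purely illustrative and not part of the original text; the function `stratify` and the variable names `axioms` and `derivations` are my own. It assigns each thesis its "geological layer": the length of the shortest derivation chain linking it back to the bedrock of basic theses.

```python
def stratify(axioms, derivations):
    """Assign each thesis a layer number on the Euclidean model.

    `axioms` is the set of basic theses (layer 0, needing no
    intrasystematic justification). `derivations` maps each derived
    thesis to the set of premises from which it is inferred. A
    thesis's layer is one more than the deepest premise it rests on,
    mirroring the stratified, foundation-upward order of derivation.
    """
    layer = {axiom: 0 for axiom in axioms}
    changed = True
    while changed:  # repeat until no further thesis can be placed
        changed = False
        for thesis, premises in derivations.items():
            # A thesis is established only once all its premises are.
            if all(p in layer for p in premises):
                depth = 1 + max(layer[p] for p in premises)
                if thesis not in layer or depth < layer[thesis]:
                    layer[thesis] = depth
                    changed = True
    return layer

# Illustrative usage: two axioms and three derived theses.
layers = stratify(
    axioms={"A1", "A2"},
    derivations={
        "T1": {"A1"},         # derived immediately from bedrock
        "T2": {"A1", "A2"},   # also one step from bedrock
        "T3": {"T1", "A2"},   # rests on an already derived thesis
    },
)
# layers == {"A1": 0, "A2": 0, "T1": 1, "T2": 1, "T3": 2}
```

Nothing in this sketch hinges on the derivation steps being deductive; the same stratified picture results if each arrow in `derivations` is read as a probabilistic or plausibilistic inference, which is exactly the point made in the next paragraph.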
With virtual unanimity, the earlier writers on cognitive systems construed the idea in terms of such a linear development from ultimate premisses (or “first principles”) that are basic both in fundamentality and in intelligibility, so that the order of exposition (or understanding) and the order of proof (or presupposition) run parallel.3 The axiomatic development of our knowledge is seen in terms of both a deepening and a confirming of our knowledge, subject to the principle that clarification parallels rational grounding, so that explanation replicates derivation.4 It does not matter for the fundamental structure of this Euclidean mode of systematization whether the inferential processes of derivation are deductive and necessitarian, or somehow “inductive” and less stringently compelling. In this regard the label “Euclidean model” is somewhat misleading. Nothing fundamental is altered by permitting the steps of derivative justification to proceed by means of probabilistic or plausibilistic nondeductive inferences. We are still left with the same fundamental pattern of systematization: a “starter set” of basic theses that provide the ultimate foundation for erecting the whole cognitive structure on them by the successive accretion of inferential steps. And so while modern epistemologists generally depart from a traditional Euclideanism in admitting nondeductive (e.g., probabilistic) arguments—abandoning the idea that the only available means of linking conclusions inferentially to premises is by means of steps that...