Ron Choong
Mar 19, 2018


Perceptual Knowledge and Taciticity

The study of epistemology usually seeks either to explain ‘how’ knowledge is acquired or to specify ‘what’ constitutes knowledge. We shall consider Pollock and Cruz’s epistemology of direct realism and its capacity to account for the inexplicable phenomena in scientific discoveries, where seemingly accidental and perhaps incomprehensible procedures lead to discoveries which later find attachment to explanatory theories by way of intuitive and insightful cognitive exercises.

We shall consider the model of direct realism and specifically inference from perception with its denial of a stage known as ‘beliefs about perceptions’ as a description of how scientific knowledge grows.

We ask whether direct realism and inference from perception adequately explain the growth of knowledge and whether the Polanyian concept of tacit knowledge offers an explanation for unexpected scientific discoveries.

Pollock and Cruz argue for a notion in which the rational person seldom moves from ‘perception’ to ‘beliefs about perceptions’ before forming ‘beliefs which constitute knowledge’. This is what Polanyi seeks to articulate in his concept of tacit knowledge as a legitimate source. I shall offer insights into the workings of subconscious human cognition in an attempt to construct a more robust epistemology for our understanding of perceptual knowledge.

Introduction

Prior to the arrival of the “Gettier problem”, interest in epistemology was largely in epistemic justification (how we know). Following the alarming exposure that the definition of knowledge as ‘justified true belief’ was inadequate, much attention has refocused onto the analysis of knowledge as content (what we know). Epistemology now treats both whether what we know is correct (truth) and how we acquire that which we think we know (justification) as legitimate parcels of inquiry.

This paper seeks to examine one such proposal from John Pollock and Joseph Cruz, and considers its promise for an explanation of certain chance scientific discoveries and the growth of scientific knowledge. Pollock and Cruz assert that epistemic norms are necessary a posteriori truths detected by competent human cognition where beliefs are formed directly from the impact of perception.

The central thesis of this paper is that the direct realist epistemology of Pollock and Cruz does not adequately explain chance discoveries in science and their contribution to the growth of scientific knowledge, and further, that a Polanyian idea of tacit knowledge may strengthen the claim of direct realism to account for both the what and the how of knowledge. What is needed is the ability to form judgments from insights[1] and an explanation of how scientists secure knowledge. The significance of this project is to expand on the explanatory power of direct realism to account for seemingly non-cognitive[2] chance discoveries in science. The limitations of this paper preclude taking up their challenge to offer a low-level epistemological theory such as the Oscar Project in order to verify any epistemological claim. This regrettable situation should not diminish the salient arguments put forward.

Direct Realism a la Pollock and Cruz

Direct realism holds that one is directly aware of objects in the environment. In normal cases, it regards the objects of perceptual experience as extramental. Unlike sense-datum, phenomenal-quality and adverbial theories, it holds that what you see directly is really what is out there (WYSIWYG). Yet this does not preclude situations in which what we perceive may deceive us, for example, seeing a cardboard cow from a distance or imagining a cloud formation to be a human face. Thus, misleading appearances are consistent with this fallibilistic theory. It has the advantage of deriving knowledge directly from experience, since there is no filter which may serve to distort reality. Pollock and Cruz argue that a decent high-level epistemic theory should find adequate expression in mid-level as well as low-level epistemologies. In their effort at constructing a viable Oscar Project to create an ‘artilect’, they present proof positive of a working hypothesis for their claim that direct realism is the most successful theory of knowledge in a scientific age[3]. We shall dispense with the details of the project, assume that it satisfies their own criteria, and not offer a similar low-level epistemological alternative.

What we shall focus on in this paper is the ability of direct realism to account for the inexplicable (in rational terms) phenomena in scientific discoveries, whereby seemingly accidental and perhaps incomprehensible procedures lead to discoveries which later find attachment to explanatory theories by way of intuitive and insightful cognitive exercises. These may include the discoveries of the benzene ring (C6H6) by August Kekulé, X-rays by Wilhelm Roentgen, penicillin by Alexander Fleming and, of course, Albert Einstein’s several intuitions which led to successful theories. In each case, perception per se was not the instrument by which belief-formation transpired. Neither were beliefs about perceptions the reasons their beliefs were formed. Rather, strong intuitive notions, perhaps founded upon tacit knowledge, coupled with a habit of judgment-formation, led to testable and falsifiable hypotheses. This is less a rejection of direct realism than an attempt to buttress a theory of knowledge with non-traditional cognitive musings from philosophy of science.

What is knowledge and how is it acquired?

Pollock and Cruz set out an account of the nature and content of epistemic norms in defense of their thesis that direct realism is the most appropriate way to describe how knowledge is acquired.

They reject the ‘intellectualist model’ of epistemic norms as infinitely regressive. In each epistemic procedure, the mind would have to go through the motion of making an explicit appeal to epistemic norms in order to acquire justified beliefs, but making such appeals requires a prior acquisition of justified beliefs about how to apply the norms to a particular case, ad infinitum. Rather than governing in a metaphysical sense, epistemic norms are grounded in psychology and are descriptive of procedural knowledge for cognition. They are relevant in a negative way in that we often criticize our own reasoning to discover faults. But more than this, Pollock and Cruz assert that epistemic norms play a positive role in guiding our epistemic behavior at the time of occurrence. For them, epistemic norms regulate one’s accumulation of knowledge. Yet the mechanics of their function are admittedly problematic. They conclude that implicit thought must drive the process. Further, in appealing to all internal states, not just beliefs, these norms may be corrected or refuted. In their model, memory and competence guide the knower almost subconsciously (my words)[4]. In this scenario, norms can govern one’s behavior without explicit cognition. In bypassing the intellectualist model, they subscribe to a tacit model of knowing perhaps best articulated by Michael Polanyi. As to their nature, epistemic norms are necessary but a posteriori truths (constitutive of the concepts whose employment they govern) discovered by detection. As for content, epistemic norms comprise a competence theory of human cognition and constitute a form of direct realism.

In their framework, five high-level epistemological theories fall into two groupings. Foundationalism and coherentism are doxastic, while reliabilism, probabilism and direct realism are non-doxastic. However, since foundationalism, coherentism and direct realism are internalistic while reliabilism and probabilism are externalistic, direct realism is the only internalistic, non-doxastic epistemology. This is the theory they seek to champion, following their commitment to an internalistic tradition: they reject externalist theories[5] (since epistemic norms must be internalizable, and hence internalist) and reject doxastic theories[6] because they cannot accommodate perceptual knowledge and memory[7].

The argument for perceptual knowledge in an epistemology of direct realism is as follows. Perception and memory are cognitive processes by which beliefs are formed, and their justification is determined by states other than beliefs. The existence of these non-doxastic justifiers denies a doxastic approach to epistemology. Non-doxastic approaches may be either internalist or externalist. By arguing against externalist theories, Pollock and Cruz certify an internalistic non-doxastic theory as the most plausible epistemology.

Pollock and Cruz’s competence theory of human cognition argues that beliefs acquired from perception are neither justified by other beliefs (contra foundationalism) nor self-justified, but derive their justification from the percept provided by the perception. Beliefs about percepts are unnecessary and counter-productive. Thus, the knower adopts justified beliefs directly and indirectly. Direct justifications of immediate surroundings are licensed by percepts per se, while indirect justifications are licensed by defeasible inferences which are themselves licensed by epistemic norms, leading back to the power of detection. These inferences are acquired through a process of defeasible reasoning, allowing for the possibility of replacing current beliefs with better ones following falsification. In this way, it is not unlike Gilbert Harman’s idea of negative coherence (reference), in which no beliefs are immune to change and therefore there are no (permanently) conclusive beliefs. This is likened to Karl Popper’s concept of conjectures and refutations in the growth of scientific knowledge.
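
A toy sketch may help picture this defeasible structure. What follows is my own illustration, not Pollock and Cruz’s OSCAR architecture, and the class and example data are hypothetical: a belief licensed by a percept stands by default and is retracted the moment a defeater (such as the cardboard cow mentioned earlier) appears.

```python
# A minimal, illustrative sketch of defeasible reasoning: beliefs licensed by
# percepts stand by default and are retracted when a defeater appears.
# This is a toy model, not Pollock and Cruz's OSCAR system; all names are hypothetical.

class DefeasibleBelief:
    def __init__(self, content, licensed_by):
        self.content = content          # e.g. "there is a cow in the field"
        self.licensed_by = licensed_by  # the percept that licenses the belief
        self.defeaters = []             # reasons that undercut or rebut it

    def add_defeater(self, reason):
        self.defeaters.append(reason)

    @property
    def justified(self):
        # Justified by default; defeated as soon as any defeater is accepted.
        return not self.defeaters


belief = DefeasibleBelief("there is a cow in the field",
                          licensed_by="visual percept of a cow shape")
print(belief.justified)   # True: the percept alone licenses the belief

# A closer look defeats the original inference.
belief.add_defeater("on approach, the 'cow' turns out to be painted cardboard")
print(belief.justified)   # False: the belief is retracted, to be replaced by a better one
```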

William Alston calls the claim that perceptual beliefs can be justified by experience, say visual experience, naive direct realism[8]. However, unlike Pollock and Cruz, he maintains that there exist hidden doxastic conditions needed to offer adequate support for direct realism. Alston argues that certain assumptions must hold for direct realism to work, such as that a sufficient ultimate basis exists, whether of a foundational sort or not, and that the perceptual apparatus is in normal working condition. Finally, even if we are not certain that it is working normally, a further presumption based on experience holds: in the absence of sufficient reasons to suspect abnormalities, the presumption of regularity suffices to justify a commitment to direct realism[9]. It is this idea of hidden assumptions that holds the promise of modifying Pollock and Cruz’s model to accommodate a Polanyian taciticity. If Alston is correct, and direct realism can never be free from these hidden doxastic presumptions, then we have to reconsider the easy rejection of both foundationalism and coherentism. But how can non-doxastic direct realism co-exist with the doxastic assumptions which Alston argues are needed for a working theory of realism?

As far back as the fourteenth century, William of Ockham (1285–1347) began to posit a direct realism which involved a two-tier cognitive theory. His was a modification of John Duns Scotus (1266–1308), whose intuitive and abstractive cognitions were said to supply the human mind with reliable knowledge. While Ockham rejected Scotus’ use of intermediate species in medio to explain how individual cognitions are possible without the existence of ontological universals, he acknowledged a non-ontological ‘universal’ which conferred signification only. Ockham then argued that in the natural course of events, the perception and acquisition of reliable, scientific knowledge is possible in the absence of divine intervention. He based this on four principles: (i) the human mind is equipped with an innate apparatus of rationality for cognition, (ii) the human mind usually works as intended, (iii) knowing is an active exercise and volition is a necessary condition, and (iv) certainty of knowledge is defined as freedom from actual rather than potential doubt and error. In Ockham, we find a hybrid direct realism which dispenses with the Aristotelian concept of species in medio transferring information through the medium and instead relies on the direct effect of cognition on the subject’s (percipient’s) intuitive cognitive faculty. An encounter with an object triggers within the mind the generation of mental concepts (his non-ontological universals, which can be stored as habits for future reference) which identify and classify. Apprehension takes place when the intuitive cognition and abstractive cognition together form mental propositions, which, along with inferences drawn from past experiences, give rise to an assent, which is a judgment about the truth or falsity of the proposition.

Thus I first see an animal thing. My mind generates the cat-species mental concept. From my past experiences, stored as habits, I remember furry animals called cats. While the intuitive cognition works on identifying the existential component of the cognition, the abstractive cognition tells me about the attributes of cats, abstracted from the object cat itself. Together, I form the proposition that I have just seen a cat. This apprehension can either be true or false (illusion, delusion, deception, imagination, hallucination, etc.). It is now the role of judgment to determine if the proposition is true or false, and to assent to it as such.
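
Ockham’s two-tier process can be pictured as a simple pipeline. The sketch below is a loose illustration under my own naming; only the stage labels (intuitive cognition, abstractive cognition, habits, proposition, judgment) come from the account above.

```python
# A toy pipeline for Ockham's two-tier account, as described above.
# All function names are illustrative; only the stage labels come from the text.

def intuitive_cognition(encounter):
    # Registers the existential component: something is present here and now.
    return {"exists": True, "object": encounter}

def abstractive_cognition(concept, habits):
    # Supplies attributes of the concept, abstracted from past experience (habits).
    return habits.get(concept, [])

def form_proposition(intuition, attributes, concept):
    # Apprehension: the existential component (intuitive) and the attributes
    # (abstractive) jointly yield a mental proposition.
    if not intuition["exists"]:
        return None
    return f"I have just seen a {concept} ({', '.join(attributes)})"

def judge(proposition, evidence_ok=True):
    # Judgment: assent to the proposition as true, or withhold assent
    # (illusion, deception, hallucination, etc.).
    return ("assent", proposition) if evidence_ok else ("dissent", proposition)

habits = {"cat": ["furry", "four-legged", "small animal"]}
intuition = intuitive_cognition("animal thing in the garden")
attributes = abstractive_cognition("cat", habits)
proposition = form_proposition(intuition, attributes, "cat")
print(judge(proposition))  # ('assent', 'I have just seen a cat (furry, four-legged, small animal)')
```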

If Ockham’s theory is foundational or coherentist, it is so because it needed to be. Does having doxastic elements in a theory rob it of the claim to be a realist theory? I think not, for there is no successful realist theory which can adequately justify making the assumption of regularity a necessary condition in the light of exceptions to the rule. It is one thing to say that things always work naturally because they do, and another to say that things always work naturally because they mostly do. No one can say the former. Things do go wrong, and we cannot explain why they do if we do not allow for foundationalist presumptions.

This probabilistic component stretches the meaning of non-doxastic because it is descriptive rather than explanatory. To say that reality is observed to be probabilistic is an inadequate account of why probabilism does not incorporate some level of belief in the regularity of events. It is not always clear why our expectation of the regularity of universal laws does not constitute a belief function, making it in fact doxastic. The argument that true doxastic epistemologies are buttressed by beliefs (with or without warrants) with a telos is no solution. It just means that, say, non-religious beliefs in future events are merely non-directionally or ateleologically doxastic, with no creed to specifically account for them; but they are nevertheless doxastic in that they demand assent to a judgment that a future event is true or expected to be so.

Explaining successful chance discoveries

Chance discoveries are successful only if the initial cognitions bear fruit in the mind. The question that bears asking is: what is it that permits fragments of data to cohere in the mind and assume the meaningful shape of useful knowledge? What makes accidentally derived data into a successful scientific theory or knowledge? Karl Popper offers a mode of understanding how knowledge grows by positing the conjectural function of inquiry. We know by making guesses and, by a process of trial and error, weeding out unsuccessful hypotheses, much like the evolutionary death of life-forms unfit for survival in a changing environment. In his evolutionary epistemology, Popper seeks to formulate a theory whereby knowledge, though never certain, grows in a manner by which even accidentally derived data can be useful.

Karl Popper

Popper rejected the inductive method in the empirical sciences and argued that hypotheses are deductively validated by what he called the “falsifiability criterion.” Under this method, a scientist seeks to discover an observed exception to his postulated rule. The absence of contradictory evidence thereby becomes tentative and provisional corroboration of his theory. His epistemology may be summed up in the title of his book, Conjectures and Refutations, which distinguished between the discovery of scientific hypotheses and their justification or validation[10]. The manner in which our knowledge grows is by thinking up plausible explanations of hitherto unexplained phenomena or possible solutions to problems, and then testing to see if they fit or work. To Popper, the logical analysis of scientific knowledge is interested not in the conception of the theory but in its justification[11]. We therefore subject theories to critical examination and see if others can find fault with them. By devising experiments or observations, flaws may be exposed for correction. The starting point for Popper is the need to solve problems. We use our understanding of the problem with the powers of our imagination and insight[12] to come up with possible solutions. These possible solutions are theories called conjectures, which may be true or false. The final step is to devise tests which must constitute possible refutations, capable of falsifying the theories. These tests consist of observation and experimentation along with critical discussion[13].

Popper’s concept of the scientific method of gaining knowledge is based on falsification and verisimilitude. Our best solutions give rise to new problems. Thus “our knowledge grows as we proceed from old problems to new problems by means of conjectures and refutations”[14].

Falsification, which leads to the refutation of conjectures, is in part or in whole the fate of all hypotheses. The status of a scientific hypothesis is measured by how effectively it survives rigorous experimental testing and explains beyond present knowledge, with scope for prediction. One of the limitations of the principle of falsifiability is that fallible premises cannot justify or prove the conclusions which follow from them, even if these conclusions follow validly. Hence it is limited to a critical role with no justificatory power[15]. Popper described the de facto scientific method as one of falsification of hypotheses. Instead of the biological struggle to survive by letting weaker animals die, scientists try to eliminate false theories and let them die in their stead. This principle of falsification replaces the principle of verification. Although his critics object that, just as no amount of individual verifications can conclusively verify a general statement, so no amount of falsifying instances can conclusively falsify, he answers that a logical asymmetry holds: all we need is one event of falsification to falsify. In the logic of falsification, conclusiveness is a red herring.
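
The logical asymmetry Popper appeals to can be made concrete in a short schematic sketch. The conjecture and observations below are invented purely for illustration: no run of corroborating instances verifies a universal claim, but a single counterexample refutes it.

```python
# Schematic sketch of the asymmetry between verification and falsification.
# The 'conjecture' and 'observations' are invented for illustration only.

def conjecture(x):
    # Universal claim under test: "all observed values are positive."
    return x > 0

observations = [3, 7, 12, 5, 9, -2, 4]

corroborations = 0
for obs in observations:
    if conjecture(obs):
        corroborations += 1   # no number of these verifies the universal claim
    else:
        print(f"Refuted by a single counterexample: {obs}")
        break
else:
    print(f"Tentatively corroborated after {corroborations} tests; never proven true")
```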

To account for a growth of knowledge which cannot be objectified or quantified in the classic sense, Popper introduced his concept of verisimilitude, whereby successive theories advance toward, though never knowingly reach, the truth, since no statement can be deemed true or untrue with certainty. It combines the notions of truth and logical consequence; every false statement possesses true logical consequences, so that a false theory may be nearer to the truth than another false theory by virtue of its ‘truth-content’. However, not all false theories are equal[16]. All theories must be presumed to be false since we can never confirm absolute truthfulness with certainty. Thus a scientifically false (falsified) theory may be better off than a scientifically true one (insofar as we fail to falsify it) due to its higher ‘truth-likeness content’, its verisimilitude.
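
Popper’s comparison of false theories by their ‘truth-content’ can be pictured with a deliberately crude sketch. The test propositions and the scoring rule below are my illustrative assumptions, not Popper’s formal definition, which compares the truth-content and falsity-content of a theory’s full consequence class.

```python
# Crude illustration of verisimilitude: theories compared by how many test
# propositions they get right versus wrong. Propositions and scoring are
# invented for illustration only.

def truth_likeness(theory_claims, actual):
    true_content = sum(1 for claim, value in theory_claims.items() if value == actual[claim])
    falsity_content = len(theory_claims) - true_content
    return true_content - falsity_content

# Observed outcomes for three (illustrative) test propositions.
actual = {
    "Mercury's perihelion precesses anomalously": True,
    "starlight bends near the Sun": True,
    "apples fall toward the Earth": True,
}

theory_a = {  # gets one of three right
    "Mercury's perihelion precesses anomalously": False,
    "starlight bends near the Sun": False,
    "apples fall toward the Earth": True,
}
theory_b = {  # gets all three right, yet may still be false elsewhere
    "Mercury's perihelion precesses anomalously": True,
    "starlight bends near the Sun": True,
    "apples fall toward the Earth": True,
}

# Both theories may ultimately be false, yet one stands nearer the truth.
print(truth_likeness(theory_a, actual), truth_likeness(theory_b, actual))  # -1 3
```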

Popper on Hume: the evolutionary-epistemological solution to the problem of induction

Popper’s theory of science with reference to knowledge is centered on solving two problems: the problem of induction and the problem of demarcation. The former may be restated as ‘What is the relationship between knowledge and experience?’ and the latter as ‘What distinguishes science from metaphysics, logic and mathematics?’ The received wisdom was that knowledge is obtained experientially by means of induction, that is, the inference of universal general statements or theories from accumulations of particular facts. Although it was Hume who demonstrated that this understanding was philosophically invalid (the problem of induction), Popper revived this critique with his own revision. For Popper, the problem was twofold, a logical one and a psychological one. ‘Are we rationally justified in reasoning from repeated instances of which we have had experience to instances of which we have had no experience?’ No! is the logical answer. ‘Then why do people continue to believe so?’ ‘Custom or habit, an irrational but irresistible power of the law of association, conditioned by repetition’, was Hume’s answer to the psychological question. For Hume, this proved the irrationality of human scientific knowledge[17]. There was a crisis of rationality in science!

Popper disagrees. People’s preferences for certain of the competing conjectures may be the result of selection: the struggle for survival of hypotheses under the strain of criticism, which is artificially intensified selection pressure. The psychological decision to believe inductively is due to the fact that men frequently outlive their beliefs; but for as long as the beliefs survive, they form the basis of action. For Popper, this Darwinian procedure of selecting beliefs and actions is not irrational, but rather a transference of the logical solution to the psychological realm[18]. By appropriating Darwin, Popper answered Hume while trying to save scientific rationality and the phenomenon of knowledge from humiliation. It is difficult to see how acting on beliefs and preferring certain conjectures are not irrational acts.

Popper observed that our knowledge is rarely entirely without authority, i.e., we presume upon the authority of others in receiving knowledge, from the reading of newspapers to the reading of historical books, which are based on other historical documents ad infinitum. This led him to infer that ‘all observation involves interpretation in the light of our theoretical knowledge’[19]. When we are doubtful about the truth of an assertion, rather than seek its source, we tend to test it by seeking independent corroboration. Hence, corroboration of testimonial experience confirms our hypothesis rather than the other way around. This led to his suspicion that scientific growth is indeed theory-led.

In The Logic of Scientific Discovery, Popper maintained that our knowledge grows when we accept statements describing experience which contradict and hence refute our prior hypotheses. For Popper, then, there is no such thing as a logic of scientific discovery, only a logic of scientific testing. A deductive rather than an inductive relationship holds between experience and knowledge. Rather than being our sole source of knowledge, experience corrects our theoretical assumptions about knowledge. Since absolute knowledge can neither be obtained nor identified even if obtained (unwittingly), we can never affirm our attainment of absolute knowledge. This leaves us with better approximations to the truth and a reduction in the margins of error. Yet experience alone will never be sufficient to provide us with absolute knowledge. Even our direct experience depends on our capacity to infer knowledge from data, and newer inferences from later experiences might correct older inferences. Here, we see direct realism at work.

Where then do hypotheses come from, if not inductively from experience? In reply, Popper argued that hypotheses come not only from observations but also from our propensity to guess. Popper accepted that Hume was correct in supposing that our theories cannot validly be inferred from what we know to be true, but wrong in concluding that our scientific knowledge is therefore irrational. Thus, in summary, Popper says that when we do not know, we tend to guess, act on these beliefs, and often make preferential choices.

Popper on Darwin: Popper’s Evolutionary Epistemology[20]

Popper advocated an evolutionary epistemology: an attempt to explain the existence of a truth-seeking science within the framework of natural selection, giving a biological twist to Kant’s problem, ‘How is knowledge possible?’. He does this by treating both endosomatic and exosomatic adaptations as forms of knowledge[21]. Evolutionary epistemological theories recognize three levels of adaptation: genetic, behavioral learning and scientific discovery. On all levels, we operate with inherited structures (genes, the innate repertoire of behavior and scientific theories) which are passed on by instruction. New structures arise by tentative trial changes from within the structure, subject to natural selection or the elimination of error. From the level of scientific discovery emerge two aspects: the capacity for language, and thus for publication (oral or documentary), which turns theories into objects external to ourselves and open to criticism; and the creative imagination which results from the novelties of human language. Popper argues that this development contributed to the growth of knowledge by introducing the critical analysis of knowledge and the creative imagination to posit hypotheses independent of experience. Progress may be gauged by comparing new problems with older ones.

One of the features of Popperian evolutionary epistemology is the position that while knowledge may be psychologically or genetically a priori, it may not necessarily be valid a priori. Our expectations may indeed be mistaken. Popper suggests that we are born with a propensity to expect regularity. While a newborn may be genetically programmed to expect feeding, it may in fact be abandoned and starved.

In Popper’s own summary of his theory of scientific method, he wrote that

‘… knowledge progresses… by unjustified (and unjustifiable) anticipations, by guesses, by tentative solutions to our problems, by conjectures’[22].

These ‘unjustifiable anticipations’ are for Popper the grounds of scientific insights. It is curious that he was willing to let this idea remain unanalyzed. How do these anticipations come about? What is the genesis of these guesses, which in the world of scientific discovery have been immensely successful? It may well be that these insights and the judgments they yield are nothing more than epistemic norms functioning ‘invisibly’. Such invisible insights or epistemic norm functions seem to have been anticipated by Michael Polanyi in his treatment of personal knowledge, in which he introduces what he calls tacit knowledge.

Michael Polanyi

Polanyi’s epistemology stems from his sharp critique of the positivistic philosophy of science. He proposed that personal conviction and commitment are part of the scientific enterprise, affecting the way we know. From this he formulated the idea of tacit knowledge. He believed that the ‘scientific’ account of knowledge as a fully explicit body of statements did not allow for an adequate account of discovery and growth. In his account of tacit knowledge (a form of implicit knowledge we rely on for both learning and acting), a subjective dimension exists. The phrase ‘We know much more than we can tell’[23] is characteristic of his philosophy. Polanyi defined his understanding of ‘meaning’ by acknowledging the imaginative powers of our creative faculties. This opens the door for religious reflection into the playbox of knowledge, redrawing the parameters of ‘meaning’.

At each stage of scientific inquiry, from discovery to confirmation, a crucial role is played by experience, skill and expertise, each of which involves a ‘knowing’ on the part of the practitioner which is implicit.

For him, knowledge is a tacit personal integration of subsidiary clues into a focal whole. He rejects the demand for impersonal, wholly objective and fully explicit knowledge, hence the terms personal and tacit knowledge in his ‘post-critical’ philosophy. He wanted to free scientific research from the pretense of objectivity and expose the use of imagination and even myth-making in scientific reflection.

The distinction between tacit and explicit knowledge is grounded in his distinction between subsidiary and focal awareness. This theory draws on Gestalt psychology and its account of perceptual integration, whereby we are perceptually aware of the coherent whole to which we are attending on the basis of a subsidiary awareness of details or clues to which we are not attending. Thus knowledge involves a tacit integration of subsidiary clues into a focal whole, expressed in the human mind as a double intentionality. We attend from the subsidiary clues to the focal whole: for example, a man with a walking stick uses the impressions of the stick on his palm as subsidiary clues to create a focal awareness of the path and any obstacles in his way, or we use words to attend to the focal awareness of the meanings of those words. If we ‘lose our focus’ and concentrate on the clues instead, we tend to forget the meaning; for example, if we rapidly repeat the sounds of any word like a mantra, we soon lose awareness of the meaning attached to the word. This subsidiary-focal complex denies objectivity but does not import subjectivity. Rather than replacing objective knowledge with subjective knowledge, Polanyi calls this ‘personal knowledge’.

What is this subsidiary awareness? It is more than merely subconscious or fringe awareness. In fact, what starts as the focal awareness can be reduced to subsidiary awareness as the focal whole changes. In learning to play the guitar, the motions of the fingers are at first the focal whole, but soon the fingering of the strings becomes subsidiary and the focus shifts to the individual sounds of each pluck; finally these notes become subsidiary to the whole musical score. These integrations pervade not just our perceptions but all of human knowledge. Language is controlled by this process and hence can never be precise. Human knowledge as such can never be wholly explicit and articulate, nor subject to complete and successful critical scrutiny, since it depends upon the subsidiary clues of awareness.

All knowledge rests upon the tacit and therefore acritical foundations of personal judgment and commitment. In a strange way, this philosophy suggests that ultimate reality is not so much unobtainable, as in Popper’s concept, but rather inexplicable. The primary tools we use for knowing are our bodies, from which we attend in the enterprise of knowing. Polanyi calls this our indwelling of our bodies and, by extension, we also indwell the objects of knowledge by incorporating our acquired knowledge, ideas and frameworks into our minds. This indwelling of tacit integration bridges the Cartesian gap between the self and the external world.

Polanyi’s epistemology enables the explicit possession of beliefs which we know may turn out to be false. It is reminiscent both of direct realism and of Popper’s contingent ‘objective knowledge’ in the sense of possessing the best approximation to the truth, knowing not only that it may well be false, but that we can never know it to be true. Polanyi is more optimistic than Popper regarding our ability to recognize ultimate truth. From the idea that inexplicable beliefs may be held until proven wrong[24], Polanyi invites us to accept articulated ultimate beliefs which are presupposed by our proximate beliefs and practices. These proximate beliefs buttress the ultimate ones. He contends that the practice of scientific research already follows such a methodology. Scientific premises are embodied and tacitly known in the practice of scientific research. From results already accepted as true and methods held as valid, we articulate the principles and beliefs which these premises presuppose. Thus, proximate beliefs provisionally justify ultimate ones[25].

Interested in the way scientists are influenced by their prior personal knowledge, Polanyi considered the issue of neo-Darwinism in his post-critical or fiduciary philosophy. He argued that the scientist’s personal participation in his knowledge, in both discovery and validation, is an indispensable part of science itself. This refers to personal factors of belief, passionate engagement, imagination and authority. Polanyi wished to replace the ideal of objective, impersonal scientific detachment with one that gives proper attention to the personal involvement of the knower in all acts of understanding. This theory is not without its problems. Elsewhere, Polanyi described tacit knowing as involving a skillful act, yet he does not account for the person who lacks the apparent requisite skill to act upon the event of knowing. For example, can a person with severe cognitive impairment skillfully know? Perhaps we need to redefine the idea to account for, say, a madman who operates under a different rationality from most of us. Polanyi’s aim was to offer the restoration of science into an integrated culture of knowledge.

He posits what he calls the tacit knowledge hidden behind explicit knowledge and provides examples of scientific investigations prompted not by rationalistic progress but by hitherto undiscovered ‘rationalisms’, the most famous being the ‘insights’ of Einstein. Polanyi’s philosophy of science argues that only the scientist’s tacit sense of what is significant, probable and discoverable, given the means available, leads to successful research; witness the number of scientific ‘discoveries’ which came about accidentally or unwittingly.

Polanyi described a bipolar approach to knowledge. His irreducible relational unity consists of tacit as well as explicit knowledge, which together inform the human learning process, whether or not we realize it. His idea of ‘personal knowledge’ states that discovery is really the result of the knower indwelling the object to be known with a fiduciary passion! There is indeed more to know than we think, and we do know more than we can tell! We live in a contingent universe of a bipolar relational unity, in which reality is perceived not through observation and discovery in a detached atmosphere, but through observer-conditioned circumstances. All knowledge is transmitted and received both tacitly and explicitly, so that by such means of learning we can transform mere scientia into sapientia, and may engage the full revelation of the ultimate reality. For Polanyi, scientific discovery is the discovery of hidden orders of reality.

Here, we note the remarkable overlap between direct realism’s concept of an implicit epistemic norm guiding epistemic behavior and Polanyi’s tacit dimension. By appropriating a Polanyian tacit dimension, we can begin to comprehend how it is possible for the formation of human habits, subconscious patterns of memory and observer-conditioned circumstances to effect normative functions.

Polanyi’s concept of tacit knowledge serves as an explanation of Einstein’s remarkable non-experimental insights. His contention that proximate knowledge legitimately enables us to hold onto ultimate beliefs (which we know may turn out to be false) comports with direct realism’s idea that beliefs formed may be erroneous and stand under correction when faced with better data.

A close examination of the nature of scientific knowledge has shown that underlying knowledge/truth-claims are judgments and convictions made in the absence of conclusive evidence by employing the power of the imagination to see the truth in mythic discourse, to disclose hitherto unknown hidden realities. Such discovery of knowledge involves the tacit use of personal knowing and post-linguistic apprehensions within the act of anticipating the disclosure of hidden realities.

Conjectures

The conjecture of this paper rests largely upon an assessment of the epistemological projects of Popper and Polanyi. In each case, they were attempting to account for the growth of scientific knowledge. Direct realism is a hybrid in the sense of regarding scientific knowledge and perception as ‘real’ and yet not confining itself to explicit accounts of knowledge. This dual identity allows for conjectures regarding what exactly this implicit guidance amounts to. Perhaps the epistemic norm functions Pollock and Cruz discuss are the unjustifiable anticipations which Popper articulates and which Polanyi calls tacit knowledge. What is the evidence or warrant for this claim? One of analogy and the mapping of functions.

This proposal is embryonic at best but finds support in the descriptive nature of Pollock and Cruz’s propositions. It is legitimate to refrain from a predictive function at this stage and focus on actual situations. We know that the mind functions epistemically and, according to Pollock and Cruz, epistemic norms do guide epistemic behavior. However, the process is not apparent and can only be accounted for ex post facto. The success of direct realism seems to hinge upon its assertion that knowledge-forming beliefs can be obtained directly from percepts. Yet what if the percepts do not offer recognizable data? What if correct discoveries are obtained from data accidentally observed? Does this still mean that perception is direct in the sense that knowledge (presumably true) can be built from it? It appears that direct realism cannot allow for this. To argue that knowledge-forming beliefs can be acquired directly from percepts must necessarily assume that the percepts are correctly interpreted, or we risk getting into a Gettier-type conundrum. It may result in mistakenly or coincidentally securing justified true belief which does not count as knowledge.

This then is the significance of this paper’s proposal. There is as yet no successful explanation of why seemingly inexplicable discoveries occur in scientific research despite its empirical mode of existence. A failure of philosophy to account for such occurrences exposes epistemology to the constant invasion of a priori knowledge and claims of miracles which demand the suspension of universal laws. If Pollock and Cruz are correct in eliminating all other theories of knowledge, then direct realism must account for such anomalies or risk being consigned to a boutique epistemology.

Is tacit knowledge an intrusion into direct realism? Not so. When seemingly accidental or fortuitous discoveries in science are made, they mask a process which Polanyi calls the acquisition of tacit knowledge. The explanation is already embedded in Pollock and Cruz’s theory. It exists precisely where they explain how epistemic norms work. If their articulation of epistemic norms can be expanded to explain chance scientific discoveries, it will certainly make for a more robust theory of knowledge and counter any mythical claim of ‘mystery’ and ‘miracles’ in the process of reality. Wait a minute. Isn’t tacit knowledge also mythical sounding? Not necessarily. Taciticity refers to a category that resists ready classification, but it does not necessarily imply the suspension of natural laws.

Does taciticity impose a filter on realism, making it no longer direct? I think not. What tacit knowledge describes is not so much a third-party entity as a percept in disguise. It is really knowledge, but it does not come as a normal percept, whether visual, tactile or aural. Rather, it streams into the (sub?)consciousness and mates with the obvious modes of perception. There is much that can be speculated about tacit knowledge, but that lies beyond the scope of this paper.

Conclusion

Direct realism as articulated by Pollock and Cruz cannot, on its own, explain chance discoveries in science. No Oscar Project model can replicate epistemic norm functions in cases where it is not obvious where the link between the discoverer (knower) and the discovery (knowledge) lies. In the lacuna where Pollock and Cruz admit of an explanatory gap, I have suggested a consideration of Popper’s evolutionary epistemology, his postulate that unjustifiable anticipations leading to refutations of conjectures are normal expressions of the growth of knowledge, and Polanyi’s idea of tacit knowledge as the mechanics behind epistemic normativity. For Popper, as scientific knowledge grows, its hypotheses undergo conjecture and refutation, so that by a process of trial and error the fittest descriptions of reality survive. This is surely a sufficient criterion for the survival of any epistemology.

While direct realism by way of inference from perception is the claim made by Pollock and Cruz, they understand the provisional nature of theory building as an exercise in seeking falsification and thereby maintaining a tentative claim to a theory of knowledge. Like Popper, they argue for a non-inductive perceptual knowledge. Like Polanyi, they conclude that we often think without explicit cognition. Such implicit thinking sounds much like tacit knowing. This may explain discoveries in science that often astound and even confound the ‘discoverers’ and offer the theory of direct realism an opportunity to explain the existence and mechanics of chance discoveries.

This paper sets out to build on direct realism by importing concepts such as unjustified anticipations and tacit knowledge. It is hoped that a more expansive theory may be constructed to accommodate percepts which exist at the margins of the parameters. Indeed, these peripheral percepts may well hold the key to a holistic approach to epistemology with a footprint wide enough to contain all known phenomena.

Selected Bibliography

1. Adams, Marilyn McCord. William Ockham. Vols. 1 & 2. Notre Dame: University of Notre Dame Press, 1987.

2. Alston, William. “Perceptual Knowledge” in The Blackwell Guide to Epistemology. edited by John Greco and Ernest Sosa. Oxford: Blackwell Publishers. 1999.

3. Craig, Edward. (General editor). Routledge Encyclopedia of Philosophy. Vol. 7. London & New York: Routledge. 1998.

4. Duns Scotus. Philosophical Writings. Translated by Allan Wolter. Indianapolis: Hackett Publishing Co., 1987.

5. Harman, Gilbert. Reasoning, Meaning and Mind. Oxford: Clarendon Press. 1999.

6. Harre R. (ed.). Problems of Scientific Revolution. Oxford: Oxford University Press. 1975.

7. Magee, Bryan. Confessions of a Philosopher: A Journey Through Western Philosophy. New York: Random House. 1997.

8. Miller, David. Popper Selections. Princeton: Princeton University Press. 1985.

9. Murphy, Nancey and George F.R. Ellis. On the Moral Nature of the Universe: Theology, Cosmology and Ethics. Minneapolis: Fortress Press. 1996.

10. Pasnau, Robert. Theories of Cognition in the Later Middle Ages. Cambridge: Cambridge University Press, 1997.

11. Polanyi, Michael. Personal Knowledge. Chicago: University of Chicago Press. 1962 (1958).

12. _____________. The Tacit Dimension. London: Routledge. 1966.

13. Pollock, John and Joseph Cruz. Contemporary Theories of Knowledge, second edition. Oxford: Rowman & Littlefield. 1999.

14. Popper, Karl R. The Logic of Scientific Discovery. London: Hutchinson. 1977. (1959).

15. _____________, Conjectures and Refutations. London: Routledge. 1976. (1962).

16. Radnitsky, Gerard and W.W. Bartley, III, (eds.). Evolutionary Epistemology, Theory of Rationality and the Sociology of Knowledge. La Salle, IL: Open Court. 1987.

17. Schilpp, P.A. (ed.). The Philosophy of Karl Popper. 2 vols. La Salle: Open Court. 1974.

18. Stenmark, Mikael. Rationality in Science, Religion and Everyday Life. Notre Dame: University of Notre Dame Press. 1995.

19. William Ockham. (1317–1347) Philosophical Writings. Edited & translated by Philotheus Boehner. Revised by Stephen F. Brown. Indianapolis: Hackett Publishing Co., 1990.

[1] Insights are distinguished from judgments as being somewhat less than entrenched positions. While insights provide persuasive commendations, judgments are final pronouncements on contingent propositions.

[2] While a discovery is normally cognitive in that the discoverer is aware of the phenomenon cognized, a non-cognitive chance discovery refers to the unintended consequences of accidental discoveries in which the discoverer played no formal part in the process of discovery. Yet not all chance discoveries are non-cognitive. If a scientist discovers A while seeking to obtain B, there is cognition of the process he is engaged in. However, where a scientist had no plan to conduct an experiment, but, unbeknownst to him, an accidental process results in a phenomenon which is then discovered, the discovery is a non-cognitive one, in the sense of the status of the discoverer at the time the process took place.

[3] John Pollock and Joseph Cruz, Contemporary Theories of Knowledge, second edition (Oxford: Rowman & Littlefield, 1999), 194

[4] This reminds one of Hume’s suggestion that seemingly smart people continue to believe inductively despite its obvious dangers because we are creatures of habit and become conditioned by repetitive behavior.

[5] reliabilism and probabilism

[6] foundationalism and coherentism

[7] Pollock and Cruz, 122

[8] Alston. “Perceptual Knowledge” 227

[9] Ibid., 228

[10] Karl R. Popper, Conjectures and Refutations (London: Routledge and Kegan Paul, 1976)

[11] Ibid., 31

[12] That which Popper refers to as insight and imagination is echoed in Polanyi, as a possible reference to divine revelation, a sapientia as opposed to mere scientia.

[13] Bryan Magee, Confessions of a Philosopher: A Journey Through Western Philosophy (New York: Random House, 1997) 180

[14] P. A. Schilpp, (ed.) The Philosophy of Karl Popper, Book I. (La Salle: Open Court, 1974) 362

[15]Alan Musgrave, ‘Objectivism of Popper’s Epistemology’ in P. A. Schilpp, (ed.) The Philosophy of Karl Popper, Book I. (La Salle: Open Court, 1974) 569

[16] The utility of verisimilitude may be understood in the manner in which Einstein knew that while his theory of General Relativity was superior to Newton’s theory of gravitation, it was not necessarily true. It was a better approximation to the truth.

[17] Karl R. Popper, Conjectures and Refutations Chapter 1

[18] David Miller, Popper Selections (Princeton: Princeton University Press, 1985) 112–3

[19] Popper, Conjectures and Refutations Introduction

[20] See Chapter 6 in R. Harre (ed.), Problems of Scientific Revolution, (Oxford: Oxford University Press, 1975)

[21]Edward Craig, Gen. ed. Routledge Encyclopedia of Philosophy vol. 7 (London & New York: Routledge, 1998) 535

[22] Popper, Conjectures and Refutations Preface, vii

[23] Michael Polanyi, The Tacit Dimension, (London: Routledge, 1966) 4

[24] Sounds like presumptionism: see Mikael Stenmark Rationality in Science, Religion and Everyday Life (Notre Dame: University of Notre Dame Press, 1995)

[25] This resembles Imre Lakatos and Nancey Murphy; see Nancey Murphy and George F.R. Ellis, On the Moral Nature of the Universe: Theology, Cosmology and Ethics (Minneapolis: Fortress Press, 1996). However, unlike the hard-core theory, the proximate beliefs are foundational as well as tacitly impressed; the missing element is our ability to identify them. It seems that Polanyi poses a stronger position than Lakatos, Murphy or Stenmark.
