ABSTRACT:
Although the new discipline of knowledge management has knowledge itself as its focus of analysis and investigation, it nevertheless pays little attention to the disciplines that most directly and searchingly examine and inform questions about knowledge, and its origins and nature. This methodological shortcoming is a curious anomaly that hinders the growth of knowledge about knowledge, and hampers potential conceptual developments and innovations in knowledge management practices. By looking to the most recent developments in epistemology, the philosophy of science, and related disciplines on the rapidly moving edge of the cognitive sciences, a much richer and more interesting picture of knowledge emerges than that which passes as currency in contemporary discussion and debate. The methodological resources made available by these branches of inquiry therefore have implications for the coherent development and extension of knowledge management as a significant field of intellectual and practical application.
Introduction
The contemporary literature on knowledge management is either replete with questions about the nature of knowledge, and what it is exactly that is to be managed, or it disregards these concerns, content to leave the problem of knowledge as something of a ‘black box’ (Spiegler, 2000, p. 4), or to gloss over the issues involved with cursory treatments of explicit and tacit knowledge differences. Of those accounts that do attempt to furnish some explicit conceptualisation of knowledge, few look to philosophy, which is the discipline that most directly and searchingly examines questions about knowledge, and its origins and nature.
Theories of knowledge are critical, in a self-referential way, to an analysis and assessment of theory in any domain, and particularly in the domain that claims to trade in the theory and practice of managing knowledge itself. This is because what a theory of knowledge, or epistemology, claims as being sufficient or adequate for the justification of knowledge, determines the content and structure that theories can have, whatever their domain of interest. In other words, there is a reciprocal relationship between our understanding of what knowledge is, and our understanding of what knowledge management is, and therefore what it can be in practice.
Although underlying philosophical assumptions may exert a strong influence on the nature of theories, they are often not explicitly acknowledged and discussed in the knowledge management literature, and this can hinder a proper methodological evaluation of the field and hamper potential developments, particularly in areas of concern that are central to practice, such as the nature of tacit knowledge and creativity, and how such know-how might be communicated, leveraged, shared, and transferred effectively for organizational learning, innovation, change, and competitive advantage. Consequently, an account of the central issues, and an overview of recent developments in epistemology and the philosophy of science are provided here as a methodological contribution to the emerging discipline, with the aim of furnishing better explanatory machinery for explicating and justifying concepts and practices of significance to the field. However, due to the scope and complexity of this branch of inquiry, and because of constraints on space, the account given here is necessarily simplified and selective, and restricted to features that are most salient to developments in epistemology and its corresponding relationship to knowledge management. Nevertheless, what emerges about knowledge from this line of inquiry turns out to be much richer, and more complex and interesting, than the relatively uninformed, and overly simplistic and categorical distinctions that pass as currency in contemporary discussion and debate, particularly between forms of knowledge that are explicit or codifiable on the one hand, and implicit or tacit on the other.
I Think, Therefore I Am
The point of departure for our conventional understanding stems from the efforts of the philosopher Descartes to find a basis for knowledge that was incorrigible, indubitable, and infallible. Descartes was concerned with how true empirical understanding could be acquired, and how it could be distinguished from apparent knowledge, or false belief (Descartes, 1968). He concluded that the only thing he could be certain about was his own inner reality, that is, his own thoughts, feelings, doubts, etc. The reality of these things he took to be self-evident. Consequently, he arrived at a subject, an ‘I’, which, even if deceived into a false belief by an evil demon, could not itself be doubted to exist. Hence, his claim ‘Cogito ergo sum’ – ‘I think, therefore I am’. This claim to be a thinking thing implied a necessary truth because, he reasoned, even doubting entailed cognition, and therefore a subject that inescapably thinks must itself exist. From this foundation, the means for building knowledge was the rational soul, which was taken to be an innately structured and configured entity that was capable of distinguishing things that were self-evident from those that were not (Churchland, P.S. 1992, p. 243).
It is from Descartes’ understanding that modern conceptions of knowledge have stemmed. The clearest expression of this classical empiricist view - later to be known as ‘positivism’, and more recently as ‘logical empiricism’ - is the Justified True Belief account of knowledge, which has been adopted explicitly by some influential contemporary knowledge management theorists (see Nonaka, 1994; Nonaka and Takeuchi, 1995; Takeuchi, 2001; Nonaka, Toyama, and Konno, 2001). This view can be stated in the following way:
Person x knows that p, where p is some particular claim to knowledge, if and only if:
p is true
x believes that p, and
x is justified in believing that p for the reason q.
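For compactness, the account can also be rendered in epistemic notation. The following is a minimal sketch, where the operators K, B, and J are conventional shorthand for ‘knows’, ‘believes’, and ‘is justified’; the notation is illustrative, and is not drawn from the authors cited above:

```latex
% Justified True Belief, schematically:
%   K_x p    -- x knows that p
%   B_x p    -- x believes that p
%   J_x(p,q) -- x is justified in believing p for the reason q
K_x\,p \;\iff\; p \,\wedge\, B_x\,p \,\wedge\, J_x(p, q)
```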
However, this formula for justifying knowledge is beset with problems. For example, it creates the problem that q itself must also be a knowledge claim if it is to provide a justification for believing p, and if this is assumed to be the case, then the Justified True Belief account is circular. Furthermore, if q is assumed to be known, then there is the threat of an infinite regress of justifications occurring, as it can only be known by virtue of knowing something else, and so on. The way that this dilemma has usually been resolved is to argue that the Justified True Belief account is a case of derived knowledge, where the chain of implications stops at knowledge that is immediate, and that as with Descartes’ view, all of our derived knowledge rests on some foundation of immediate knowledge, like sense data, first person sensory reports, and observation statements, etc.
Hume’s Problem
It was the philosopher Hume who first recognised another problem with this general framework for justifying knowledge. He wondered how we can ever come to know the world as it really is, if all the mind has access to are sensory effects of the external world. Immediately experienced sense data are not themselves the objects of the external world, but are instead only phenomena. Hume concluded that if it is impossible to know reality except through the mediation of our senses, then the justification of our beliefs about the external world may be impossible, for if sense data can never be directly compared with the external world, we can never know how accurate our sensory experiences are (Churchland, P.S., 1992, pp. 246-247).
Hume’s problem therefore relates to the question of justifying the move from experience to claims about experience and the world, particularly since experiences themselves are neither true nor false; only expressions in language such as statements or propositions can be true or false. However, using language to report on and express what is experienced is an act of cognition, and that itself means that something has already been learnt or theorised, otherwise there would be no machinery available for such mediation between experience and the world in the first place. This implies that the mediating machinery also cannot be known noninferentially – it cannot be part of the foundations for knowledge. Thus, sense-data cannot be identified without some prior theoretical framework. Furthermore, if it is possible for cognition to be fallible in this task, then nothing can be justified from phenomenalist foundations.
In summary, the knowledge foundation of classical empiricism, as represented by the Justified True Belief account, cannot itself be a sensory experience, for sensory experiences are neither true nor false – only propositions are, and since the use of language is required to express the epistemology, cognitive mediation, prior learning, and theorising is necessarily entailed. Consequently, because of these difficulties, the classical empiricist theory of knowledge itself cannot be known noninferentially, that is, it cannot be self-referentially justified. This means that in terms of its own standards for acquiring and justifying knowledge, the epistemology is, according to its own tenets, unlearnable, and this inconsistency suggests a major incoherence (Evers and Lakomski, 1991, p. 6). The requirement that an epistemology be learnable is both basic and essential, since in specifying conditions for claims to count as reliable knowledge, a theory of knowledge implicitly embodies a theory of the powers of the mind, of which an empirically plausible theory of learning is a necessary and important part (Churchland, P.S., 1987, pp. 544-553).
Logical Empiricism
Despite these difficulties, and in part because of them, the central thesis of modern empiricist epistemology amounted to the claim that there are two kinds of things that can be known: first, the nature of the empirical world as derived through observation, such as matters of fact; and second, relations between ideas, such as matters of logic and mathematics. Propositions about the first domain are empirical, or ‘synthetic’, to use a term introduced by Kant, and are contingently true; and propositions about the second domain are ‘analytic’, and are necessarily true. Furthermore, there is a type of reasoning appropriate to each domain: first, observational and experimental reasoning is appropriate for matters of fact; and second, abstract (a priori) reasoning is appropriate for analytic matters. Hence, a clear distinction was drawn between observational and theoretical knowledge statements.
Following this development, a method was devised whereby theoretical inferences that were not themselves directly justified by observation could be considered as admissible knowledge statements. The solution was to have a framework in which inferential knowledge statements would be secured by induction from empirical statements. This move was facilitated by the development of mathematical and symbolic logic systems, which provided technical means, known as ‘correspondence rules’, for bridging the gap and attaching theoretical statements to the sense-data observations of the empirical world. This approach to the meaning of theoretical terms was also thought to be appropriate for dealing with the unobservable posits of science, such as quarks, curved space-time, centres of gravity, and so on, which are at scales that are either far below, or are otherwise invisible to, what can be observed during the course of our everyday interactions with the world.
This innovation meant that instead of trying to justify knowledge claims by deducing them directly from empirical foundations, with the help of the new mathematical logic, deductive relations could work the other way around (Russell, 1972, pp. 79-87). In other words, observation statements would be deduced from the statements of a theory under specified experimental conditions, and if the observations subsequently made matched the predictions, then the theory in question could be taken as confirmed. This approach was known as the hypothetico-deductive mode of justification, or the deductive-nomological theory of explanation (Hempel, 1965, pp. 246-249). Finally, a theory of meaning - essential for distinguishing genuine scientific statements from pseudoscientific ones - was derived from this conception, and was known as the verification theory of meaning. On this view, if claims to knowledge incorporated ethical or value statements, and if these were unable to muster empirical support in the way of content, then they could not be considered significant or meaningful, and would fall outside the epistemology’s domain of consideration. As such, value claims were deemed to be empirically unverifiable, and therefore no more than subjectively motivated theoretical claims, unknowable either directly or derivatively (Ayer, 1975).
Nevertheless, logical empiricism was fraught with complex philosophical and technical problems. Not least of these is the problem of induction, which was first recognised by Hume, and later, in the context of logical empiricism, by Popper. Hume’s argument was that moving from a case where it is stated, in effect, that ‘all observed x’s are y’, to the claim that ‘all z’s are (therefore) y’, is an inference that is not logically entailed unless an additional premise – a principle of induction – is invoked to justify it. However, in terms of phenomenalist foundationalism there is nothing in the way of evidence that would count as support for the principle of induction itself. To sustain the inference that some principle of induction holds requires the assumption of an additional premise, such as that nature is uniform, or that the future is similar to the past in certain ways. Thus, to justify belief in some principle of induction beyond what is provided by past and present observations requires the circular and invalid assumption of such a principle, since it cannot justifiably follow from the sort of epistemology provided by foundationalism.
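Hume’s point can be displayed schematically. In the sketch below (the notation is illustrative only), the inference from observed cases to all cases goes through only if a uniformity premise U is added, and U itself admits of no non-inductive support:

```latex
% Invalid as it stands:
\forall x\,\big(\mathit{Observed}(x) \wedge X(x) \rightarrow Y(x)\big)
\;\nvdash\;
\forall z\,\big(X(z) \rightarrow Y(z)\big)

% Valid only given a principle of induction U ('nature is uniform'),
% which would itself have to be justified inductively:
U \,\wedge\, \forall x\,\big(\mathit{Observed}(x) \wedge X(x) \rightarrow Y(x)\big)
\;\vdash\;
\forall z\,\big(X(z) \rightarrow Y(z)\big)
```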
Popper’s argument was that an inductively drawn conclusion that a particular hypothesis or theory has been confirmed is always vulnerable to defeat by new evidence involving as little as one future counter-instance, or refutation, of what the hypothesis or theory in question generally anticipates (Popper, 1995, pp. 33-65). Popper attacked as unscientific and too simplistic the belief that hypotheses could be confirmed or verified, for it is always possible to find confirming instances of any theory, if confirmations are all that are sought. For Popper, theories that are properly scientific are conceivably refutable; that is, they are not the result of confirming observations, but rather are tentative conjectures or proposals that arise from an existing and uncertain frame of reference, or framework of expectations and interests, and which, to the extent that they survive critical empirical tests, take us forward to a better understanding, or theory, of reality. On this view, the growth of knowledge proceeds not by accumulating instances of confirmation, but by justifying knowledge in a more indirect way, through a process of conjecture and refutation, where falsified theories are replaced by new and hopefully better conjectures that meet further tests with greater success, and so on. Popper called this method by which a solution to a problem is approached the method of trial and error, and in avoiding the problem of induction, his theory of knowledge shows itself to be synonymous with a general theory of learning, which has implications for how an epistemology can itself come to be known or justified.
No First Philosophy
Despite the force of Popper’s unorthodox arguments, developments elsewhere in the philosophy of science were to deepen and extend the complexities of epistemology profoundly, and to alter the general direction of debate. Drawing on work by Duhem (Duhem, 1953, pp. 235-252), Quine pointed out that falsifying a hypothesis was not a simple and straightforward logical process, because whole networks of theory, rather than single hypotheses alone, face scrutiny when falsifying tests or experiments are performed (Quine, 1951, p. 38; 1969, p. 79; Quine and Ullian, 1970). In other words, every hypothesis is accompanied by a number of auxiliary hypotheses, or assumptions, and when a test fails, any one of these could be the false one. Thus, there is no such thing as a crucial or absolutely conclusive once-and-for-all falsifying experimental result, as a test for any hypothesis is relative to the background assumptions involved. Hence, the empirical consequences that follow from the testing of a hypothesis are consequences of the whole theoretical network that supports the hypothesis in question. Another complexity was the realisation that observation statements on their own tell us very little about the empirical world, as they too are always embedded in a much wider network of statements, many of which have no direct connection with our senses. Thus, it is whole theories that are the basic units of meaning, and this is referred to as the network theory of meaning (Churchland, P.S., 1992, pp. 266-267).
Consequently, as parts of theoretical wholes, all observations are theory-laden. This implies that what we observe is not privileged as a source of knowledge, in the sense that it is incorrigible and immune from revision. Hence, observations cannot be the absolute foundation of science, and of reliable knowledge. We cannot, therefore, appeal to empirical adequacy as the sole criterion of epistemological adequacy, as claimed by logical empiricism. Thus, there is no reliable or certain a priori source of knowledge, or first philosophy, which functions as some Archimedean point outside of science, from which scientific theories can be pronounced as acceptable. Rather, our knowledge of the world is made up of a richly interconnected whole, or seamless web, of theoretical statements. This ‘web of beliefs’ (Quine and Ullian, 1970) also does not neatly divide up into scientific beliefs and non-scientific beliefs, observation statements and theoretical statements, or facts and values (Evers, 1988, pp. 5-7, 10-11). One consequence of this view is that science is self-conscious common sense, and that when we come to alter our theories in the light of experience, we use our best existing scientific knowledge to assist us with the process of revision or replacement. Thus, we use our best existing science to bootstrap our way to better theories that are more comprehensive, powerful, elegant, and simple, etc. (Churchland, P.S., 1992, pp. 264-265).
Coherence Justification
If the idea of having an indubitable foundation for knowledge is untenable, and if varieties of relativism are equally so - if for no other reason than that the issue of knowledge justification either lapses entirely, or is so weakened that little epistemic value remains, since a general discounting of justification makes problematic the question of why some theories are much better than others at solving problems, making predictions, or fulfilling expectations - then consideration of some form of coherence theory of knowledge appears both unavoidable and possible (Williams, 1980, p. 243; 1977; BonJour, 1985). Since empirical adequacy alone is insufficient for justifying claims to knowledge, or for adjudicating between the merits of rival theories, and is therefore only one criterion amongst others of comparable importance, the task of theory evaluation becomes a broader matter of comparing the global or systematic virtues of competing alternatives, in which choice is guided by the ‘superempirical virtues’ that theories possess. These virtues entail considerations of simplicity, consistency, conservatism, comprehensiveness, fecundity, explanatory unity, refutability, and learnability, which collectively constitute features of coherence justification (see Quine and Ullian, 1970, pp. 42-53; Evers and Lakomski, 1991, pp. 4, 37; Churchland, P.M., 1993, p. 146).
The value of these particular virtues may be outlined briefly. Conservatism is important because the less rejection there is of knowledge that we have sound reason to accept, the more plausible the hypothesis in question, all things being equal. Comprehensiveness or generality acquires its virtue from explanatory breadth, that is, by explaining more rather than fewer phenomena, and in this respect is closely related to fecundity, which measures the range of phenomena that a theory can account for. Comprehensiveness is also related to explanatory unity, for theories that bring an underlying conceptual link or commonality to the understanding and solution of a problem do better at generalising this knowledge from past experience to new cases in the future. Simplicity or economy functions by requiring the least explanatory apparatus to do the job of accounting for the widest range of phenomena possible, and in this regard it has a close relationship with the virtue of comprehensiveness. Refutability is a virtue because without it a theory cannot be said to predict or explain anything. Its value is measured by the cost of retaining a theory in the face of falsifying evidence. The virtue of learnability requires that theories cohere with our best scientific accounts of human cognition and how we are able to acquire knowledge in the first place, and that these accounts are not inconsistent with other reliable bodies of knowledge that go to make up our global scientific world view. In this regard, the virtue of consistency can be viewed as being the key to coherence (see Quine and Ullian, 1970, pp. 3-11, 42-53; Evers and Lakomski, 1991, p. 9; Churchland, P.M., 1993, pp. 222-228).
The superempirical virtues are therefore a measure of the global excellence of a theory, and are relevant to an estimate of its comparative advantages and disadvantages over other contenders. Furthermore, the strategies and criteria that the brain uses for recognising and organising information, that is, for sifting out noise from meaningful information, rest on values such as simplicity, coherence, and explanatory power. On this view, theories cannot be measured against each other in any absolute sense – it is only possible to compare the relative merits, or respective global virtues, of competing accounts, so that a judgement can be made that one theory is better than, or more coherent than, another. In practice, this is a difficult and complicated matter; however, the following set of rules for theory preference can be used as a mechanism for facilitating the selection of the best from a number of competing explanatory theories:
If T1 and T2 are competing theories in need of comparative evaluation, and all other things are equal, we should prefer T1 to T2 if:
T1 is simpler than T2.
T1 explains more than T2.
T1 is more readily testable than T2.
T1 leaves fewer messy unanswered questions behind than T2.
T1 squares better with what we already have reason to believe than does T2 (Lycan, 1988).
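As an illustrative sketch only - Lycan’s rules are informal, and nothing in the text implies they reduce to arithmetic - the comparison might be caricatured as a weighted scoring of the superempirical virtues. All names, scores, and weights below are invented for illustration:

```python
# Hypothetical sketch: comparing two theories on the superempirical virtues.
# Virtue names loosely follow Quine and Ullian (1970); the numeric scores
# and weights carry no canonical values.

VIRTUES = ["simplicity", "comprehensiveness", "testability",
           "conservatism", "explanatory_unity"]

def prefer(t1: dict, t2: dict, weights: dict) -> str:
    """Return the name of the theory with the higher weighted virtue score."""
    def score(t):
        return sum(weights[v] * t.get(v, 0.0) for v in VIRTUES)
    return t1["name"] if score(t1) >= score(t2) else t2["name"]

# Example usage with made-up scores on a 0-1 scale.
T1 = {"name": "T1", "simplicity": 0.8, "comprehensiveness": 0.7,
      "testability": 0.9, "conservatism": 0.6, "explanatory_unity": 0.7}
T2 = {"name": "T2", "simplicity": 0.5, "comprehensiveness": 0.8,
      "testability": 0.6, "conservatism": 0.7, "explanatory_unity": 0.5}
weights = {v: 1.0 for v in VIRTUES}  # 'all other things being equal'

print(prefer(T1, T2, weights))  # -> T1
```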
Epistemology Naturalized
Since an epistemology is itself a set of knowledge claims, our understanding of it, and of science itself, are therefore corrigible, and questions as to how it is that we can come to acquire knowledge and to revise our convictions are – to the extent that human beings are counted as part of the physical universe – at bottom empirical questions about the natural world. Without a first philosophy or secure foundation for knowledge, an epistemology must therefore embody the most powerful and sophisticated theories of learning and knowledge acquisition that our best sciences provide, for justifying and explaining in a self-referential way, how scientific knowledge is possible. Hence, in specifying the conditions for knowledge justification, an epistemology implicitly embodies a theory of mind (Evers and Lakomski, 1991, pp. 6, 8). On this view, epistemology becomes naturalised, and falls into place within the wider fabric of our scientific knowledge, as a chapter of psychology. Consequently, there is a reciprocal containment of epistemology in natural science, and of natural science in epistemology (Quine, 1969, pp. 82-83).
In specifying conditions for knowledge claims to count as justified, an empirically plausible theory of perception, learning, memory, representation, and cognition is essential. In the case of foundational epistemologies on which empiricist conceptions of knowledge and science have been based, the processes of learning and perception were presumed to occur via the receipt of sensory impressions, and cognition was assumed to be a matter of the logical manipulation of these impressions (Evers, 1991, p. 527). Thus, the traditional conception of knowledge essentially viewed the representation of theories as consisting of sets of sentences, or propositions, in which stated laws or generalisations, and statements about the context at hand, together provided the framework for explanation and/or the deduction of predictions. On this sentential model, changes and updates in the overall set of one’s beliefs occurred when some observation or theoretical deduction supplied a new belief to the overall structure of one’s set of belief statements. Thus, rationality could be represented as a set of formal rules for the addition, deletion, and manipulation of belief statements. In such a conceptualisation, the ultimate virtue of a theory rested in truth.
However, recent developments at the fast-moving edge of a cluster of interacting scientific disciplines, which together inform what is today known as ‘the new cognitive science’ (see Allix, 2000), are gradually uncovering an entirely new and somewhat different understanding of the nature of mind, brain, and knowledge, which has, amongst other things, profound implications for our understanding not only of ourselves as ‘wild epistemic engines’ (Churchland, P.S. and Churchland, P.M., 1983), but of the new discipline of knowledge management. Recent advances in fields such as computational neuroscience, cognitive neurobiology, and connectionist artificial intelligence have generated novel understandings of the fundamental principles of brain structure and function, and along with these developments, revisions to our conventional theories of knowledge. The nature of the discoveries in these fields is such that work in the philosophy of science is now no longer able to proceed without their input, for new insights into the principles of brain representation and computation have consequences for the whole enterprise of epistemology itself.
This reformulation in our understanding should not be too surprising, for it has occurred before in the history of philosophy. Indeed, the growth and evolution of knowledge itself has driven this transformation (Hacking, 1975). For instance, in the seventeenth century ideas were seen to be the objects that linked the Cartesian ego (the internal world of subjective experience) with res extensa (the outside world), and as can be seen with the development of theories of knowledge, these have since been replaced with the sentence as the thing that represents reality in a body of knowledge. So ingrained is this view that today, in what is referred to as ‘explicit’ knowledge, the same basic pattern of thought prevails, in that such knowledge is usually, although not always, associated with what people are able to express about their experience in a declarative manner, such as a written or verbal statement. In other words, explicit knowledge is seen to be codifiable, or expressible in some symbolic representational form, such as the spoken or written word.
Brain, Mind, and Knowledge
The sentential view of knowledge has been so influential in twentieth century epistemology that most researchers in the field of artificial intelligence (AI) have modelled their computational programs on the assumption that the administration of intelligent behaviour consists of the manipulation of a sequence of symbols according to a set of rules. Hence, on this view, human intelligence, adaptation, and learning consist of appropriate changes or updates to our store of symbolic representations, or beliefs, as a function of experience. However, these assumptions, which have underpinned the ‘classical’ approach to AI research, have from the beginning been continuously thwarted and frustrated by many obstacles and intractable problems. These include failure to emulate realistically the cognitive and behavioural skills of humans, and other non-linguistic animals, in effortlessly recognising and responding to patterns embedded in complex and noisy stimulus fields; and the brittleness and inflexibility that AI systems manifest in coping satisfactorily with imperfect, partial, or ambiguous information. Furthermore, classical AI systems have been unable to accommodate the subtlety and complexity of context-dependent knowledge, which in effect has limited them to very restricted and narrow domains of application (Bereiter, 2000, pp. 226-238; Coveney and Highfield, 1996, pp. 126-130).
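As a caricature of this classical picture (the rules and inputs below are invented for illustration; no actual AI system is this simple), cognition is modelled as exact, rule-governed symbol manipulation, and the brittleness described above appears as soon as input fails to match a rule literally:

```python
# Hypothetical sketch of classical symbolic AI: cognition as rule-governed
# symbol manipulation. Matching is exact, so noisy or partial input that
# does not literally satisfy a rule's condition yields no response at all -
# the 'brittleness' the text describes.

RULES = {
    ("has_feathers", "flies"): "bird",
    ("has_fur", "barks"): "dog",
}

def classify(features: tuple) -> str:
    return RULES.get(features, "UNRECOGNISED")

print(classify(("has_feathers", "flies")))        # -> bird
print(classify(("has_feathers", "flies_badly")))  # -> UNRECOGNISED: no partial match
```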
These difficulties are not simply a reflection of the great complexity and scale of the task. Rather, they stem from conceptual and methodological considerations. In particular, questions concerning the physical realisation of cognitive representations and computations have been treated as being mostly irrelevant to the research concerns of investigators, who have focused instead on the features and functions of the cognitive ‘program’ that is assumed to operate in the human brain. The methodological upshot has been the so-called ‘top-down’ approach, which has largely disconnected itself from the possible pressures and constraints that scientific knowledge relating to brain structure and function might impose on theorising and research (Churchland, P.M. 1993, p. 156).
However, in the past couple of decades a ‘bottom-up’ approach has emerged with the development of a new class of computer programs called ‘artificial neural networks’ (ANNs), which has brought the tasks of simulating and systematically investigating how knowledge is represented and processed in the brain within reach, for ANNs mimic more accurately and realistically the way nature implements the cognitive and behavioural processes of living creatures. Hence, the computational architecture being explored is ‘based on considerations of how brains themselves might function’ (Rumelhart, 1989, p. 134). This branch of research is referred to as ‘connectionism’, or ‘connectionist AI’, and it has provided significant new concepts and tools for exploring the nature of mind and cognition, and some of the deepest questions in epistemology and the philosophy of science.
The co-evolution of the research disciplines that now inform the brain sciences has been such that cognitive science can now be said to possess a presumptive understanding of how the brain works. This includes an understanding of how the brain represents and processes information about the general features of the world, of how fleeting information about the here and now, and time and space is represented and processed, of how complex but coherent motor behaviour is generated, and of how the brain can modulate its own cognitive activities as a fluid and changing function of current interests and salient background information. However, of most significance, the new cognitive and brain sciences now furnish a coherent account of what it is for the brain to have and deploy a conceptual framework in the ongoing business of perceptual recognition and the guidance of practical behaviour (Churchland, P.M., 1998, p. 859; see also 1998a; 1995; 1993).
Connectionist Psychology
Connectionist models of the mind-brain are of philosophical and scientific interest because they make no use of the familiar sentential framework of cognition, and no use of the familiar framework of deductive and inductive inference. Rather, the brain is understood to represent the world by acquiring a well-tuned configuration of its approximately 10^14 synaptic connections, and it is this vast matrix of connections, and the strengths or ‘weights’ of the various excitatory and inhibitory connections, which determines the framework of categories into which the brain divides the world. The fleeting features of the world are therefore represented by neuronal patterns of activation or excitation, which tend to fall into one or other of the categories that the brain has acquired from experience. Thus, connectionist research into brain structure and function suggests a conception of cognition in which the principal form of representation is a high-dimensional activation vector (pattern), and the principal form of computation is vector-to-vector transformation, as patterns of incoming information are interpreted by intervening matrices of configured synapses and populations of neurones, which collectively embody the brain’s acquired conception of the world (Churchland, 1993, p. 209). Hence, the brain’s knowledge is in the connections, and is implicit in its structure, rather than explicit in the states of its neuronal units themselves (see Rumelhart, 1989, pp. 135-136).
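The elementary computational step being described - an activation vector transformed through a matrix of synaptic weights - can be sketched minimally as follows; the dimensions are toy, and random weights stand in for a trained configuration:

```python
import numpy as np

# Minimal sketch of connectionist computation: representation as an
# activation vector, computation as vector-to-vector transformation
# through a matrix of synaptic 'weights'. All sizes and values are toy.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))        # 5 input neurones -> 3 output neurones
input_pattern = rng.random(5)      # fleeting sensory activation vector

# The acquired 'knowledge' lives in W, not in the activations themselves.
output_pattern = np.tanh(W @ input_pattern)  # squashed to a firing range
print(output_pattern)
```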
As activation vectors move through the brain’s processing hierarchy, such pattern-to-pattern transformations eventually yield activation patterns, and sequences of activation patterns across motor neurones, whose output patterns drive the activity of muscles rather than other neurones. Thus, the processes of pattern transformation, of category interpretation, and of appropriate responding, are functions that are typical of theories, for the function of the interconnected conceptual framework that the brain has acquired ‘is to produce and steer well-tuned behaviour’ (Churchland, P.M. 1995, pp. 90-91; 1993, p. 177). Hence, the theories, or knowledge that the brain acquires about the world in this way, are entirely sub-linguistic or sub-symbolic (Smolensky, 1988, pp. 1-74), and thoroughly pragmatic in nature.
This naturalistic view of knowledge has a number of important features, which shed light on a number of issues of concern to the theory and practice of knowledge management. Firstly, the brain’s high-dimensional representations of the world embody an enormous amount of (presumptive) information that is at a level of analysis far below the level of articulation typical of language. This property is a consequence of the massively parallel architecture and distributed nature of the brain’s processing networks, which enables the nervous system to make very fine-grained sensory and cognitive discriminations that are highly contextual, and that in their subtlety far exceed the analytic and descriptive representational capacities of language. These sub-symbolic representations of the world correspond to what is referred to as ‘implicit’ or ‘tacit’ knowledge, and when manifested in finely calibrated perceptual and motor skills, constitute what is known as ‘know-how’.
Secondly, these representations, or prototypical categories, can be activated by inputs that incorporate only a small part of the (presumptive) information that they embody. Such vector completion provides the brain with a capacity for perceptual closure, where it fills in and completes information that is missing in sensory input from the world. Thus, the brain is very good at ‘jumping to conclusions’, or exercising inductive inference, when provided with incomplete sensory input. The brain is also able to perform this task swiftly because, unlike in digital computers, information storage and processing are not separated. Massively interconnected parallel distributed processing (PDP) systems are content-addressable, and are therefore able to gain very rapid access to the total store of information embodied in a representation, even if the input pattern is distorted or only a partial fragment (Churchland, P.S. and Sejnowski, 1990, p. 232; Churchland, P.S., 1992, p. 406). Hence, past learning, and local embedding conditions at a particular point in time, trigger the brain to make an inference to the ‘best explanation’ that it has of the input phenomena at hand.
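Vector completion of this kind is the signature behaviour of autoassociative networks. A Hopfield-style sketch (a standard textbook model used here for illustration, not one cited in the text) shows a degraded input settling back onto the stored pattern:

```python
import numpy as np

# Hopfield-style autoassociative memory: a standard illustration of
# content-addressable storage and 'vector completion'. The stored pattern
# and the degraded probe are toy examples.

stored = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(stored, stored).astype(float)      # Hebbian storage
np.fill_diagonal(W, 0.0)

probe = stored.copy()
probe[:3] = 1                                   # corrupt part of the pattern

state = probe.astype(float)
for _ in range(5):                              # recurrent settling
    state = np.sign(W @ state)

print(np.array_equal(state, stored))            # -> True: pattern completed
```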
While having adaptive advantages and providing a basis for anticipation, prediction, and speculation about events in the world, this ampliative capacity of the brain also has a drawback, for in carrying substantial interpretive and predictive content, representations are fallible, and are subject to empirical criticism and correction. In this regard, prototype activation is akin to an inductive argument, in that there is more information contained in the conclusion than in all the antecedent premises combined. In having this ‘knowledge expanding’ characteristic, inductive arguments, like activated prototypes, are uncertain and only probabilistic. Hence, unlike deductive arguments, in which the truth of the premises makes a false conclusion impossible, the possibility for error is ever present in the epistemological nexus between reality and inductive representation (see Giere, 1979, pp. 34-38; Quine and Ullian, 1970, p. 58). However, this feature of prototype representations is consistent with what is known about learning and the growth of knowledge generally (Popper, 1995).
Thirdly, learning occurs when the brain acquires - through repeated exposure to varied instances of relevant environmental stimuli, and a steady calibration of its myriad synaptic connections through the feedback of error - a representation about something in the world, which when activated by a relevantly similar input produces an appropriately finely-tuned response. The test for a brain educated in this way is whether it can respond correctly to some new set of relevant inputs, which are similar in their general statistical properties, or features (Crick, 1994, p. 190). If the brain responds appropriately, then it can be said that the ‘knowledge’ it has acquired and stored in its vast array of synaptic connections has generalised successfully to new cases (Churchland, P.M. 1993, p. 167).
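A minimal sketch of such error-driven calibration, using a simple delta-rule style update on an invented task, illustrates both the ‘feedback of error’ and the test of generalisation to new but statistically similar cases:

```python
import numpy as np

# Sketch of error-feedback learning: weights are nudged repeatedly until
# the network responds correctly, then tested on unseen but statistically
# similar inputs. Task, data, and learning rate are all invented.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])        # the regularity to be learnt
y = np.sign(X @ true_w)                          # the training 'environment'

w = np.zeros(4)
for _ in range(50):                              # repeated exposure
    for xi, yi in zip(X, y):
        error = yi - np.sign(xi @ w)             # feedback of error
        w += 0.01 * error * xi                   # calibrate the 'synapses'

X_new = rng.normal(size=(100, 4))                # novel but similar cases
accuracy = np.mean(np.sign(X_new @ w) == np.sign(X_new @ true_w))
print(f"generalisation accuracy: {accuracy:.2f}")
```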
Another cognitive skill made possible by the brain’s capacity for recurrently and cyclically processing patterns of activity through its substrates is interpretive plasticity. This allows the brain to deal with confusing and ambiguous input from the environment, and/or information originating at higher computational levels in the brain that re-enters downstream levels of the processing hierarchy via ‘descending’ or ‘reentrant’ pathways (Edelman, 1989), and which serves to bias, tilt, or steer cognitive processes along a range of possible pathways. Interpretive plasticity therefore stems from the intrinsically theory-laden nature of perception and recognition (understanding), which, along with the ability for recurrent computation (thinking), is the engine by which the brain generates novel associations and possibilities for thought and action, or a capacity for ‘creativity’ and ‘innovation’.
Thus, intelligence may be construed as being more than merely a matter of responding appropriately to a changing environment. Rather, an intelligent system, be it an individual, social, or organizational entity, is one that is capable of exploiting information and energy in a way that increases the information it embodies, and possibly the internal physical ordering and organization that it has, in relation to its environment. Hence, on this view, learning turns out to be an essential feature of intelligence (see Churchland, P.M. 1988, pp. 173-174). It therefore follows from this that intelligent organizations must also be organizations that learn.
Neurophilosophy of science
From an epistemological point of view, cognition therefore consists of the activation of recurrent pattern processing vectors, which enable the brain to recognise some situation, which may be otherwise partial, unfamiliar, ambiguous, puzzling, novel, or problematic in some way, as an instance of something that is well represented by an existing prototype, and its associated categories. Activation entails completing input vectors that are incomplete or partial, and in the process imposing some structural order on the content of incoming information. Hence, prototype activation, or the ‘insight’ of recognition and understanding, brings additional information to bear on inputs in an ampliative manner. In triggering more information than is present in the input alone, prototype activation enables the brain to construct an anticipatory and speculative hypothesis, and to make some sort of adaptive sense of the case at hand in its particular environmental context, and to predict aspects of the situation that are not yet perceived, so that it can respond accordingly (see Churchland, P.M., 1993, pp. 208-212; 1995, pp. 114-117; Churchland, P.M. and Churchland, P.S., 1996, pp. 278-279).
Ampliative understanding therefore consists of the total sum of shared structure that the brain has extracted from its varied ‘education’, and encoded in its numerous synaptic connections (McClelland et al., 1995, p. 428). The brain's capacity for massively parallel distributed processing permits acquired prototype activation vectors that have some complex of relational or structural features in common to cluster together or unite in the same region of some high-dimensional and abstract representational space to form a prototypical hot spot, which makes the brain extremely sensitive to similarities along all relevant stimulus dimensions. From a dynamic point of view, a hot spot functions as an ‘attractor’ that draws a wide range of similarly related cases towards it, rather like the gravitational field of a black hole sucking in surrounding stellar matter. Hence, the virtues of simplicity and conceptual unity play an important and related role in any adequate epistemology, for they facilitate superior generalisation by generating the simplest possible hypotheses about what structures might lie hidden in, or behind, various input vectors (Churchland, P.M. 1993, pp. 179-181, 204-206, 228).
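The idea of prototypical ‘hot spots’ attracting relevantly similar cases can be caricatured as nearest-prototype recognition; the prototypes, dimensions, and probe below are all invented for illustration:

```python
import numpy as np

# Sketch of prototypes as 'hot spots' in a representational space: each
# category is summarised by a point, and a novel input is drawn to
# whichever prototype it most resembles along all stimulus dimensions.

prototypes = {
    "dog":  np.array([0.9, 0.1, 0.8]),
    "bird": np.array([0.2, 0.9, 0.3]),
}

def recognise(stimulus: np.ndarray) -> str:
    """Activate the nearest prototype, attractor-style."""
    return min(prototypes, key=lambda k: np.linalg.norm(stimulus - prototypes[k]))

# A noisy, partial input still 'falls into' the nearest category.
print(recognise(np.array([0.8, 0.2, 0.6])))  # -> dog
```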
From this perspective, the unit of knowledge, and of understanding, is something that is not represented in the brain in the form of an explicit set of codifiable symbols, such as a set of sentences, statements, or propositions about the world. On the prototype model outlined here, knowing or comprehending something consists of having a grasp of certain paradigmatic kinds of situations and processes, and of possible variations thereof. Hence, acquiring knowledge and understanding in some domain entails becoming familiar with various contextual states and causal processes, which together constitute the features identified by the relevant learned prototype. On this view, the evaluation of knowledge is therefore not a matter of logical consistency with observation sentences, or inductive inference or confirmation therefrom, as demanded by logical empiricism. Rather, the virtue of a theory in prototypical form rests in the many uses to which it is put. Thus, as a collection of perceptual, explanatory, manipulative, and other associated abilities embodied in the synaptic configurations of the brain, evaluation becomes a pragmatic matter, rather than a purely logical or formal one (see Churchland, P.M. 1995, pp. 271-286). Hence, if the unit of representation in the brain is not a sentence or proposition, then the virtue of any theory in prototypical form ‘will be something other than truth, and its relation to the world will be something other than reference’ (Churchland, P.M., 1998b, p. 42). Consequently, how any given theory is evaluated will depend on the context of its application, the aims and interests of the cognitive agents concerned, and the kinds of solutions that are thought to be valuable, useful, or plausible to the case at hand, which together boil down to an overall goodness-of-fit in satisfying a complex set of soft constraints (see Rumelhart, 1989, pp. 142-147; Bereiter, 1991, pp. 10-16). Since there are a range of dimensions along which individuals are bound to differ in any given instance, evaluation will necessarily entail a complex process of assessment and negotiation, in which the superempirical virtues will unavoidably play a crucial guiding role in settling on the best global account of the situation in question.
The role of language in the cognitive economy
One of the insights of the new cognitive science is that powerful non-symbolic and distributed representations, in the form of appropriately trained sensory-motor neural maps in the brain, underlie much human expertise, knowledge, and judgement. On this view, symbolic or codified forms of knowledge, such as language, become a rather superficial and conventional representation of a way of understanding in some problematic context. Language may therefore be construed as ‘a surface abstraction of much richer, more generalised information processes in the cortex, a condensation fed to the tongue and hand for social purposes’ (Hooker, 1975, p. 217). In this respect, linguistic or symbolic formulations of knowledge reduce the richness of experience into more compact and sparse forms of representation. Hence, the linguistic representations of valid law-like generalisations in a scientific theory can be seen to function as compression algorithms, which economically condense vast amounts of information into a single symbolic formula, or a collection of such formulae (Evers and Lakomski, 1993, p. 145; 2000, p. 18). Thus, symbols are parsimonious semantic representations of one or more kinematically and dynamically richer general prototypes that occur in the brain. As such, they are well suited as mediums of exchange in complex institutional contexts, which depend on external and public representations for sharing, extending, and enhancing theoretical and practical capacities (Churchland, P.M. 1993, p. 224; Churchland, P.M. and Churchland, P.S. 1996, p. 226). As summaries of experience, linguistic/symbolic representations are most realistic when describing relatively invariant contexts. However, as representations of, and as guides to, practice, the value of compression algorithms diminishes where varied and complex contextual factors predominate (see Evers and Lakomski, 2000, p. 18).
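The compression-algorithm analogy can be made concrete with a toy example: a two-parameter law-like formula standing in for an arbitrarily long table of observations (the data and coefficients below are invented):

```python
import numpy as np

# Sketch of a law-like symbolic formula acting as a 'compression algorithm':
# a two-parameter expression condenses a long table of observations.

x = np.arange(1000, dtype=float)
observations = 9.81 * x + 3.2             # 1000 stored numbers...

m, b = np.polyfit(x, observations, deg=1)
print(f"y = {m:.2f}x + {b:.2f}")          # ...condensed to two symbols

# The formula regenerates the data wherever the context is invariant,
# but a richer, noisier context would not compress so neatly.
```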
Language and other forms of symbolic representation therefore make collective cognition possible, and enable humans to address and solve problems that would otherwise be insoluble to solitary individuals, for the vocabulary of an inherited language constitutes an abstract template that narrows down an individual’s search space during learning or problem solving. Furthermore, in its spoken and written forms language constitutes a form of extrasomatic memory, through which the collective and accumulated learning of a culture can be effectively passed on from one generation to another (Churchland, P.M., 1995, p. 270). As such, language and other external artefacts and representations of collective learning may therefore be construed as ready-made cognitive ‘prefabricates’, which when internalised facilitate socialisation (see Goldberg, 2001, p. 52). Language also reduces the complexity of conceptual structure by pulling together many concepts under one symbol, making it possible to establish increasingly complex concepts, and to use them to think at levels of abstraction that would otherwise be impossible (Damasio, A.R. and Damasio, H., 1992, p. 63). Thus, linguistic representations may be said to constitute human knowledge in an objective, independent, and collective sense, with which the knowing subject interfaces (Hacking, 1975, p. 187). However, linguistic representations do not constitute the knowing subject’s understanding in the first instance, for understanding as prototypical experience in multiform vectors and vector sequences in the brain antedates the development of language, and is something quite distinct from it. Consequently, human understanding resides primarily and originally within the brain, and therefore an adequate account of this reality is a prerequisite for sustaining a coherent account of knowledge management concepts and practices.
The embodied and embedded brain: situated cognition and action
Since the body and its sensory machinery are an indispensable frame of reference for mind and cognition, the mind is not just embrained, but is also profoundly embodied (Damasio, A.R., 1996). Furthermore, since cognitive representations of any kind are known to induce corresponding physiological responses in the organism that has them (the well-known galvanic skin response (GSR) is an example of this), thereby creating a sense of a biological self for the creature, feelings associated with subjectivity and emotion constitute an integral component of the machinery of cognition and reason. These feelings qualify our perceptions, modify our comprehensions of the world, and are therefore just as cognitive as any other perceptual or cognitive image or experience. The functional advantages of such emotion-laden embodied cognition and reasoning are as follows: firstly, it confers an epistemic advantage by alerting an organism to the salience of particular appetitive or aversive features in a complex environment that are critical to its survival; and secondly, embodied cognition allows an organism to reduce the number of alternatives that need to be considered by sifting out options that are irrelevant to its interests, thereby also increasing the accuracy and efficiency of deliberative and decision processes (Damasio, A.R., 1996, pp. 133, 173-175). Embodied cognition is therefore crucial both to practical reasoning and to the normal deployment of explicit and declarative knowledge in making real-life choices and decisions. Hence, having appropriate feelings may be essential for the skillful application of prototypical concepts and theories in complex social and practical situations (see Churchland, P.S., 1998, pp. 248-251).
Thus, embodied cognition functions to assign different values to the decision options that individuals - embedded in the reality of their physical and social contexts - actually make. Hence, ‘body, regulation, survival, and mind are intimately interwoven’ (Damasio, A.R., 1996, p. 123). This conclusion therefore appears to be at odds with Descartes’ famous claim ‘Cogito ergo sum’, which had the effect of separating what was assumed to be an innately rational mind (res cogitans) from a non-thinking and physically extended body (res extensa) (Descartes, 1968, pp. 53-60). The perspective on mind, cognition, and knowledge afforded by the new cognitive science suggests rather that the opposite relationship would be more appropriate (Damasio, A.R. 1996, pp. 247-250). Consequently, the error in Descartes’ formulation of knowledge might best be corrected by asserting ‘I am, therefore I (can) think’.
Because embodied prototype activation vectors are responsive to specific stimulus profiles consisting of real-valued elements and features of high dimensionality, which are computed by transforming activation vectors through a series of massively parallel soft constraints to a correspondingly finely-tuned output, cognition and behaviour are intrinsically situated, or context-dependent (see Brown, Collins, and Duguid, 1989, pp. 32-42; Vera and Simon, 1993, pp. 7-48; 1993a, pp. 77-86; 1993b, pp. 117-133; Greeno and Moore, 1993, pp. 49-59; Agre, 1993, pp. 61-69; Suchman, 1993, pp. 71-75; Clancey, 1993, pp. 87-116). Furthermore, because memory is ‘content-addressable’ (Churchland, P.M. 1998c, pp. 268-269), in that representational content is not physically separated from computational processes in the brain, and processing is accomplished simultaneously and recurrently, cognition has the immediacy of a situated experience. This property of cognition, along with embodied emotions and feelings, confers distinct advantages on organisms that have evolved in contexts where existence is often precarious and the demands of adaptation and survival beckon relentlessly. Success in responding to these demands has required intelligent adaptive organisms, such as humans, to draw in parallel not only on acquired experiential knowledge of a situation, but also on the wider context of acquired social and cultural knowledge (see LeDoux, 1994). However, social and cultural knowledge is not just of an explicit declarative kind, as in language and other sociocultural symbols, but is also distributed and manifested in the artefacts, technologies, and arrangements of the surrounding physical and institutional environment, which includes the embodied and embedded brains of other human beings (see, for example, Hutchins, 1995). Thus, human knowledge is physically and socially distributed in nature.
Just as humans are situated cogitators and actors, they are also, therefore, situated learners. Hence, the history of our learning as a species, and the development of our cultures in the form of institutions, technologies, and practices, etc., is, from the new perspective in cognitive science, a set of practical responses, or solutions, to problems that have confronted and frustrated human existence. Thus, as an accumulated set of solutions to existential problems, a culture functions through organised learning and socialisation to shape and structure, in various subtle and context-dependent ways, the internal representational framework, and related patterns of responses, which identify particular individuals and social groups as members of that culture, and which characterise the ways in which they live their social lives. In this light, culture in the widest sense, as it is lived and constructed at a particular historical place and time, can be conceived of as a set of characteristic behavioural and material dispositions, or physically encoded patterns of knowledge, which are embodied in the central nervous systems of the members of a given social group (Evers, 1989, p. 79), such as a particular community-of-practice, for instance (see Wenger, 1998; Brown and Duguid, 1996; 2000; 2001), and that are embedded in the surrounding manufactured environment.
The parallel and distributed nature of representation and computation in the nervous system, and its extension and distribution throughout the body and out into the features of the surrounding world, implies a view of cognition and culture as intimately interwoven and related. On this view, cognition extends into various external structures and processes that lie beyond the craniums of solitary individuals. Cultural structures and processes can therefore be understood broadly as extensions of the mind designed to facilitate and augment individual and social capabilities, and the satisfaction of needs. As manifestations of culture designed to fulfil collective needs, institutions and organizations are therefore systems of 'distributed information processing and problem solving which embodies a variety of strategies of decomposition and coordination' (Evers and Lakomski, 2000, p. 85). In this regard, the most salient feature of organizational decomposition is the division of labour, which also implies a cognitive division of labour. An issue of fundamental importance to the management of knowledge, therefore, is the question of how distributed divisions of knowledge in organizational contexts are to be coordinated optimally for organizational benefit and survival.
Implications for knowledge management
Recent work in epistemology, the philosophy of science, and the branches of inquiry comprising the new cognitive science, uncovers an understanding of knowledge somewhat different from conventional conceptions, which has consequences for familiar ways of thinking about knowing, and some general implications for knowledge management concepts and practices.
First among these, conventional distinctions between ‘knowing that’ and ‘knowing how’, or theory and practice, which roughly speaking correspond to explicit sentential or propositional knowledge on the one hand, and knowledge of skills on the other, dissolve from this perspective. From the point of view of the new cognitive science, all ‘knowing that’ is just a form of ‘knowing how’, for the task of knowing how to read or produce a particular linguistic string of words and sentences, or other symbolic tokens, and of knowing how to use concepts to deliberate, evaluate, manipulate, and predict, for example, is for the brain still an entirely practical matter of processing prototype vectors, and sequences of such vectors. In this regard, Brown and Duguid (2001, p. 51) have noted that ‘… the distinction between “knowing how” and “knowing that” does not support a simple separation between practice and theory. Thinking, after all, is a kind of doing.’ Thus, a large amount of practical and professional knowledge is better construed as a matter of pattern processing rather than the logical manipulation of rule-based sentential structures. At best, language-like formulations of experience are useful to the extent that they can go proxy for experience (Evers and Lakomski, 2000, p. 35); however, they cannot replace or instantiate the experience of knowing, and this has implications for the relative value that is attached to knowledge structured in static codified forms, and knowledge realised as a living, evolving, and dynamic process in organisational and knowledge management contexts.
On this question of relative value, if value is attached to knowledge in the first sense, as information, then knowledge management may be viewed as the management of codification and of codified expressions, or ‘captured’ forms of knowledge, which in themselves are static artefacts that have no meaning, except that given them in their creation, and which require a further act of cognition, or interpretation, before they can be used or have some value extracted from them. If, by contrast, value is attached to knowledge in the second sense, then knowledge management may be construed as the management of an ampliative process, in which knowledge is created through learning, and used for dealing with unfolding complexity and novelty, and in solving organizational problems.
This difference in value orientation goes to the heart of a growing controversy and debate concerning the proper scope and application of knowledge management concepts and practices, particularly in terms of the narrow and problematic emphasis that has been placed on information management and technology-centric conceptions of the field (see Malhotra, 1997; McDermott, 1999; Spiegler, 2000). Davenport and Prusak (1998, pp. 4-5) characterise the nature of this problem as ‘the confusion of information – or knowledge – with the technology that delivers it,’ and they emphasize that ‘the medium is not the message’ in this context. Hence, there is a growing awareness that knowledge management is not solely a technology or information management problem, but is rather a socio-organizational and cultural process issue centred on the promotion of intelligent collaboration, active organizational learning, and innovation (see McElroy, 1999; Tiwana, 2000, pp. 10, 53, 58; Wick, 2000, pp. 517-518). However, in correcting for the deficiencies of conventional thought and action, the appropriate response is not to abandon or ignore the virtues of information systems and technologies for managing organizational knowledge, but to integrate them intelligently with human cognitive properties (see Lueg, 2001, pp. 151-159), and organizational learning cultures, in support of collaborative knowledge creation, sharing, and utilization processes.
Despite considerable investment, attempts to elicit, capture, and codify knowledge for expert systems have been problematic. One reason for this is methodological: such systems have been designed and built on the assumption that knowledge is declarative and more or less ‘unitary’ in nature, when research indicates the contrary, namely that expert knowledge is of several different kinds, is represented and manifested in various sensory-motor modalities, and is mostly tacit. Therefore, rather than creating systems intended to replace human expertise, a better approach might be to develop systems that augment, complement, and support human proficiencies, and which compensate for inherent human fallibilities, weaknesses, and limitations. Hence, on this view, humans and computers would be better viewed as ‘joint cognitive systems’ (see Berry and Dienes, 1993, pp. 133-136).
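A minimal sketch of such a joint cognitive system, under the assumption (invented here for illustration, along with all cases, confidence scores, and the threshold) that the machine can attach a confidence score to each case: routine, high-confidence cases are disposed of automatically, while ambiguous cases are deferred to a human expert, so that each party compensates for the other's limitations.

# Illustrative sketch: a 'joint cognitive system' in which the machine
# augments rather than replaces human expertise.
from typing import Callable, Tuple

def machine_classify(case: str) -> Tuple[str, float]:
    # Hypothetical classifier returning (label, confidence).
    lookup = {"routine invoice": ("approve", 0.97),
              "unusual claim": ("review", 0.55)}
    return lookup.get(case, ("unknown", 0.0))

def joint_decision(case: str, human: Callable[[str], str],
                   threshold: float = 0.9) -> str:
    label, confidence = machine_classify(case)
    if confidence >= threshold:
        # The machine handles the routine case, reducing human load.
        return f"machine: {label}"
    # The human handles the ambiguous case, compensating for machine brittleness.
    return f"human: {human(case)}"

print(joint_decision("routine invoice", human=lambda c: "escalate"))  # machine: approve
print(joint_decision("unusual claim", human=lambda c: "escalate"))    # human: escalate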
Despite the problems with an overloaded information perspective and its misplaced focus on the application of information technologies, knowledge formalised in linguistic structures nevertheless serves important functions, which may facilitate and support the dynamics of knowledge management in organisations. Firstly, such knowledge is publicly accessible, which means that different people can scrutinise it and check its validity and reliability. Secondly, the formal and logical character of language and codified systems of representation implies a universal applicability, in that people can come to learn something useful about a particular domain without necessarily having experience of the context to which it applies (Smolensky, 1988).
If knowledge is viewed from the broader, deeper, and richer perspective of the new cognitive science, then it is clear that much of what is of value to organizations consists in the vast reservoir of implicit knowledge that people embody as intellectual assets, including the explicit or declarative knowledge that they possess, which is also ultimately tacit in its origin. In knowing ‘more than we can tell’, propositional knowledge is ‘only the tip’ of the knowledge iceberg (Sveiby, 2000, p. 21), in which the much larger submerged portion is the implicit dimension of tacit and practical knowledge. Thus, the technology approach to managing knowledge merely scratches the superficial and visible surface of a much more complex and less discernible reality. Tapping into this reality requires approaches to organizational communication, problem solving, and learning that are active and engaging (see Berry and Dienes, 1993; Kuhne and Quigley, 1997; Robinson, 1995; 1996).
However, a word of caution is necessary here, for there is a corollary to the claim that we know 'more than we can tell': psychological research shows that because we do not have direct access to our (implicit) internal states, and are therefore for the most part unaware of them, we are also prone to confabulation, or 'telling more than we can know' (see Nisbett and Wilson, 1977, pp. 231-259; Wilson, Hull, and Johnson, 1981, pp. 53-71; Wason and Evans, 1975, pp. 141-154; Churchland, P.S., 1983, p. 83). Hence, claims to knowledge deriving from experience, and questions about the relevance of such knowledge in any given context, cannot simply be taken at face value, or be assumed to be known a priori, as the history of epistemology makes clear, for the evaluation of such (theory-laden) knowledge requires careful and coherent empirical scrutiny and assessment. Consequently, the evaluation of knowledge in organizational contexts is no less complex than the nature of the knowledge itself, and sound knowledge management practice therefore requires the development of organizational infrastructures and methods that are adequate to the tasks of appraising such knowledge and the conditions from which it arises, and in which it is manifested. Argyris and Schön (1997, pp. 28-29) have referred to this vital organizational capability and capacity as 'deutero-learning', or 'learning how to learn'.
The neurocomputational nature of knowledge representation, kinematics, and dynamics also suggests a number of strategies for managing processes of knowledge acquisition, sharing and transfer, and individual, social, and organizational learning and change. Firstly, it suggests that education and training processes aimed at bringing about individual and social change in complex context-dependent tasks would be facilitated by preserving the realism of the tasks that have to be learnt in any given case. This implies that learning should occur in the situations or contexts to which it applies, or where this is not possible, in contexts where such learning can be accurately simulated. Secondly, where complex knowledge or skills of a practical kind need to be acquired or developed, a pedagogy of situated experience and learning, or ‘learning-in-working’, is warranted (Brown and Duguid, 1996, pp. 58-82). Moreover, where a practical learning task has a relatively complex or non-obvious underlying structure, verbal methods of instruction, and observation of other more experienced people performing the target task, are less likely to generate the required sensory-motor performance and effectiveness in decision making than direct interaction with the task itself. Similarly, assessing such implicit skills requires providing people with the opportunity to demonstrate what they know (see Berry and Dienes, 1993, pp. 129-133). In applying these general principles of learning, new knowledge is created by dynamic cognitive and behavioural processes of search, inquiry, trial and error, experimentation, novel association, and intelligent adaptation in finding and generating adequate solutions or resolutions to the demands of particular problematic contexts (a simple computational sketch of such trial-and-error learning is given below).
Lastly, the continuity of mind with the body, or the embodied mind, with its complex structure of material interests and needs, contributes directly to the development of values, which in turn partly determine the nature of perceived problems, and the acceptability of solutions to those problems. Direct involvement therefore becomes an important methodological principle for those most affected by the problems for which solutions are sought. Furthermore, the socially distributed nature of much practical knowledge, and the overlapping nature of many problem domains, suggest that situated learning from experience, or action learning, be structured to occur collaboratively. The method of social organization and practical action employed in any given case is determined by what is learned or discovered about the set of empirical features and constraints that particular problems or problem clusters present, and the amelioration or solution of those problems is a function of the particular knowledge that emerges, or is sourced and utilized, in attempts to satisfy the constraints in question (see Robinson, 2002). Hence, methods of organization and coordination emerge and dissipate flexibly according to the empirical requirements of particular problem contexts, and their associated problem solving processes. Consequently, the nature of practical knowledge implies that those most affected or implicated in the need for change should be most directly involved and engaged in learning how to make the changes that are needed.
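The trial-and-error character of this kind of learning admits a simple computational analogy (again purely illustrative, with invented methods and payoff values): an agent that repeatedly tries candidate solutions, observes feedback from its task environment, and incrementally revises its internal estimates comes to favour what works without ever being told a rule.

# Illustrative sketch: learning by situated trial and error. Knowledge of
# which method works is generated through interaction with the task
# environment, not transmitted as verbal instruction.
import random

true_payoffs = {"method_a": 0.3, "method_b": 0.7}   # hidden from the learner
estimates = {m: 0.0 for m in true_payoffs}
counts = {m: 0 for m in true_payoffs}

for trial in range(500):
    # Explore occasionally (experimentation); otherwise exploit what is known.
    if random.random() < 0.1:
        method = random.choice(list(true_payoffs))
    else:
        method = max(estimates, key=estimates.get)
    # Feedback from the task environment: success or failure.
    reward = 1 if random.random() < true_payoffs[method] else 0
    counts[method] += 1
    # Incremental update: experience revises the internal estimate.
    estimates[method] += (reward - estimates[method]) / counts[method]

print(estimates)  # the estimates approach the environment's actual payoffs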
Conclusion
Examination of developments in epistemology, and in related disciplines and branches of inquiry, uncovers a conception of knowledge that in its scientific details and scope extends far beyond anything found in the contemporary literature on knowledge management. The overview of developments in the theory of knowledge provided here reveals a rich and substantial vein of hitherto untapped methodological resources from which theorists and practitioners alike may draw in explicating, extending, developing, and justifying knowledge management concepts and practices of value and significance to modern organizations, economies, and societies. The relative paucity of discussion, and the immaturity of existing debate, in matters of epistemology underscore a disconnection, or fracture, between current theorizing and research in the field and ongoing intellectual and scientific developments in the area. This characteristic of the discipline stands out as a curious and disconcerting anomaly that does little to promote the growth of knowledge about knowledge, which is the very focus of knowledge management research and application. Thus, the argument presented here is remedial, for it aims to show that further advances in the field, of a genuinely scientific kind, are likely to hinge on the utilization of methodological resources that stem from the parent discipline itself, namely the evolving sphere of epistemology.
References
Agre, P.E. (1993) The Symbolic Worldview: Reply To Vera and Simon, Cognitive Science Vol. 17; pp. 61-69.
Allix, N.M. (2000) The Theory Of Multiple Intelligences: A Case Of Missing Cognitive Matter, Australian Journal of Education Vol. 44, No. 3; pp. 272-293.
Argyris, C., Schön, D.A. (1997) What Is An Organization That It May Learn?, in Organizational Learning II: Theory, Method, and Practice, Addison-Wesley Publishing Company, Reading, Massachusetts
Ayer, A.J. (1975) Language, Truth and Logic, Penguin Books, London
Bereiter, C. (1991) Implications of Connectionism For Thinking About Rules, Educational Researcher Vol. 20, No. 3; pp. 10-16
Bereiter, C. (2000) Keeping The Brain In Mind, Australian Journal of Education, Vol. 44, No. 3; pp. 226-238
Berry, D.C., Dienes, Z. (1993) Implicit Learning: Theoretical And Empirical Issues, Lawrence Erlbaum Associates, Hove, UK.
BonJour, L. (1985) The Structure Of Empirical Knowledge, Harvard University Press, Cambridge, Massachusetts
Brown, J.S., Duguid, P. (1996) Organizational Learning And Communities-of-Practice: Toward a Unified View Of Working, Learning, And Innovation, Organizational Learning, Sage Publications, Thousand Oaks
Brown, J.S., Duguid, P. (2000) Organizing Knowledge, in Smith, D.E. (Ed) Knowledge, Groupware, And The Internet, Butterworth-Heinemann, Boston
Brown, J.S., Duguid, P. (2001) Structure And Spontaneity: Knowledge and Organization, in Nonaka, I., Teece, D.J. (Eds.). Managing Industrial Knowledge: Creation, Transfer and Utilization, Sage Publications, London
Brown, J.S., Collins, A., Duguid, P. (1989) Situated Cognition And The Culture of Learning. Educational Researcher, Vol. 18, No. 1; pp. 32-42
Churchland, P.M. (1988) Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind, A Bradford Book, The MIT Press, Cambridge, Massachusetts
Churchland, P.M. (1993) A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, A Bradford Book, The MIT Press, Cambridge, Massachusetts
Churchland, P.M. (1995) The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain, A Bradford Book, The MIT Press, Cambridge, Massachusetts.
Churchland, P.M. (1998) Précis of The Engine of Reason, The Seat of the Soul: A Philosophical Journey into the Brain, Philosophy and Phenomenological Research, Vol. 58, No. 4; pp. 859-863
Churchland, P.M. (1998a) Conceptual Similarity Across Sensory And Neural Diversity: The Fodor/Lepore Challenge Answered, The Journal of Philosophy, Vol. 95, No. 1; pp. 5-32
Churchland, P.M. (1998b) Activation Vectors vs. Propositional Attitudes: How the Brain Represents Reality, On The Contrary: Critical Essays, 1987-1997 Paul M. Churchland and Patricia S. Churchland, A Bradford Book, The MIT Press, Cambridge, Massachusetts
Churchland, P.M. (1998c) A Deeper Unity: Some Feyerabendian Themes in Neurocomputational Form, On the Contrary: Critical Essays, 1987-1997 Paul M. Churchland and Patricia S. Churchland, A Bradford Book, The MIT Press, Cambridge, Massachusetts
Churchland, P.M., Churchland, P.S. (1996) The Churchlands And Their Critics, in McCauley, R.N. (Ed.), Basil Blackwell, Oxford and Cambridge, Massachusetts
Churchland, P.S. (1983) Consciousness: The Transmutation Of A Concept, Pacific Philosophical Quarterly, Vol. 64; pp. 80-95
Churchland, P.S. (1987) Epistemology In The Age Of Neuroscience, The Journal of Philosophy, Vol. 84, No. 10; pp. 544-553
Churchland, P.S. (1992) Neurophilosophy: Toward A Unified Science Of The Mind/Brain, A Bradford Book, The MIT Press, Cambridge, Massachusetts
Churchland, P.S. (1998) Feeling Reasons, On the Contrary: Critical Essays, 1987-1997 Paul M. Churchland and Patricia S. Churchland, A Bradford Book, The MIT Press, Cambridge, Massachusetts
Churchland, P.S., Churchland, P.M. (1983) Stalking The Wild Epistemic Engine, Noûs, Vol. 17; pp. 5-18
Churchland, P.S., Sejnowski, T.J. (1990) Neural Representation And Neural Computation, in Lycan W.G. (Ed) Mind And Cognition: A Reader, Basil Blackwell, Cambridge, Massachusetts
Clancey, W.J. (1993) Situated Action: A Neuropsychological Interpretation (Response to Vera and Simon), Cognitive Science, Vol. 17; pp. 87-116
Coveney, P., Highfield, R. (1996) Frontiers of Complexity: The Search for Order in a Chaotic World, Faber and Faber, London
Crick, F. (1994) The Astonishing Hypothesis: The Scientific Search for the Soul, Touchstone Books, London
Damasio, A.R. (1996) Descartes’ Error: Emotion, Reason and the Human Brain, Papermac, London
Damasio, A.R., Damasio, H. (1992) Brain and Language. Scientific American. Special Issue: Mind and Brain, Vol. 267, No. 3; pp. 62-71
Davenport, T.H., Prusak, L. (1998) Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston
Descartes, R. (1968) Discourse on Method and the Meditations, Translated by F.E. Sutcliffe, Penguin Books, London
Duhem, P. (1953) Physical Theory and Experiment, Readings in the Philosophy of Science, in Feigl, H., Brodbeck, M. (Eds.), Appleton-Century-Crofts, New York
Edelman, G.M. (1989) The Remembered Present: A Biological Theory of Consciousness, Basic Books, New York
Evers, C.W. (1988) Educational Administration and the New Philosophy of Science, The Journal of Educational Administration, Vol. 26, No. 1; pp. 3-22
Evers, C.W. (1989) Theory Competition and Intercultural Articulation: Methodological Reflections on Louts and Legends: Essay Review, Educational Philosophy and Theory, Vol. 21, No. 1; pp. 78-82
Evers, C.W. (1991) Towards a Coherentist Theory of Validity, in Beyond Paradigms: Coherentism and Holism in Educational Research, International Journal of Educational Research, Vol. 15; pp. 521-535
Evers, C.W., Lakomski, G. (1991) Knowing Educational Administration: Contemporary Methodological Controversies in Educational Administration Research, Pergamon Press, Oxford.
Evers, C.W., Lakomski, G. (1993) Exploring Educational Administration: Coherentist Applications and Critical Debates, Pergamon Press, New York
Evers, C.W., Lakomski, G. (2000) Doing Educational Administration: A Theory of Administrative Practice, Pergamon Press, New York
Giere, R.N. (1979) Understanding Scientific Reasoning, Holt, Rinehart and Winston, New York
Goldberg, E. (2001) The Executive Brain: Frontal Lobes and the Civilized Mind, Oxford University Press, Oxford
Greeno, J.G., Moore, J.L. (1993) Situativity and Symbols: Response to Vera and Simon, Cognitive Science, Vol. 17; pp. 49-59
Hacking, I. (1975) Why Does Language Matter to Philosophy? Cambridge University Press, London
Hempel, C.G. (1965) Aspects of Scientific Explanation: And Other Essays in the Philosophy of Science, The Free Press, New York
Hooker, C.A. (1975) Philosophy and Meta-Philosophy of Science: Empiricism, Popperianism and Realism, Synthese, Vol. 32; pp. 177-231
Hutchins, E. (1995) How a Cockpit Remembers Its Speeds, Cognitive Science, Vol. 19; pp. 265-288
Kuhne, G.W., Quigley, B.A. (1997) Understanding and Using Action Research in Practice Settings, Creating Practical Knowledge Through Action Research: Posing Problems, and Improving Daily Practice. New Directions for Adult and Continuing Education No. 73, Spring 1997, Quigley, B.A., Kuhne, G.W. (Eds.), Jossey-Bass, San Francisco
LeDoux, J.E. (1994) Emotion, Memory and the Brain, Scientific American, Vol. 270, No. 6; pp. 32-39
Lueg, C. (2001) Information, Knowledge, and Networked Minds, Journal of Knowledge Management, Vol. 5, No. 2; pp. 151-159.
Lycan, W.G. (1988) Epistemic Value, Judgement and Justification, Cambridge University Press, New York
Malhotra, Y. (1997) Knowledge Management in Inquiring Organizations, Proceedings of 3rd Americas Conference on Information Systems (Philosophy of Information Systems Mini-track), Indianapolis, IN, August 15-17; pp. 293-295
McClelland, J.L., McNaughton, B.L., O’Reilly, R.C. (1995) Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory, Psychological Review, Vol. 102, No. 3; pp. 419-457
McDermott, R. (1999) Why Information Technology Inspired But Cannot Deliver Knowledge Management, California Management Review, Vol. 41, No. 4
McElroy, M.W. (1999) Second-Generation KM: A White Paper, IBM Knowledge Management Consulting Group.
Nisbett, R.E., Wilson, T.D. (1977) Telling More Than We Can Know: Verbal Reports on Mental Processes, Psychological Review, Vol. 84, No. 3; pp. 231-259
Nonaka, I. (1994) A Dynamic Theory of Organizational Knowledge Creation, Organization Science, Vol. 5, No. 1; pp. 14-37
Nonaka, I., Takeuchi, H. (1995) The Knowledge Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, Oxford
Nonaka, I., Toyama, R., Konno, N. (2001) SECI, Ba and Leadership: A Unified Model of Dynamic Knowledge Creation, Managing Industrial Knowledge: Creation, Transfer and Utilization, Nonaka, I., Teece, D.J. (Eds.), Sage Publications, London
Popper, K.R. (1995) Conjectures and Refutations: The Growth of Scientific Knowledge, Routledge, London
Quine, W.V.O. (1951) Two Dogmas of Empiricism, The Philosophical Review, Vol. 60; pp. 20-43
Quine, W.V.O. (1969) Epistemology Naturalized, Ontological Relativity and Other Essays, Columbia University Press, New York
Quine, W.V.O., Ullian, J.S. (1970) The Web of Belief, Random House, New York
Robinson, V.M.J. (1995) Organisational Learning as Organisational Problem-Solving, Leading and Managing, Vol. 1, No. 1; pp. 63-78
Robinson, V.M.J. (1996) Problem-Based Methodology and Administrative Practice, Educational Administration Quarterly, Vol. 32, No. 3; pp. 427-451
Robinson, V.M.J. (2002) Organizational Learning, Organizational Problem Solving and Models of Mind, Second International Handbook of Educational Leadership and Administration, Leithwood, K., Hallinger, P. (eds.); pp. 775-812
Rumelhart, D.E. (1989) The Architecture of Mind: A Connectionist Approach, Foundations of Cognitive Science, Posner, M.I. (Ed.), MIT Press, Cambridge
Russell, B. (1972) Logic as the Essence of Philosophy, Readings on Logic. Second Edition, Copi, I.M., Gould, J.A. (Eds.), The Macmillan Company, New York
Smolensky, P. (1988) On the Proper Treatment of Connectionism, Behavioral and Brain Sciences, Vol. 11; pp. 1-74
Spiegler, I. (2000) Knowledge Management: A New Idea or a Recycled Concept? Communications of the Association for Information Systems, Vol. 3, Article 14
Suchman, L. (1993) Response to Vera and Simon’s Situated Action: A Symbolic Interpretation, Cognitive Science, Vol. 17; pp. 71-75
Sveiby, K.E. (1999) The Tacit and Explicit Nature of Knowledge, The Knowledge Management Yearbook 1999-2000, Butterworth-Heinemann, Boston
Takeuchi, H. (2001) Towards a Universal Management Concept of Knowledge, Managing Industrial Knowledge: Creation, Transfer and Utilization, Nonaka, I., Teece, D.J. (Eds.), Sage Publications, London
Tiwana, A. (2000) The Knowledge Management Toolkit: Practical Techniques for Building a Knowledge Management System, Prentice Hall PTR, New Jersey
Vera, A.H., Simon, H.A. (1993) Situated Action: A Symbolic Interpretation, Cognitive Science, Vol. 17; pp. 7-48
Vera, A.H., Simon, H.A. (1993a) Situated Action: Reply to Reviewers, Cognitive Science, Vol. 17; pp. 77-86
Vera, A.H., Simon, H.A. (1993b) Situated Action: Reply to William Clancey, Cognitive Science, Vol. 17; pp. 117-133
Wason, P.C., Evans, J.St.B.T. (1975) Dual Processes in Reasoning?, Cognition, Vol. 3, No. 2; pp. 141-154
Wenger, E. (1998) Communities of Practice: Learning, Meaning, and Identity, Cambridge University Press, Cambridge
Wick, C. (2000) Knowledge Management and Leadership Opportunities for Technical Communicators, Technical Communication, Fourth Quarter
Williams, M. (1977) Groundless Belief: An Essay on the Possibility of Epistemology, Basil Blackwell, Oxford
Williams, M. (1980) Coherence, Justification, and Truth, The Review of Metaphysics, Vol. 34; pp. 243-272
Wilson, T.D., Hull, J.G., Johnson, J. (1981) Awareness and Self-Perception: Verbal Reports on Internal States, Journal of Personality and Social Psychology, Vol. 40, No. 1; pp. 53-71
About the Author:
Nicholas Michael Allix, Lecturer, Faculty of Education, Building 6, Monash University, Victoria, 3800, Australia Tel: +61 3 9905 9198; Fax: +61 3 9905 2779; Email: Nicholas.Allix@education.monash.edu.au
Dr. Allix’s background experience includes: corporate human resource management and development; educational administration in private sector vocational education; knowledge management in public and human services sectors; and academic positions as former Fellow and Research Associate in the Centre for Organizational Learning and Leadership at the University of Melbourne, Australia; and current Lecturer in leadership, policy, and change at Monash University, Melbourne, Australia.
Professional and research interests include: cognition, leadership, the creation and utilization of knowledge in policy contexts (knowledge management), and the coordination of learning, intelligence, innovation and change in administrative contexts (learning organization).