ISSN 1393-614X Minerva - An Internet Journal of Philosophy Vol. 7 2003.
Raising an ontological question regarding the meaning of 'a being' and also the meaning of an 'intelligent being', Heidegger identifies intentionality with the skilful coping of a social, norm-bound, engaged and context-dependent embodied being. This he describes in terms of Dasein, a being-in-the-world, and its tool-using activity with respect to social practices and norms. Unlike Husserl's, Heidegger's intentionality is primarily semantic: the necessary conditions of skilful coping are also the necessary conditions of intentional acts. The entire question of computers attaining a Dasein-like character depends largely on whether these purposeful causal laws can also be formalized. While Dreyfus rules out this possibility, Mark Okrent successfully argues that there is nothing in Heidegger that rules out the possibility of computers attaining a Dasein-like character. While in full agreement with Mark Okrent, I have made an attempt at understanding the entire debate with more emphasis on the implications of a Dasein attaining a computer-like character.
What I intend to do in this article is to make a comparative study of Hubert Dreyfus's and Mark Okrent's philosophical observations as presented in two respective works: What Computers Can't Do and 'Why The Mind Isn't a Program (But Some Digital Computer Might Have a Mind)'. In light of this, I have accepted Okrent's (1996, online) observation that the claim made by Dreyfus, that in the light of Heidegger's interpretation of 'intentionality' it is at least highly unlikely for digital computers ever to satisfy Heideggerian constraints, and thus count as thinking, is not strong enough. I have made an attempt to justify my observation, contrary to what Dreyfus has said on some occasions (1997, p.108), that it is more difficult to accommodate machine intelligence in the Husserlian framework. Okrent has sufficiently justified that Heidegger's programme, more than Husserl's, could accommodate machine intelligence into its fold. For the latter part, I concentrate more on Husserl's 1893 phase of philosophical development.
Mark Okrent, in his article 'Why The Mind Isn't a Program', has referred to some of the Heideggerian constraints which Dreyfus considers prime requirements for any thinking entity to meet. These are: (i) the thinking entity must be 'in-the-world', which makes it a contextualized and an embodied entity, and (ii) the thinking entity must have a minimum understanding of how to cope successfully in its world environment. The skill and the practical coping ability, the 'know-how', rather than theoretical, detached and disembodied reflection, are learnt by adopting successful social practices. This suggests that Heidegger puts forward a set of constraints for the computational model of thought, which is acknowledged in the field of cognitive science. For example, Bechtel and Abrahamsen in Connectionism and the Mind: "Thus it provides hope of situating cognitive processing in the world, and so begins to elucidate what Heidegger may have had in mind when he emphasized that our cognitive system exists enmeshed in the world in which we do things, where we have skills and social practices that facilitate our interaction with objects" (Bechtel and Abrahamsen 1991, p.126). Any thought model, to be similar to our conscious model of thought, must incorporate those essential requirements. Dreyfus seriously believes that a digital computer, which manipulates formal symbols in accordance with a set of rules organized in a program, could never satisfy the Heideggerian constraints.
Interestingly, Mark Okrent, while accepting the first part of Dreyfus's interpretation of a set of conditions provided by Heidegger on what it is to think, differs from Dreyfus's further claim that these conditions could not be satisfied by program-driven digital computers. Okrent strongly believes that there is nothing in Heidegger which counts against the possibility of ascribing thoughts to computers. To reinterpret Heidegger in the light of his new findings, Okrent focuses on two basic questions: (1) what is it to be a thinker? and (2) which actual entities might count as thinkers? The first issue concerns the 'question of being', and the second concerns matters of fact. What qualifies an entity as a thinking entity is for Heidegger an issue of being, an ontological question that is necessarily tied to the question of the meaning of being. But how ought we to understand what it is to think? If we consider rationality to be the necessary and sufficient condition for an entity to count as intelligent or as thinking, what motivates this is the recognition that some machines, manipulating formal symbols in accordance with a set of pre-defined rules, have proved successful in behaving in a rational manner. In this sense, such a machine qualifies as a thinking entity. On the other hand, if what it is to think cannot be captured in any program, is it possible for a rational entity so defined independently to satisfy those other requirements laid down by Heidegger that a thinking entity must meet?
Mark Okrent's rereading of Heidegger's work leads him to a different conclusion. Okrent believes Heidegger does leave open the possibility that some computers could actually think, in spite of the fact that the necessary requirements laid down by him for any intelligent being, 'being-in-the-world', 'skills', and social practices, are not a set of behaviours which can be defined in terms of formal symbols. Heidegger follows Husserl and Brentano in defining consciousness in terms of intentionality. Here, unlike Husserl's identification of intentionality with the nature of the act, Heidegger concentrates more on the question of being. Heidegger accepts the basic requirement of intentionality as laid down by Husserl and Brentano, its object-directedness, but with more emphasis on the question of 'being'. Ultimately the question of being is 'what is the most general way to understand and how it is possible to intend something?' (Okrent 1996, online). So what a 'thinking being' means is to be determined by investigating that which characterizes that type of entity. Heidegger has made a shift from an articulation of the question of being to an articulation of the character of intentionality. The crux of the question is: what conditions must be satisfied when some event is correctly described in intentional terms? The meaning of an intentional entity is related to the meaning of that general entity, that being, in Heidegger's terminology, Dasein, which has that intentional state and also some other states.
The essential character of Dasein is its 'being-in-the-world': acting purposefully in a goal-directed way, and using tools as tools in pursuing its goals. In general Dasein does what it is socially appropriate for it to do, using tools as tools the way they should be used, in a norm-governed way defined by usual habitual practices. Dasein's 'being-in-the-world' involves a care structure: it is ahead of itself, being alongside other entities, etc., which are necessary for having intentional states, and these are expressed in several ways: cultivating and caring for something, holding something, letting something go, etc., the common denominator of which is 'concern'. All of these are instances of the overt behavior of embodied persons, described in intentional terms. Instead of making intentional states central and deriving overt acts from them, Heidegger takes the intentionality of overt acts as primary and that of inner states as secondary. So the necessary conditions of a goal-directed intentional overt act are practice, appropriately chosen, and learning the proper manner of tool use; and since engaging in a practical activity is a necessary condition for having intentionality, a person engages in a practical activity if he intends tools as tools by using them. For example, if it is correct to describe a certain event as hammering, as directed towards the goal of a nail being made fast, it is correct to say that the agent is treating the object being used as a hammer, and the other entity as a nail by using it as a nail; so the primary way to intend a hammer as a hammer is by hammering, by intending tools as tools. Now the question is: what are the necessary conditions for intending tools as tools? Rather than being used in an arbitrary manner, tools should be used the way society prescribes, society having ascribed to them a function which amounts to a rule for their use.
Accommodating a wider scope for the intentionality criterion in terms of Dasein and its tool-using capacity, and identifying intentionality with that semantic meaning, Heidegger has waived the consciousness requirement; thereby there is no way in which the mind is a program in the sense that a program defines what it is to have a mind. This is not because programs are syntactic and a mind has semantics. All that Heidegger has to say here is that what it is to think cannot be defined in terms of programs; however, it does not follow that entities which act in a programmed way could not count as thinking entities. At this juncture, Okrent pursues the implications of a dialogue which Dreyfus initiated with the ghost of Alan Turing: the fact that Turing was willing to admit that "it is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances" (Turing 1950, p.441). From this premise it follows that no set of formal rules could ever specify what it is to have a mind in the Heideggerian sense, that is, acting in accordance with some set of rules describing what should be done in every conceivable circumstance. So no set of formal rules could specify what it is to have a mind. Now the question is: could a machine be doing what it should, where what it should do is determined by social practices? Could such a machine have a Dasein-like character?
From what Turing is willing to accept as a premise in the above argument, it does not follow that what a person should do can never be expressed in any set of rules. Dreyfus responds that it does not follow from the fact that the behavior of human agents follows causal laws that these laws could be embodied in a computer program. But it also surely does not follow that these causal laws could not be embodied in computer programs; in that case a machine could surely be expected to do everything that a person does in a similar situation, physically described. The machine would then attain a Dasein-like character; it would be an entity which has intentional states. Given Heidegger's views, this cannot be the case. On the one hand, his Husserlian background still keeps room for some form of consciousness as a requirement for intentionality, and machines following formal rules cannot possibly be conscious. But this possibility cannot be explored further, since for Heidegger consciousness is not necessary for intentionality; rather, in having intentions one must do what one should do, and since what one should do cannot be formalized, one cannot argue from "A and B do the same things, physically described" to "A and B do the same thing, intentionally described", since there could be no lawful relationship between these two; so from the fact that A and B do the same thing physically described, one cannot infer "A and B are both Daseins" (Okrent 1996, online).
For Heidegger, intentionality requires a relation between an entity and social practices; if one alters that social context, one alters the intentionality, even if the physical description of the act remains the same. This logical independence of the two types of description does not rule out the possibility of the behavior of some entity satisfying both descriptions. All one has to do is actually build such an entity which could satisfy both descriptions, coming up with a set of rules which adequately describe our own behavior, behavior which under another description is also skilful coping with the environment by acting in accordance with social practices. Even if what qualifies a thinker as a thinker is not acting in accordance with rules for manipulating symbols, it does not follow that some computers, or symbol manipulators, could not also be thinkers. What it is to be a 'thinking being' is an ontological question, and the answer to it is incompatible with the hypothesis that the mind is a computer. But whether some computers can also have a mind is a logically separate question: even if the qualification for thinking is not that an entity behaves as some computers would, this does not rule out the possibility that computers could also behave the way a thinking entity should.
My second contention regards the non-computational model of intentionality provided by Husserl. In his Philosophy of Arithmetic (1891), Husserl deals with the concept of number and with the psychological origin of the concept. He first describes the 'genuine presentations' of the concepts of unity, multitude, number, etc., and in the second part he defines the so-called symbolic presentations. The genuine presentation is the same as the intuitive or insightful presentation in which the intended object is itself given. But this intuitive insight is restricted to a very small group of entities; for a large set of hundreds or thousands of objects, the second kind of presentation, by signs and symbols, is the only device. But the question is: how are these symbolic devices to be elucidated? This is the basic question of his Philosophy of Arithmetic. While there was no question that the arithmetical devices result (objectively) in truth, what he wanted to explore was the justification part: whether these devices are also (subjectively) justified, that is, whether they can be performed with evidence, rather than merely blindly or by rote. It cannot be denied that the devices applied in the art of calculation do indeed result in truth. The Philosophy of Arithmetic aims at justifying them, which should be the aim of the 'true philosophy of the calculus', the desideratum of centuries. According to Husserl, calculation, and all other higher forms of mental life in general (cf. especially 1890, p.349f), are based on "mental mechanisms" (Münch 1996, pp.199-210): "Arithmetical devices are neither typically applied with evidence, nor have they been invented on the basis of arithmetical insights: our mind, in using them, becomes rather like the working of a machine, which uses 'blind mechanical' or 'logical-mechanical' devices (1890, p.364); i.e., the unelucidated processes that we normally use are the result of a kind of 'natural selection'."
It is, he says, 'in the struggle for existence that the truth was won' (1890, p.371). The description of these natural devices is the first task of Husserl's programme for elucidating them. However, this description of the natural mechanical processes is, according to Husserl, only the first stage in the full elucidation of symbolic presentations. In a second, constitutive stage, the stage of justification, a logical device parallel to the psychological mechanism (a 'parallellaufendes logisches Verfahren', 1890, p.359ff.) is to be developed. In this stage abstract algorithmic equivalents will be developed, as well as rules for testing and inventing them (Münch 1996, p.199).
Thus the theories advocated by Husserl in his pre-1890 writings on the mind came close to the computational model of thought. However, the initial difficulty of providing a special theory of symbolic knowledge, with insight and illumination as necessary criteria, was overcome by his introduction of 'intentionality' over and above intuitions. In 1893 Husserl introduced a new concept, that of intention or representation, which is contrasted with the concept of intuition. Intuitions are those phenomena that are given directly or without mediation, while in the case of intention our interest is directed towards an object which is not intuited at the same time. Intentions or representations always intend a fulfilment, which is supplied by intuitions. Until this concept was introduced, symbols served as surrogates: the signs are not numbers, but they represented numbers because the digits belonged to a sign-system that mirrors the structure of numbers. It is therefore by virtue of the structure, the syntax, that surrogates mean something; otherwise these signs are blind. In representations as intentions, Husserl discovered a psychological bond between the sign and the intuition in which the represented object is given. The sign is no longer a blind object but an 'Anhalt', a hold for a meaningful act. He defines representation in his 'Psychological Studies' (1894) as 'merely intending', and the dichotomy of the Philosophy of Arithmetic is broken. On the strength of this newly found concept of intentionality, one can argue, as Searle does, that consciousness is necessary for semantic content and that syntax is insufficient for consciousness. As Dieter Münch (1996) sums up, '…Thus Husserl anticipates not only a programme of cognitive simulation, but also its criticism'.
Let me recapitulate the basic requirements laid down by Husserl and by Heidegger which must be fulfilled by any entity to be properly regarded as thinking. Husserl's dissatisfaction with the mode of calculative rationality and the use of inauthentic concepts expressed a genuine urge on his part to safeguard the human dimension of thought. He saw danger in machine intelligence replacing human intelligence when thought was identified with efficiency, with speed, and with an 'economy of thought which does not require much human effort.' With its inherent blindness, such calculation allows a powerful thinking without insight and illumination, without any need for self-justification. This becomes evident when expert computer systems behave in an absurd manner after being provided with insufficient data. For Heidegger, this 'blindness' belongs to the essence of science. According to Husserl, for science to be true science, it must escape from the pressing demands of everyday praxis; for Heidegger, the entanglement of theory and praxis shows that for finite, human Dasein, there is no such thing as pure theory. "Whereas Husserl interprets the narrowing of science to mere calculation as a loss of the original ideal of science, Heidegger holds that the dominance of calculative thought reveals the very essence of science. Where Husserl speaks of correcting all these short-comings of science by a willful assuming of another attitude, Heidegger alludes to a completely other form of thought, which can neither be willed, sought after, nor mastered" (Buckley 1992). In the Heideggerian framework, Dasein is called to respond to Being, and Being's giving of itself in our time is through revelation in the form of technology. In a very specific sense, then, Dasein can be said to be responsible for technology. "Rather technology is the Gestell, which itself enframes everything, including Dasein.
For this reason, Heidegger is able to claim that the essence of technology is 'nothing human.' This framework is not possible without humanity, but it remains something beyond complete human control" (Buckley 1992).
In his response to Being, Dasein is responsible for that which to some extent is beyond Dasein. Being responsible in this manner implies a fundamental vulnerability on Dasein's part. This is quite contrary to the traditional sense of responsibility, according to which we are responsible only for that which we can predict and control, for that which comes from us, not for that which comes from afar, irrespective of the fact that there are many unforeseen consequences of our actions which are beyond our control. We are responsible for those people or things which are given to us, not only for those whom we can control or predict. "To be sure, such thinking is often closely allied with talk about responsibility, but this is a pseudo-responsibility, a responsibility for that which I can control, but not for that which might place me under its spell. Far more suitable responses might be gratitude, or remorse – resuming in the request for forgiveness. Without doubt, the great difficulty which many philosophers have with Heidegger's own involvement with National Socialism is not just the involvement itself, but Heidegger's quasi-calculative defence that nobody could foresee the course that National Socialism was going to take. Perhaps he genuinely could not. But this in no way lessens responsibility and the obligation of a correct response – which in this case could only be humility and remorse. Such a response was never forthcoming" (Buckley 1992). Dasein's sense of responsibility is to respond to what comes from afar and to assume care for that which it cannot master. The centrality of self-responsibility for Husserl, and the centrality of self-awareness and consciousness in any intentional act, make human intelligence different from machine intelligence.
A responsible man, an entity which has a mind, is not just capable of calculation and prediction, but engages in an insightful dialogue with himself, seeking justification, with the willingness to question himself and to seek evidence for that which he believes.
There was need for this critical quest even in Heidegger, as his Being and Time, published in 1927, still carried the legacy of the Husserlian and also the Hegelian emphasis on the primacy of thinking. With his central focus on 'how to think', Heidegger made experiments with thought and with truth, though for him it was an exploration of another kind of thinking: 'To think is to confine yourself to a single thought that one day stands still like a star in the world's sky' (quoted in McCann, 1979).
What is distinctive in us as thinkers is our ability to ask questions in our search for meaning and the authenticity of life and existence. For Heidegger, the most vital question is the question of Being: what it means for something to be. Our encounter with the question of Being discloses the practical nature of thought in its intimate relation with historicity and existence. Heidegger sought to unveil the nature of thinking of the earth-bound man who is ruled not by the image of the sun, not by the light of reason per se, but by the logic of life, depending on idiosyncratic circumstances of the moment for insight and practical wisdom. Dasein is neither a subject against an object, nor a man differing from a machine or a computer, but an understanding of how to use tools as tools. Its membership of its kind is more crucial than its personal or non-personal dimensions. As Dasein is a kind of equipmental understanding that appropriately deals with equipment, using tools as tools, anything could attain a Dasein-like status provided it has a coping ability with mechanical commitment to rules which are already codified either in the wisdom of a tradition or in a rule book. For Heidegger the Being question is related to our ability to make sense of things, an ability to interpret. Being is not a phenomenon that could be grasped in intuitive insight. We conduct our activities with a vague understanding of Being which lies in the horizon of our ability to make sense of making sense. Husserl saw the inherent blindness of machine intelligence, which he sought to overcome with due recognition of the logical and the rational dimensions of thinking within his transcendental phenomenology. For him entities in their mode of being could be rendered present as phenomena through categorial intuition and in rational insight. This light of reason is not a distinctive mark of machine intelligence or of mindless everyday coping skills in the background of a socialized context of everyday practices.
For Husserl there is a threat to reason, to responsibility and to morality if the transcendental dimension of phenomenology is completely replaced by the ontological.
My initial question was: beginning with this outline of Heidegger's Dasein, could one ascribe a Dasein-like character to a cyber being? For Dreyfus, computers would never be able to act intentionally since they act only in a programmed way, and it would be impossible for a computer or a cyber being to cope successfully with its environment unless all the variables of a context are programmed. Contrary to what Dreyfus says, with the advances made in technology and in the field of AI, a robot of the most sophisticated construction could be programmed to display better coping abilities than humans. If that is what characterizes human intentionality, then these robots are better qualified to be Heideggerian Daseins in terms of coping skills, better than any human grandmaster could ever display. "Kasparov ultimately seems to have allowed himself to be spooked by the computer, even after he demonstrated an ability to defeat it on occasion. He might very well have won it if he were playing a human player with exactly the same skill as Deep Blue (at least as the computer exists this year). Instead, Kasparov detected a sinister stone face where in fact there was absolutely nothing. While the contest was not intended as a Turing Test, it ended up as one, and Kasparov was fooled" (Lanier, online). That a machine could attain a Dasein-like character is now no longer an issue for me. What I am interested in finding is what it is that is distinctively human, that which made Kasparov look for a human face that could not come from his opponent, an opponent displaying all coping abilities and all the known techniques of a skilled player. Why could that stone face of his opponent make him so unsure of himself? For me, this is related to a more vital question regarding another dimension of the meaning of Being. My question now is: what makes our coping abilities distinctively human?
We as humans have the capacity to learn from our mistakes; we can commit wrongs and can repent for those wrongs; we may continuously ask the Being question in order to redefine our stand and to remake ourselves, qualities which humans alone are required to possess in any match for equality between humans and machines. "This is what makes our being distinct, this specific style of coping. It is a style consisting of unknowns and knowns, of past and future, of stumbling not gliding. It is our combination of Heidegger's big words, disclosed and undisclosed, that characterize our way of coping, our being. Confusing gibberish it is, but in many ways it is confusing gibberish that dominates what we are and therefore is a major part of what it means to be a Dasein" (Frey 1999). For us the real defeat comes not from a machine acting smarter than us; it is a defeat that comes when we surrender our distinctively human style of coping, imbibing a style that is alien to us. Heidegger's late philosophy was a move toward a mystical dimension: more toward mechanical submission to the moods of the commune than toward reflection and research, more with an urge to be seized than to seize, more for a kind of conversion than for rational persuasion. His ideal Dasein became more machine-like, with blindness to those emotions which are necessary for us to cope with a style that is distinctively our own. The threat comes if Dasein attains a machine-like character, wearing the mask of a sinister stone face, a threat to its own being and to its authentic mode of being in the world.
Bechtel, W. and A. Abrahamsen (1991). Connectionism and the Mind. Oxford: Basil Blackwell.
Buckley P. (1992). Husserl, Heidegger and the Crisis of Philosophical Responsibility. Dordrecht: Kluwer.
Dreyfus, H. L. (1997). Being-in-the-World. Cambridge, MA: The MIT Press.
Frey, C. H. (1999). 'Cyber-being and Time'. Website: http://www.spark-online.com/december99/discourse/frey.htm [Accessed 4 August 2003].
Lanier, J. 'Why the Deep Blues'. Website: http://www.anti-feminism.com/chess.htm [Accessed 2 August 2003].
McCann C. (Ed.) (1979). Martin Heidegger: Critical Assessment, Volume 1. Routledge.
Münch, D. (1996). 'The Early Husserl and Cognitive Science'. In: Baumgartner, E. (ed.), Handbook - Phenomenology and Cognitive Science, with illustrations by Edo Podreka. Dettelbach: Röll.
Okrent, Mark. 1996. 'Why The Mind Isn't a Program (But Some Digital Computer Might Have a Mind)', The Electronic Journal of Analytic Philosophy, Website: http://ejap.louisiana.edu/EJAP/1996.spring/okrent.1996.spring.html [Accessed 10 February 2003].
Turing, A. M. (1950). 'Computing Machinery and Intelligence'. Mind LIX: 433-460.

Dr. Archana Barua teaches philosophy at the Indian Institute of Technology, Guwahati. Her major areas of interest are Phenomenology of Religion and Phenomenology and Cognitive Science.
All rights are reserved, but fair and good faith use with full attribution may be made of this work for educational or scholarly purposes.