Commentaries: "LLM has a representation of care, it doesn't care"
An interview with Terry Winograd by Harry Halpin
After meeting at a seminar on AI and philosophy at the Googleplex in 2016, Harry Halpin of Nym Technologies recently interviewed Terry Winograd, the Stanford computer science professor famous both for being one of the foremost AI researchers to embrace a Heideggerian critique of AI and for supervising the research of Sergey Brin and Larry Page on what later became Google. After his embrace of that critique, Winograd went on to co-found the d.school for design thinking at Stanford, Computer Professionals for Social Responsibility, and the Program on Liberation Technologies at Stanford.
Harry Halpin: What is your current take on the new wave of generative and data-driven artificial intelligence?
Terry Winograd: So, I’ll start with my usual disclaimer, which is that I am not a technical expert on current AI technology. The second disclaimer is that if you had asked me five or six years ago if it would be possible to get this kind of performance out of machine learning with neural networks, I would have said “of course not: There just isn’t enough there.” I was wrong. I was as amazed as anybody else at how much it seems like the computer is being intelligent. The third disclaimer is my own history of having worked in GOFAI (“Good Old-Fashioned AI”), which had a representational, logical basis: The way you get intelligence in a machine is to represent the things we can say and do and then have calculations over those representations. If you read my book with Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design,[1] there is a disclaimer that says that our critique, and the critique of Hubert Dreyfus,[2] are critiques of representational AI. If there is some other kind of machine that doesn’t use explicit representations but uses other methods, we don’t have anything to say about that. We don’t know. The arguments we are making are against a certain style of AI, but the possibility of any kind of machine being intelligent—or seeming intelligent—that’s a different question.
Having said that, I want to fall back on linguistic ambiguity, as I don’t believe there is a precise definition of what it means to be intelligent. Are dolphins intelligent? Depends on how you think of intelligence, right? So, like all terms in natural language, intelligence is used in a spectrum of ways which deals with different aspects and different contexts—especially different contexts. Recently I got a new car, and everything is intelligent: intelligent parking assistant, intelligent collision avoidance ... is the collision avoidance system intelligent? It does help stop your car if you are going to hit something. So, I don’t get worked up over the question “Is it really intelligent?” as I think the phrase “really intelligent” represents a misunderstanding of how language works. Does it behave in a way which people interpret as intelligent? Or as being like another person? Then the answer is “yes,” and that sort of AI was done fifty years ago. Getting people to see something as intelligent plays on the natural tendency of our brains to project ourselves onto the world. So, I think it’s clear that current AI is doing way better than any previous round of AI at doing things which, if a person did the same thing, would be considered intelligent. As a practical question, will AI replace workers—for example, generating our business emails rather than our having to hire someone to write them? The answer is yes, AI is doing well enough that there are a lot of tasks like that. I have always wondered about all this fuss about autonomous cars: If they do better than people, is that good enough? Or if autonomous cars have one crash, do you throw them all away? It’s certain that AI is doing a pretty good job of both driving and generating emails.
So now going back to the question: Is generative AI a model of human intelligence? Clearly not. Human intelligence works with very small amounts of data, and the success of current deep learning depends on very large amounts of data. The fact that you say it’s “neural” and so on is only how deep learning was historically motivated, but it is in no way a model of how human brains work. Do we care? That goes back to the original AI discussion that took place when I was a student: Why are we doing AI? There is one view that says we want to use AI as a tool to understand how human intelligence and cognition works. Another view says we want AI to do things that impress us, to be like humans. There was always a tension between the goals. Back when I was a student, Newell and Simon said that researchers in AI wanted to use the computer to figure out human information processing. Minsky took the opposite view, an engineering point of view: see what you can get AI to do. We believe we can get it to do all the things people do, and it doesn’t matter if AI is doing it the same way or a different way than humans.
HH: To return to the debate over intelligence, you can take what Dennett called an “intentional stance” towards AI to some extent; AI can produce behavior that appears to be produced intelligently under a functionalist or some sort of behaviourist reading. As you point out, there is still the problem of the poverty of the stimulus, as Chomsky named it: the amount of data required to train these technologies far exceeds that required by humans. Is AI about building models of intelligence, whether they are analogous to our biological workings or not?
To revisit the question of AI in terms of declarative semantics and procedural semantics: you helped build systems based on procedural semantics that were very successful in the first wave of AI, leading eventually to the KRL (Knowledge Representation Language) system.[3] So what was the resolution of the debate between procedural and declarative semantics?
TW: It went away because the whole paradigm sort of faded. There were logicians like Pat Hayes, and their view was that anything is real knowledge if you can state and declare it in the form of logic. Minsky’s view was that if a program does something interesting, then the program has knowledge precisely because it can do something interesting. The knowledge is in the doing, the calculating. Those were two different representational styles. In the end, as representational AI waned, people just didn’t care about that debate.
HH: And one of the reasons it waned is that people started thinking about sub-representational AI, such as Rosenblatt’s work on neural networks and McCulloch and Pitts’s original logical calculus of ideas immanent in nervous activity. Yet there was also the reception of Hubert Dreyfus and—via Dreyfus—Heidegger on AI. We’ve heard that you read Being and Time and taught a course on it. Can you describe the impact of this idea of non-representational intelligence, particularly on your later work after you moved from AI to design? More broadly, how did Heidegger and Dreyfus impact your work?
TW: The first thing is to be very clear: I’m not a Heidegger scholar. I don’t read German. I have skimmed and bounced off of various of his writings in translation, but realistically what influenced me was Dreyfus’s interpretation of Heidegger. If you say “Did Heidegger really mean this” or “Was Dreyfus right or wrong?” I don’t know. Yet I knew what Dreyfus said because I could understand him, and so he had a huge influence, as he introduced that whole world of thinking in a way that I had never before encountered. All kinds of cognitive philosophy courses back in that era did not touch phenomenological philosophy at all.
The history of my own personal life is that I had created a well-known program, SHRDLU.[4] Then I worked on KRL, and there were a variety of things which were getting in the way of KRL working well, some of which were technical and so had nothing to do with philosophy but rather machine memory and so on. The philosophical aspect became clear after I met Fernando Flores and Hubert Dreyfus—I actually met Dreyfus before Fernando—at an AI discussion group which had students from Stanford and Berkeley. I don’t remember the exact details, but those meetings are how I got to know Dreyfus and Searle. Then I started a series of meetings where Danny Bobrow of Xerox PARC (who is unfortunately now gone), I, and a couple of students would go up for lunch with Dreyfus, Searle, and some of their students. That was basically the encounter. There were people present who thought AI was good and people who thought AI was just wrong-headed. Dreyfus put it simply: “Dreyfus told us AI was misled, Searle told us AI was ontologically impossible, and Weizenbaum just told us AI was immoral.” Searle and Dreyfus had many back-and-forth discussions, raising questions about what we were doing and whether it was going to work. Then Fernando Flores ended up going into an interdisciplinary PhD at Berkeley with Searle on his committee, along with Hubert Dreyfus’s brother Stuart Dreyfus, Ann Markusen from economics, and myself from computer science. So his thinking crossed all those disciplines and pulled them together, and I started working with him on writing our book, Understanding Computers and Cognition. At the start, we were working on a monograph, but it got bigger and bigger, and so we decided to make it a book to get it published. The publisher kept asking when the book was going to be done, and we said “we are working on it” many times, and in the end the book finally got released. It wasn’t like we finally said the book’s done, but more that we had a lot to say that we wanted to get out there.
In terms of my own research, during that whole period of several years, I was losing the impetus to do the kind of AI that I had been working on, partly because of the influence of Dreyfus: It was just not going to work. This was based partly on my own experience of seeing how much progress we were making on trying to get AI to work. I could write philosophical books and papers as I had tenure, but I couldn’t supervise graduate students who were doing philosophy. Still, tenured faculty get a lot of latitude to do whatever they please, even if their department thinks it’s stupid. So I needed to think about what I could do for students in terms of research that would be positive besides AI: What would be valuable about computer science in the engineering sense? I went back to that basic question: What do I care about? Is it AI as a philosophical endeavor, or is it getting computers to do things for which people are still better? It was the pragmatic, not the philosophical or ontological, concept of doing: That’s what’s called “human-computer interaction.”
At the time the field was very young. Human-computer interaction really started in 1984. What happened is that up until 1984, a computer was something you dealt with only if you were a scientist or doing business data processing … or in the military. The ordinary person had no reason to ask a computer to do anything. The feeling was that if you wanted to interact with a computer, you needed to learn how the computer worked in order to interact with it: Computing was engineering, and there was a pride in being knowledgeable about computers. Then along came the computer for the rest of us, right? So in 1984 the Macintosh hit the market, and all of a sudden there was a wide variety of people who wanted to use this device for something, but who had not the slightest idea of how a computer worked, much less how to put things in terms of computing. So we needed to figure out how people could actually use these computers. The first HCI conference really just gave this idea a kickstart. Not only did HCI create a new field but it made computing commercial. This meant there was money, which in turn meant you could get sponsored research, and so the whole field just took off from that point. I didn’t actually get into HCI until a few years after that moment—in 1990, I gave a talk at the HCI conference, and then I started the HCI program at Stanford. HCI said, “Let’s not worry about whether a computer is being intelligent; let’s worry about whether the computer is doing things in a way we want it to, in a way that people can understand and so can use a computer without getting confused by it.” That was sort of the shift from AI to HCI. John Markoff’s book Machines of Loving Grace about AI versus IA (intelligence augmentation) highlights the MIT school of AI versus the school of intelligence augmentation.[5]
HH: If you have a moment before you go further into discussing design, in Understanding Computers and Cognition, you take Heidegger in a very different direction than Dreyfus’s classical work What Computers Still Can’t Do, going beyond applying Heidegger’s phenomenological critique to AI. In particular, your philosophical work with Flores created “the language-action perspective,” a new paradigm of how language and commitment can work with computing in a sense that was very far from AI at the time, and I believe the Coordinator program was created to embody these insights.
TW: Let me go into the history of Fernando Flores a little. He was a technical whiz kid, and while at the university in Santiago, he and some fellow students started a political movement and participated in student riots, so he got to know the people in politics in the Allende government, as written about in Medina’s book Cybernetic Revolutionaries.[6] He ended up in prison after the coup of Pinochet against Allende. He was imprisoned on an island in the south of Chile. He was treated pretty well, as he was not a political force. His power in the government came from the fact that the people who had power trusted his brain. Once they were gone, he had no audience. He was a technocrat, not a politician. So he was in prison, but he could have discussions with the other prisoners, could have visitors, and could read. And for some reason, the MIT-trained Chilean biologist Humberto Maturana came to visit him in prison and brought him things to read. So when we got together and started talking—he talked about all the things he had read—he just had that intellectual energy, right? This got woven into our joint thinking about AI and representation, in the same sense as the work of Dreyfus on Heidegger and representation.
My work on language-action is a bit strange. Fernando’s job before he got sent to prison was managing state enterprises, and his view of management was that he couldn’t operate unless he could count on people’s commitments. Commitment became fundamental to how things work and how things could break down. So from the side of management, he had this basic intuition about commitment being the key to successful interpersonal, inter-human interaction. Fernando was also one of those people who was at the lunches with Dreyfus and Searle, and John Searle was on his Ph.D. committee. Searle talked about commitment in his basic ideas about speech acts, namely that a promise was a commitment—that is, you were committed to its fulfillment and so on. So Searle brought in a kind of formalization of the notion of commitment as reflected in speech and language, and that really sank in with Fernando. Yes, we exist in language—to put it really simply. Existing in language doesn’t have to do with statements about the world but with the performative acts we do with each other. From there Fernando developed this idea of a “loop,” a loop that goes around in a successful conversation. It is not hard to see that the root of this is his failures and frustrations in managing in Chile. The loop is that someone commits by promising, someone else accepts the promise, and then there is a completion, and then a declaration of completion. Fernando’s thinking was centered on how to make that loop real. I worked with him on the Coordinator computer program, although I was never really part of the company, as I was a consultant. As a stray bit of information, James Gosling, one of the grad student programmers who came for a summer, was the guy who eventually created Java.
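The conversation-for-action loop described here can be read as a small state machine: a request, a promise, a report of completion, and a declaration of satisfaction. The sketch below is only an illustrative encoding in Python; the state and act names are hypothetical and are not the Coordinator’s actual design or vocabulary.

```python
from enum import Enum, auto

class State(Enum):
    REQUESTED = auto()   # one party asks for something
    PROMISED = auto()    # the other party commits to doing it
    REPORTED = auto()    # the performer declares the work complete
    CLOSED = auto()      # the requester declares satisfaction
    DECLINED = auto()    # the request is turned down

# Legal moves in the loop: (current state, speech act) -> next state.
# These transitions are an illustrative reading, not the Coordinator's own.
TRANSITIONS = {
    (State.REQUESTED, "promise"): State.PROMISED,
    (State.REQUESTED, "decline"): State.DECLINED,
    (State.PROMISED, "report_completion"): State.REPORTED,
    (State.REPORTED, "declare_satisfaction"): State.CLOSED,
    (State.REPORTED, "reject_report"): State.PROMISED,  # back to the performer
}

def step(state: State, act: str) -> State:
    """Advance the conversation by one speech act, or fail loudly."""
    try:
        return TRANSITIONS[(state, act)]
    except KeyError:
        raise ValueError(f"{act!r} is not a legal move from {state.name}") from None

if __name__ == "__main__":
    s = State.REQUESTED
    for act in ["promise", "report_completion", "declare_satisfaction"]:
        s = step(s, act)
        print(f"{act} -> {s.name}")
```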
I’ll be honest, I never really saw the language-action perspective as the key focus about intelligence. I understood why it was important, and Fernando made a whole company using these ideas, teaching people how to work with this perspective, and that was his whole intellectual focus. For me, language-action was interesting but a somewhat separate domain, whereas the anti-representational notions of phenomenology seemed like a coherent whole for critiquing the way people were thinking about intelligence and AI.
HH: The guiding concept of Allende’s project Cybersyn, run by Stafford Beer and Fernando Flores, was that if we had enough inputs to a computer we could improve the economy and solve the socialist calculation problem. Yet as Cybersyn was destroyed by Pinochet, we do not know if it would have worked at scale. Afterwards, Flores built the Coordinator, which didn’t take off but seemed very promising. Then you co-authored the first paper on the PageRank algorithm with Larry Page and Sergey Brin, students at the time, which led to the creation of Google.[7] So systems like Google are fundamentally about searching and thus human-computer interaction. What do you feel about the philosophical foundations of these systems? Why were certain systems successful or not? Is there any philosophical resonance between Cybersyn, the Coordinator, or even the early days of Google?
TW: It is an interesting question to think of it that way. For Cybersyn, I only know what I read, as Fernando never talked about it in any specific detail, although I found it quite interesting to read about. It basically was a premature—from a technological point of view—attempt to use quantitative economic modeling in a networked way. You can say Cybersyn is not Heideggerian by any means, but Cybersyn is certainly not wrong. If you believe that a model captures everything, then you are in trouble—that’s the Heideggerian part. If you believe the model can give you general information about what to do next, the model can, as shown by the whole basis of decision theory. On the political side, the question is whether Cybersyn would lead to centralization or the distribution of power. My worry is that such systems always lead to centralization eventually. I don’t know.
As for what was asked of the user, the Coordinator required the user to be conscious and explicit about the speech acts they were performing, and it focused its structure on that aspect of language. If I write a message to somebody asking them to do something for me, I don’t write a message which says “Dear Joe, I would like you to commit to doing the following given these conditions.” Taking out the jargon, the message actually says “Joe, I really liked having beer with you last night, and I hope that your hangover isn’t too bad and you might be able to do something tomorrow.” When they interact with other people, people don’t want to be narrowed down to the philosophical bare bones, as there’s a whole set of interpersonal aspects about establishing connections and good feelings, and the Coordinator had none of that. If you used the Coordinator the way it was taught, the Coordinator went down to the bare bones of what needed to get done. I think that came from Fernando’s frustration at being part of organizations where people never got down to what needed to be done. Yet it was sort of alien to most users, and a lot of the reviews called the Coordinator “fascist software” because it made you be explicit about what you were going to do. A lot of people said, “I don’t want to be explicit, I don’t want to tell my boss when I’m going to get this thing done, as I don’t want him to come to me and say why didn’t you get this done by then. I want to keep him at bay.” It was argued that the Coordinator was management-oriented software as opposed to user-oriented software, as it just didn’t fit the way most people wanted to do things. Even if the underlying idea was correct, making commitments explicit—which is what the Coordinator did—is not necessarily what people want to do. The failure of the Coordinator doesn’t argue against the value of the analysis; it just shows a lack of resonance with the way people like to communicate with each other.
Google is a totally different domain. What is Google? The way Google emerged was straightforward, which is namely that people started search engines as information burgeoned on the Web and people wanted to find it. It was obvious. You wanted a directory to find information. At first, people hand-made a directory and they were pretty successful at that. Then search engines like AltaVista could find information automatically when you typed in keywords. Search engines had very little philosophical underpinning as it was like a dictionary for the Library of Congress for all the Web. The wrinkle that Google put in the mix was that you don’t measure the relevance by just looking internally at a set of documents, you measure the relevance by who else has linked to a document. There is a whole story about how that works, but that was the key change from previous search engines, a change that made Google much better at finding the things you want. That’s why Google got big and rich and so on.
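The link-based “wrinkle” described here is the core of PageRank: a page’s score comes from the scores of the pages that link to it, rather than from the page’s own text. What follows is a minimal power-iteration sketch in Python, assuming a toy link graph and the commonly used damping factor of 0.85; it is a simplified illustration, not code from the paper cited below.

```python
# Minimal PageRank by power iteration: rank flows along links, so a page
# linked to by many (highly ranked) pages ends up with a high rank itself.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical toy web: most pages link to "A", so "A" is ranked highest.
toy_web = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A", "B"]}
print(pagerank(toy_web))
```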
However, there was a kind of philosophical commitment behind Google which I never bought into, which is that Google organizes “all the world’s information.” The phrase “all the world’s information” raises all sorts of philosophical issues, starting with the word “information.” What is information? The other problem is the global sense in which Google claims it can capture all information in some technical device. This is opposed to the fact that some of what people know is written out on the Web, some of it is on YouTube, but a lot of what people know is their lived experience that can’t be put into a technical device, and that’s still part of the world’s knowledge. So Google takes a narrow view of what constitutes knowledge. Although from a practical point of view Google has been very successful, from a philosophical point of view I would say that this view of knowledge is sort of shallow. Now, I have not talked to either Larry or Sergey for ten or twelve years, so I don’t know their current views. They probably think “So what? It works.” Do you really think that is all the world’s knowledge on Google, or is it all the world’s expression of knowledge which has been put into a visible form on the internet? We understand there are other kinds of knowledge that Google is not capturing, but this visible knowledge may be what people want. I have not had that discussion with Larry and Sergey, but that is because they are not driven by caring about the philosophy. If you told them that you don’t agree philosophically with what Google has claimed, they could respond “Look how many users we have!” That’s where Google is.
HH: Are there any other projects that you have that you would be interested in discussing?
TW: Let me go back to design. So I was teaching new courses on human-computer interaction at Stanford, and there wasn’t a canon to teach because it was a new field. What I realized very quickly is that you could not teach human-computer interaction by lecturing in a classroom. You could show certain aspects on slides, but the way people actually learned human-computer interaction was through projects, by doing things. There’s this joke where you tie down a software engineer and gag him, and have him watch the user, and he continually wants to say to the user “Don’t do that!” So from the very beginning the courses were practice-oriented rather than lecture-oriented.
I got to know David Kelley, who had founded the design firm IDEO with Bill Moggridge decades ago with the philosophy of user-centred design. In the traditional industrial design sense, you can make a mouse which has a beautiful case and so on. IDEO said, “No, in design you don’t just want things to look nice, you really want to understand what the user cares about and how the design maps onto their caring.” This is not in computing but in physical objects. So Kelley was teaching project-centred courses in the mechanical engineering department on designing things for people. We got together to create a course on design which included both a physical and a computational aspect.[8] We insisted that every team have at least one person who knew computing, one person who knew mechanical design, and one person who knew the social side, from the humanities, linguistics, anthropology—their focus was people, not devices. So we pushed interdisciplinary teams to do projects—some of which came out very interesting—and it was a learning experience for them and for us. At that point—to simplify this whole history—Kelley always had a lot of trouble in the mechanical engineering department, in the same way that I always had trouble in computer science, although I was less of a troublemaker; some people said what I was doing was not “real” computer science. Kelley gave a talk in mechanical engineering about the iterative approach where you try a design prototype and you test it and you change it and so on, and one of the faculty got up and said, “In my field we think before we do things.” The implication being that this whole iterative cycle was the result of a failure to think.
Kelley asked me to help him create something that was not part of these traditional departments, and he then managed to convince Hasso Plattner, who was one of the founders of SAP, to show up with a bunch of money. At Stanford, as at most institutions, if you get a bunch of money you can sort of do what you want. So Kelley created this thing called the “d.school,” although the Provost said we couldn’t call it a school, as that requires a 150-million-dollar donation. The d.school was never officially a school, even though everyone called it a school—it is the Hasso Plattner Institute of Design. Having that chunk of money gave Kelley the freedom to create, and I was part of the interdisciplinary faculty from various departments that got involved. We thought about courses that could serve broader audiences—in the courses I had been teaching, the focus had been computers—whereas in the courses we did at the d.school, students designed social systems which didn’t use computation. So that’s how design found its way into my thinking and activities.
HH: Maybe we could step back a little bit—design work is absolutely fascinating but I have a question going back to Heidegger. Towards the end of his life Heidegger became very concerned that somehow the world itself was being slowly enclosed, enframed in a Gestell, a kind of digital and cybernetic apparatus. What is your analysis of it?
TW: One of the readings I used in courses at times was “The Question Concerning Technology” by Heidegger, which didn’t talk about cybernetics directly but was about a dam on the river and so on. Technology influences how we experience the world in a foundational manner. I’m not sure what it means to be digital. If I’m watching TikTok, is that a digital experience or a digitally mediated experience? Most of what people worry about is the danger that digitally mediated experiences with other people have a different nature than experiences that are mediated face-to-face, especially when the mediation gets into things like choosing what you’re going to see. Am I having a digital experience on Zoom? Not really. It is mediated with a camera, but in some sense we are facing each other and talking and experiencing each other as people. I don’t think there is a generic notion of the world becoming digital as much as we are allowing mediation through the digital, and that mediation includes control, which of course is the whole point of the critique of the large companies like Facebook. There are dangers, but I don’t think from a philosophical point of view the world is digital.
HH: Did you ever go beyond Division 1 of Heidegger's Being and Time? You said you looked at the “Question Concerning Technology,” do you feel like there is any value in late Heidegger?
TW: The answer is I think not. Of course, late Heidegger falls under the shadow of Nazism. I don’t really see anything in Heidegger beyond what Dreyfus said about Division One of Being and Time. But I’m sure there’s work that’s relevant if interpreted in the right way.
HH: It’s quite tragic we never got the take of Dreyfus on the rest of Heidegger’s later work. Of course, there is a profound critique of technology from philosophical circles. Yet I much prefer the world with a search engine than without. I prefer the fact that I can talk to you over Zoom, rather than having to fly to California. This brings us to the question: Does technology have emancipatory potential?
You were involved in what is called the Program on Liberation Technologies at Stanford for a long time, a movement that prefigured the interest in development, democracy, and even anonymity on the Internet, with a focus on the Global South as exemplified by the Arab Spring. Where does the program fit into your trajectory as a computer scientist?
TW: Joshua Cohen, who was a professor of philosophy and political science at Stanford, and I started a course in the d.school where we asked students to develop projects in Kenya, in Nairobi to be specific. In order to get the students out of the assumptions they were making about Kenyan users, the students worked with local organizations and a university in Kenya, looking at the lives of people living in a slum of Nairobi. Our students actually went over to Kenya and, after their return, one of them started a weekly speaker series for people at this intersection of technology and social science, although without much philosophy. We needed a name for the entity which sponsored this speaker series, and since we didn’t know what to call it, we called it the Program on Liberation Technologies. Of course, the name “Liberation Technologies” is taken from “liberation theology.” In the same sense that you can be working for God and also for liberation, you can be doing tech but also for liberation—in a very vague sense of the term liberation, perhaps not in the Marxist sense.
So we started the Liberation Technologies speaker series, and we had a grad student who was very interested in it and was actually helping to organize it. He said, let’s start an email discussion list, so he created this email discussion list that took on the name Liberation Technologies, without any more precise notion of what “liberation technologies” meant other than the lecture series. This discussion group took on a life of its own. Some of it went off in different directions as more and more people became interested in privacy and tech in the militarized world. The people who were conversing about this set of issues didn’t particularly connect to what we, who started it, were spending our time thinking about. But if you get a good discussion going, then let it go. The program went on for many years. Joshua Cohen ended up leaving Stanford and went to Apple University, their internal teaching group, to try to get social thinking into Apple. At some point, the speaker series ended, but because things on the internet never die, the online discussion list just kept chugging along, sort of like evolution. The Liberation Technologies mailing list is very different from what it was, but it’s still a descendant of our work.
HH: To return to the topic of AI, we now have AI systems that aren’t clearly representational, as exemplified by large language models that are based on vectors. Philosophers like David Chalmers have argued that it’s possible that these large language models are genuinely intelligent—in whatever manner that means—as they can act behaviorally or functionally the same way as we do in some limited senses. Yet it does seem clear that the LLMs do not have much of a sensory apparatus. Still, on a higher level, do you think an LLM could have a world? Do you have any thoughts on whether ChatGPT has a being-in-the-world? Could a large language model have its own umwelt?
TW: My reaction is to think whether or not I have any notion of the distinction that is being made here. If the distinction is whether or not existing generative AI has a sensory apparatus, that is actually an interesting question because it has a huge visual sensory apparatus, connected to all the cameras in the world. As for an auditory apparatus, it has plenty of microphones. Now it doesn’t have touch, and you might say that human experience and the world we live in consists of the things we can put our hands on. If that is the definition of world then clearly ChatGPT doesn’t have it. Yet what if it did? What if someone built robots with touch sensors on their hands and they connected those robots to ChatGPT so it could go around feeling things, would that make a difference philosophically? I’m not sure that somehow this would make a difference. It’s not our world in the same way that the world of a dolphin isn’t our world either. You could argue, in fact, that my world isn’t your world. So I think I don’t believe ChatGPT has a world in that sense.
The problem with computers is they don’t give a damn, as John Haugeland said. What does ChatGPT care about? The answer is: it’s a meaningless question. ChatGPT doesn’t care. Care is not part of its functioning. ChatGPT does what it does, and it doesn’t care about anything. But if you ask what I care about, then that is a great psychological question and discussion. People do care about something, and in that sense you can say the world is care—as Fernando Flores pushed, what makes our world is that we care. So if you say that ChatGPT doesn’t have a world, it is because it doesn’t care about anything.
HH: In his book The Promise of Artificial Intelligence, Brian Cantwell Smith posits that though it’s theoretically possible that an AI like ChatGPT could care, it definitely doesn’t now, as it’s missing the social world and moral commitment.[9] Can you get genuine care from a machine?
TW: When I was doing my PhD at MIT, Brian Cantwell Smith was an undergraduate working with me, before he ended up branching off into philosophy. What if somebody programs an overarching structure to the LLM, which has a representation of what the LLM cares about? Of course now we’re back to representation: The LLM doesn’t care, it just has a representation of care. That’s the same question that has been bouncing around for decades in AI.
HH: I heard you were working with Dreyfus and B. Scot Rousse on some new philosophical writings. What are you working on now?
TW: I have a disappointing answer to that question: What am I working on and where is it going? I am going to talk about politics, so I’m not doing work in AI or philosophy. My focus really is—as with many retired people—my legacy to the world in the sense of direct political action. What has recently happened is that we are going to have a dictatorship in the United States. The question is: How do you fight against that? The other aspect which we’re very involved in, my wife in particular, is both feeling a connection to Israel and feeling horror about what’s going on in Israel, and working to sort of support people who are trying to bring a more peaceful direction to the conflict. As for my email on a daily basis, there are a few emails about AI, where I’m more a curious observer, but there’s a lot of emails about politics. So I’m not keeping up with the philosophical literature. For me, the questions I’m focused more on are how does AI affect our lives and interactions with other human beings: What does it mean to have our interactions mediated in the ways they are currently being mediated? For example, how does that mediation affect trust, which is a huge problem. Look at all these polls where large percentages of minorities believe crazy theories about the other side. Everybody believes in their side and that there is an opposition, as opposed to being more realistic about things and realizing there is a spectrum. This is being driven to a large extent by computer-mediated interactions. Again, what can we do about that? How can we salvage or return the sense of community and interaction with other people which gets lost when you see them as a Twitter feed? I would say the philosophical questions that drive me are more about human social structure—and the way it’s changing—than about cognition.
References:
[1] Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design (Norwood, NJ: Ablex Publishing Company, 1986).
[2] Hubert Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge, MA: MIT Press, 1992).
[3] Daniel Bobrow and Terry Winograd, “An Overview of KRL, a Knowledge Representation Language,” Cognitive Science 1, no. 1 (January 1977): 3–46.
[4] Terry Winograd, “Understanding Natural Language,” Cognitive Psychology 3, no. 1 (January 1972): 1–191.
[5] John Markoff, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots (New York: HarperCollins, 2015).
[6] Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (Cambridge, MA: MIT Press, 2011).
[7] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd, “The PageRank Citation Ranking: Bringing Order to the Web,” paper presented at the World Wide Web Conference, Toronto, Canada, 11–14 May 1999.
[8] Terry Winograd, “From Computing Machinery to Interaction Design,” in Beyond Calculation: The Next Fifty Years of Computing, eds. Peter Denning and Robert Metcalfe (New York: Springer-Verlag, 1997), 149–162.
[9] Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment (Cambridge, MA: MIT Press, 2019).