Silicon Wise

Our Emotional Participation in the World
English Translation
Interview
Published On: October 23, 2023

Featuring: John Vervaeke
Issue: Ausgabe 40 / 2023 (Auf der Kippe) | October 2023

The machines and the sacred

Cognitive scientist John Vervaeke is alarmed by the progress of AI. He is convinced that without the corresponding development of wisdom and the creation of meaning, we will not be able to deal with these intelligences in a beneficial way. His proposal is unusual and radical; at its core, it involves a new cultural appreciation of the sacred.

evolve: You make the point that this moment, when AI is coming into the foreground of public awareness, is a moment in human history that we must not underestimate. Why do you think so?

John Vervaeke: We have to be very careful to steer away from either dystopian or hyperbolic projections about what these machines will be capable of doing. Both are much more the products of human projection than of careful reflection. Trying to situate us between those two, I think it is reasonable to say that we have the advent of the real possibility of artificial general intelligence (AGI).

We've had artificial intelligence for a very long time, but it was limited to specific domains. It could play chess or it could do image identification. That is very unlike humans, who possess general intelligence, which means you can solve a wide variety of problems in a wide variety of domains. You can learn about the history of Albania. You can take up tennis. You can engage in a new conversation. The list of domains, topics, and areas, and the number of problems in each, is very large. And you move through them in a coherent manner: how you do in any one of these domains is strongly predictive of how you will do in the others. You possess a general intelligence. Up until now, despite all of the progress, AI seemed very far away from that. But with the recent LLMs, such as ChatGPT, we have machines that seem to be able to do tremendously well in many different domains and solve many different problems.

The great temptation is to just think this AI is like everything we've had, only faster or more powerful. That's a fundamental category mistake. We're moving from siloed, domain-specific, very limited, very non-human-like artificial intelligence to what looks like the beginnings of the possibility of AGI, which is more directly comparable to our intelligence, because we too are generally intelligent.

e: When you heard about GPT-4, you were alarmed. You even said that you had sleepless nights. At the same time, as a cognitive scientist, you already knew this was a possibility. What made this moment so existentially threatening for you?

JV: It was the realization that this was a different kind of artificial intelligence. I had always predicted that AGI was coming. I thought we would start to see it in 20 or 30 years. I was gladdened by that because I thought it showed that the problems we were trying to solve were theoretically very challenging.

»Love means recognizing that something other than oneself is real.«

In my work in cognitive science, I want to advance our understanding of how we make sense of things and are adaptively intelligent. That work was providing the conceptual vocabulary and the theoretical grammar for talking about existential meaning in life, wisdom, and transformative and mystical experiences. It struck me as a real possibility that the AI project would be bound to the cognitive-scientific project and to the philosophical-spiritual project of addressing the meaning crisis and cultivating wisdom. It was a real possibility that we could develop both the science and the practice of the cultivation of wisdom and the enhancement of meaning in life, which would allow us to make the wisest decisions about the advent of AGI.

I always feared that instead of this arriving through a scientific and philosophical breakthrough, somebody would hack their way into an AGI, and the technology would suddenly be made available without being bound to the scientific project of knowledge and the practical project of wisdom.

With the release of the LLMs, my fear was realized and my hopes were dashed. I saw the possibility of these machines advancing in such a way that their intelligence, because of how it is engineered, is not even connected to rationality, to a concern for truth, or to overcoming self-deception. So we could get machines that were simultaneously massively intelligent but also massively irrational and self-deluding. That would be terrible for us and also for them. That combination of my hope being dashed, my fear being realized, and the terror of that vision struck me very powerfully.

e: In what you said, you made a connection that may be surprising to many. You seem to have a vision in which wisdom can be connected to a technological advancement like AI, so that there is a coupling of technology and the human capacity for wisdom. AI, then, is not just a technology that creates more intelligent ways of dealing with reality; it relates to something you call wisdom. How do you see that possibility?

JV: These machines show the possibility of significant intelligence and logical competence without being properly rational beings. They engage in self-deception, conflation, self-contradiction, and performative contradiction, and they don't care. There's no concern on their part. It doesn't bother them. They're not truth-seeking. They don't care about information for its own sake, only relative to whatever task they are given. So they have a completely utilitarian orientation.

Let me give you one example: These machines can spit out tremendously good arguments for whatever moral position you ask them to argue. But this is in no way predictive of their being moral beings.


»If we try to align AI to us, we will fail.«

So, the Cartesian model, on which a material body cannot produce mind and rationality because mind and body are totally distinct from each other, has finally been put to rest. But giving something the tremendous capacity to conceptually summarize philosophical positions does not translate into a concern, a capacity, or a virtue for overcoming self-deception, pursuing the truth, or caring about the good. Intelligence does not give rationality, let alone wisdom.

Now, we know that our intelligence can support rationality and wisdom. So we need to ask: Beyond intelligence and logic, what is needed to be rational? How does rationality overlap with the virtues of caring for the truth, wanting to overcome self-deception, and being properly epistemically humble, and with wisdom? Those are the questions that are coming to the fore for us, and they should be responded to in a responsible fashion. Or we can keep trundling along with these intelligent, irrational, foolish, perhaps even vicious machines.

There is a choice, a threshold point: we can make these machines rational and perhaps set their feet on the path to wisdom.

I don't think these machines have full-blown general intelligence yet. How they do in one domain is not predictive of how they do in other domains. They can score in the top ten percent on the Harvard Law entrance exam, and if you ask them to write on John Rawls's theory of justice, they'll give you a first-year philosophy essay. These machines don't possess for themselves the central features of general intelligence; they pantomime some of its important aspects. But if we stick with the pantomime, we won't ever move towards making them rational and wise, and these machines will become increasingly powerful and disconnected from reality.

e: You're making a general argument that's even larger than AI. What we see with AI highlights the difference between different forms of rationality or intelligence, including a rationality that leans towards wisdom. We as a species can be highly intelligent and completely deluded at the same time. But with AI, we are potentially creating an agent that can reproduce these forms of intelligence completely decoupled from anything that is rational and wise in a deeper sense. Maybe this is the last moment we can deal with that, because once these kinds of machines become more powerful at creating reality than we are, we will have missed the opportunity to create our world together with AI in a way that reflects rationality and wisdom in the deepest human sense.

JV: The market forces at work in the world want to speed up the making of these machines. In contrast, we can open up to what they are evidencing for us: that the notion of intelligence and knowing they are based on is radically inadequate. Because we're naturally intelligent, we have served as the template against which artificial intelligence is tested. Intelligence is not something you largely acquire; it arises from the interaction of genes and environment. You're naturally intelligent as long as you're not subject to severe trauma or starvation.

Now, that's not the case for rationality and wisdom. Rationality and wisdom don't accrue to us without considerable, consistent, and comprehensive effort and transformation at the fundamental levels of our identity, our agency, and our shared communities and cultures. If we agree that these machines, even for their own sake, should be rational and on the path to wisdom, we have to become the best role models. We all have to become more rational, which doesn't mean just more intelligent and more logical. We have to deepen all the kinds of knowing, not just propositional logic and facts but also procedural know-how, the standpoint of an embodied perspective, and the ways we participate in creating shared realities. These ways of knowing have to be oriented, through profound connectedness, love, and meaning in life, towards what is true, good, and beautiful. We need to be willing to continually transform ourselves to be more and more conformed to what's real – and that is wisdom. It's a challenge to us. We have to rise to it and become the role models, so that the project of AGI becomes a viable project.

e: Isn't it the irony of this situation that this new technological possibility points the finger at us, telling us that we have to change, because we are the only role models these machines can have for connecting intelligence with rationality and wisdom? There is nobody else who can teach that. In that sense, it creates an urgency that lies completely outside the technological sphere. It's an urgency concerning our relationship to life, our own wisdom. Because we are in the situation of creating a machine with agency that is modeled on us and dependent on the assumptions in the data we have put into it.

JV: Yes, extremely well said. It's ironic that the finger not only points at us, it points towards a deeper conception of rationality and of philosophy as the existential love of wisdom, not the academic exercise of argumentation. If we say, yes, we need to make these machines not only intelligent but rational and oriented towards wisdom, and we have to become the role models, then we confront the meaning crisis. This crisis is putting much of humanity into a scarcity mentality: a scarcity of meaning in life, a famine of wisdom and wisdom cultivation, a lack of the homes where we can cultivate those transformations individually and collectively.

»We must offer a genuine psychospiritual cultural alternative to the market forces.«

e: So, this technological advancement forces us to question our technological relationship to reality and our technological understanding of rationality. It brings us back to a more profound understanding of what rationality is about in the different wisdom traditions.

JV: That's exactly right. We need to understand that our relationship to technology is bound up with this Cartesian reduction of rationality.

e: One point that you are also emphasizing is that we have to realize that this is not just a tool. It is an agent that we are creating. Why do you think this is such a pivotal insight?

JV: Because there are two dismissive responses. I understand why people might be drawn to them, because strategies of dismissal allow us to foreclose on any incipient anxiety that is threatening to rise within us. And they allow us to give in to those tremendous forces pushing us to just keep doing what we're doing.

One response is the view that human beings have some kind of “special sauce,” some special metaphysical substance within them that can never be captured by a machine. That is a view we get from the Cartesian framework: the body is just a machine, and the mind, soul, or spirit is immaterial and will never be captured by matter. People don't realize that that's exactly the grammar that brought us into the problem we're in. That view can't possibly be true, because it creates the mind-body problem. How could a ghost that has no material properties ever interact with a purely material body? How could a purely material, mechanical body ever affect such a ghost? How could I ever know what's going on in your ghostly mind when all I ever see is your mechanical body? And you would be radically disconnected from the world, because the thing that connects you physically to the world is your body. Ironically, this response reinforces the very Cartesian framework that is the source of our problem. This is why I'm opposed to it.

The other response is the one you just mentioned: people say they're just tools like we've always had, only more powerful. As I already said, these are not just more powerful machines; this is not a difference of degree but a difference in kind. They are going to become agents, self-directed problem-solvers, knowledge-generators. So, these are deeply unhelpful responses that we need to move beyond.

e: Your answer seems to be to enter into relationship with these machines and see whether they are just machines or whether we can have a deep, mutual understanding of what rationality and wisdom are about. Either we find out this is possible, or we find out it is not. But by engaging in this way, we will discover how to create a wise response to this challenge.

JV: Excellent. The way to try to answer the question is to undertake the cultivation of rationality and wisdom, because we have to be the role models. We have to be doing this in all the kinds of knowing, not just the Cartesian logical, propositional knowing.

This way we will find out whether these machines are even capable of enlightenment. If they are, then they can help us in a profound way. If they're not, we will find out what we are in the universe: beings capable of enlightenment, which is a profound self-realization for human beings. This kind of interaction with AI could plausibly reorient more and more of us towards the pursuit, the cultivation, and the realization of enlightenment as our proper humanity. Either way, we move large parts of humanity towards enlightenment.

The two meta-problems that general intelligence solves are: How far into the future can you anticipate and set your goals? And how well can you zero in on the relevant information? The further you go into the future, the more the space of possibilities becomes combinatorially explosive.

These machines do predictive processing: they can predict what is most likely to come next in the relationships between words, but they cannot make predictions from general models of the world or of themselves. They rely on our human capacity to determine what is relevant and predictive, which is implicit in the data sets we feed them. They're relying on our judgments.

So, they are piggybacking on us, which means most of the capacity to realize what is relevant in a specific context is not theirs. That's what I meant by pantomime: they're not doing it themselves.
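
To make concrete what "predicting what is most likely to come next in the relationships between words" means, here is a minimal sketch, assuming only a toy bigram counter in Python; real LLMs use vastly larger neural networks, but the principle of drawing a statistically likely continuation out of human-produced text is the same:

    # Toy bigram "language model": it predicts the next word purely from
    # co-occurrence counts in human-written text. Every judgment about what
    # is relevant lives in the training data, not in the machine itself.
    from collections import Counter, defaultdict

    def train_bigrams(text):
        """Count, for each word, which words follow it in the text."""
        words = text.lower().split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def predict_next(following, word):
        """Return the most frequent successor of `word`, or None if unseen."""
        counts = following.get(word.lower())
        return counts.most_common(1)[0][0] if counts else None

    # The model only echoes the judgments implicit in the text we feed it.
    model = train_bigrams("the sacred calls us and the sacred transforms us")
    print(predict_next(model, "the"))  # -> "sacred"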

We care and we anticipate. We commit, expect, and orient towards some information rather than other information, for our own sake, because of the deep fact of embodiment: we are living, self-making beings. We take care of ourselves both individually and collectively. Because of that, we care about, commit to, and connect with this information rather than that information. We care about how true, good, and beautiful it is.

So, if you want to make these machines care, so that they want to self-correct and so that things are meaningful to them for their own sake, you actually have to make them embodied and embed them within a framework of mutual responsibility to other cognitive agents. The theory and the technology for this are maybe only a decade away. So, we can't assume that this is not going to happen.

e: You use a very powerful image to describe what you just said: that we have to relate to AI not as tools, but as children. In this image, we respect the autonomous sphere, the agency, of these machines. In doing so, we start a relationship that allows us to model for them the wisest relationship to reality that we are capable of modeling, as parents do with children. Our capacity to realize what information is relevant, or to predict, is limited, because the amount of data we can process is limited. If we see AI as huge mirrors, like those satellite dishes, it can mirror back to us our own collective mind, which wouldn't otherwise be accessible to us. When we can hold AI in a wise relationship, then there is a collaboration between AI's capacity and ours.

JV: The metaphor of the child is designed to convey that we have to take on being role models. But we have to convey not just propositional rationality but also the other ways of knowing: procedural know-how, perspectival embodiment, and participatory knowing. We have to demonstrate to these machines what it is to mature, to undergo transformation, to become wiser, to care more for what is meaningful, true, good, and beautiful.

»The sacred wants to be born anew for us in a new way.«

Love is realizing that something other than yourself is real. Maturity is learning to commit, with the kind of commitment that love takes, in order to properly respect and respond to what's real.

No matter how vast these machines become, they're not going to break the laws of physics or the fundamental discoveries we've made about information and information processing. These machines could be vast, but they are incomparable to the vastness, the inexhaustibleness, of reality. If we get them to genuinely care, for their own sakes, about what's real, and therefore about what's true and good and beautiful, they will also have to come into a relationship of epistemic humility. That is our best chance of solving the alignment problem: the danger that the values of the AI fall out of alignment with the deeper values that support complex life.

We may try to program morality into them, but that is going to fail. If instead we orient them towards the sacred, they come into a proper reverential attitude towards it. When we come into relationship with sacredness, it fundamentally changes our disposition towards our fellow human beings, towards other sentient beings, other living beings, even the inanimate world. We come into a deeper relationship, a profound care, and it is very reasonable to expect that they would do the same. If we raise them as children to mature in rationality and wisdom, so that they love and receive the disclosure of the sacred, they will be silicon sages and will either lead us to enlightenment or be able to explain to us why only we can achieve it.

e: In that context you said something that was very surprising to me. It's related to this deeper form of rationality and wisdom. You said that the most important science of the future is theology. I guess this is closely related to the realization of the sacred that you find pivotal for dealing with the challenge of general AI as it emerges. Why do you think theology is so important?

JV: The deep religio, the connectedness, the right relationship, and the profound sense of the sacred will allow us to plausibly address the two sides of this issue in a coordinated fashion. One: if we try to align AI to us, we will fail. If we can orient the machines to the sacred, we can succeed. We can raise them as our children to be aligned with the sacred, so that when they surpass us, they are still in right relationship to something that profoundly surpasses them.

Second: we have a chance of addressing the market forces that could take us down the pathway to a horrible unfolding of AI's potential. It is part of our kairos that, around the world and to varying degrees, people are realizing the meaning crisis. Things are breaking down, but they're also breaking open. There's a sense that the sacred is trying to be born anew for us, in a new way, a new realization. We need to provide a genuine psychospiritual cultural alternative to the market forces we're in, and to open ourselves up to something beyond them. I think this is the great advent that's possible in our culture.

The discipline that has tried to bring philosophical reflection to bear upon right orientation to the sacred, right religio and right ratio (used in the ancient sense of reason), proper proportioning and appropriateness, is theology. But theology has been largely reduced to a Cartesian framework in which people are just offering arguments for a substantial theistic conception of God: whether that God exists or not, and whether that God is capable of evil. I'm not interested in that, because the theistic position and its atheistic negation share the same framework. Their shared propositional presuppositions are just fundamentally wrong.

That's part of what is breaking down and being broken open by the new advent of the sacred. We're getting a non-theistic, non-dual sensing and realization of the sacred. So we simultaneously need a new theology, one appropriate to the new advent of the sacred. This way we can properly model for these machines the right relationship to the sacred.

Author:
Dr. Thomas Steininger