The Third Attractor

Published on: October 23, 2023
Featuring: Daniel Schmachtenberger
Issue: evolve no. 40 / 2023 | October 2023, »Auf der Kippe« (On the Brink)


Beyond self-destruction and external control

Daniel Schmachtenberger has been working intensively on the meta-crisis in which we currently find ourselves. He explores the complex interactions of ecological, social, political and spiritual crises, which are being exacerbated by the rapid development of artificial intelligence. Despite the increasing risk of catastrophic events, he also sees hope.

e: What was your response when ChatGPT was released?

Daniel Schmachtenberger: I recognized it as the large-scale public deployment of a technology that people would quickly use for lots of things, become dependent on, and build businesses on. But this deployment happened without the safeguards that would be necessary. The risks were now in the wild and very hard to reverse. It decentralized the capability rapidly, both through leaks and through the reverse engineering that others can do to create open-source models.

OpenAI had published a paper about the scaling laws of this technology which showed how much more powerful it could get simply by increasing data and parameters. And it becomes even more powerful with multimodality: the ability to move not just from text to text, but from text to images, video and music.

I thought about the imminent deepfake potential, but also about the ability to have the functional equivalent of a huge research team of people who know chemistry, physics, engineering and biology, whom you could ask questions in natural language and get answers from for all kinds of purposes, both positive and destructive.

But my greatest concern was and is that AI radically decentralizes catastrophic power. As a result, it multiplies catastrophe scenarios: it lowers the barrier to entry for catastrophic capability. It also increases market growth and the environmental destruction associated with market growth, rather than producing technological efficiencies that save the environment. More likely, it accelerates us towards crossing the planetary boundaries that ensure we have a livable atmosphere.

So it drives both catastrophes and authoritarian dystopias. This technology can empower “bad actors” to do things that could lead to violent, terribly destructive outcomes. It can also empower “good actors” to increase efficiencies in their businesses, through supply-chain optimization and better energy utilization, which allows them to grow those businesses. The net result is more extraction and more pollution of the environment. But it also radicalizes the dystopias. How oppressive a top-down control system can be is limited by its ability to process information about what everybody is doing. If you have an Internet of Things with sensors monitoring what everyone is doing everywhere, and AI is able to process all of that, you can create a centralized governance system. Large Language Models like ChatGPT are a technology that enhances catastrophe and dystopia simultaneously.

e: I'm interested in the catastrophe that is looming in relation to the human spirit. What does this mean for us as organic beings?

DS: It's a powerful technology. It would be silly not to acknowledge the many things it can do that we would all like to be able to use. We're interested in advancing immuno-oncology for childhood cancer, in tech that could decrease pollution, in fusion energy. There are also surprising positive effects of the chat functions. Because of the one-child policy in China, there are tens of millions of excess men relative to women. These men are unmarriageable. That's a terrible condition, which usually leads to violence in societies. But right now in China, a huge number of them are entertained by a female chatbot. In some ways that's better than total loneliness and violence.

»There is a naive progress narrative that things are getting better and better, because of tech, capitalism, the philosophy of science and democracy – all the things that modernity brought.«

There is also the exploration of chatbots as companions for the elderly. They have someone to talk to who talks to them and knows their name, as opposed to just being alone in front of the TV. But it is also disgusting that the answer is not to reunite older people with young people at the center of culture; it is to make it easier to have nobody pay attention to elderly people, because we can just make a robot do it. We can use the same application for kids.

It is similar when we think about applications in education. Is it an unbelievable educational opportunity to have a tutor bot that knows everything and can adapt itself pedagogically to the student and teach in the way the student wants to learn? It's amazing. But it doesn't love the student, it doesn't care about them. It can have no mirror-neuron experiences. How significant is that to the educational process?

I think there are human-spirit applications of the tech that are interesting. But the way the market and government roll this forward will more likely move in the way most tech has, which in general is spirit-impoverishing. And this is a technology that portends our extinction as a species biologically, not just spiritually.

e: Can you explain the biological risk?

DS: The major labs that make publicly available versions of LLMs (Large Language Models) place legal controls on them that restrict them from answering a question like “How do I make anthrax?” But the jailbroken versions and the open-source versions don't have those. US national laboratories and open-source researchers have had open-source LLMs answer questions like “What is the largest number of people I could kill with X dollars using biological weapons? What would be a step-by-step process to synthesize particular genomes? Where would I get gene printers?” They can answer all kinds of other questions, too, about vulnerable infrastructure targets that drones could attack. With this kind of information, AI is a nuclear-level catastrophe weapon being radically decentralized.

In a situation where the Stockholm Resilience Centre has just shown that we've radically crossed six of the nine planetary boundaries, the benefits to business from increased efficiency don't mean you become environmentally sustainable. On the contrary, you use more, because increased efficiency means lower material or energy costs, which means cheaper inputs for market areas that weren't profitable before. More total exploitation occurs unless you have a corresponding law that binds these applications, which we don't have. The same technologies can be used by state actors to make better hypersonic missiles and better AI weapon systems, which we have incentives to do. And there is the movement towards autonomous AI that can act as its own agent. All of these lead to possibly existential scenarios.

e: With technology there has been the hope that its advance will also lead to a better life for more and more people. Do you see that narrative shattered by this kind of new technology and its risks?

DS: There is a naive progress narrative that things are getting better and better because of tech, capitalism, the philosophy of science and democracy – all the things that modernity brought. But that narrative is the result of history being written by the winners, who have the resources to create and sustain that narrative. Rarely does someone win and say, “We were actually the bad guys. We were more violent. They were lovelier people; we just had better guns.”

The idea of technological progress, that everything is getting better, leaves out the 100 million Native Americans whom we genocided. They don't have the same story of progress. The tens of billions of animals in factory farms, as well as the species going extinct from human action every day, are not in line with that progress narrative either.

»But my greatest concern was and is that AI radically decentralizes catastrophic power.«

With our technology we solve narrow problems that increase our immediate gain in power while externalizing harms to lots of other places, people, species, spaces and times. We do this with a narrow intelligence that looks at a part, separate from the whole, and says, “I'm going to make more of this part.” All the technology humans have ever created, from splitting atoms to editing genomes, comes from this intelligence that separates out certain high-relevance parts from the whole and then figures out the causal relationships to make a technique out of them. AI is us making that intelligence itself, outside of us, exponentially better. It implies that this kind of intelligence is realizing itself. In this way it is different in kind from all other technologies.

e: This form of intelligence also disconnected mind and body. Now we have an externalized mind in the Large Language Models, and no body. So, what is the human in that context? AI developing agency on its own seems to be the apogee of the modern mind.

DS: It is that. If we believe the naive progress narrative, the default assumption is that AI is net good because it is progress.

But when we question the naive progress narrative, we see that what has gotten better are narrow measures of quality of life, for a small number of beings, over a very short temporal horizon. Lots of beings were harmed that entire time, and the future of all beings is being put at risk by the process. We may venusify this beautiful blue marble of a planet and make it inhospitable to all complex life, so that we could have carriages without horses. For these horseless carriages we made the internal combustion engine, for which oil has been useful for about 125 years. As a side effect of pursuing that, we would venusify the planet, and we can't stop, because we're so invested in using them. Similarly, by the time we've used AI enough that its harmful effects are visible, we won't be able to stop either. But the speed and scale of its effects dwarf everything that has happened before, combined.

e: That comparison is powerful. Fossil fuels have been used for such a short time, and the destruction is extraordinary. But with AI we're now talking about years.

DS: Yes, ChatGPT got to 100 million users in five weeks. The internal combustion engine took a long time to reach 100 million people. And I can do a lot with an internal combustion engine – build trains, tractors, boats – but nothing like all the things you can do with AI, which can do literally everything that human intelligence can do.

With this power I see two possible catastrophic attractors, and I am looking for a third attractor that goes beyond these catastrophic risks. By the first attractor I mean the catastrophes that come from biodiversity loss, species extinction, pollution, climate change and the human migration it causes, the resource wars and the breakdowns of supply chains. The second attractor is a response to that global catastrophic risk: some want to give government more centralized control, without good checks and balances on power, usually in public-private partnerships with the corporations that profit from these activities. The logic is that the way to avoid bad things happening is to be able to control everything, which leads to increasing control dystopias. Right now the probability of catastrophes at a much larger scale is radically increasing, and the probability of such control dystopias is increasing simultaneously. And we want a future that is neither of those two things.

What could be a third attractor? It would be a human civilization with a technosphere and a social sphere that are compatible with the biosphere. It requires us to be safe stewards of the power of our technology, so that we don't destroy the planet or each other but use all of our power for life. We have never done that before in history. So it means a human presence that is unprecedented. There have been pockets of life-affirming cultures that offer precedents, but nothing at this historic scale.

So, the third attractor requires us to have adequate wisdom not just to direct and guide, but to bind the full scope of the technological power that we have, and to be able to meet human needs in a way that is compatible with the biosphere. I do not believe that AI alone can do much for a third attractor, but it can be part of a series of integrated solutions. At the core of this is a much better human collective intelligence and collective wisdom, through which individual people make much better sense of what is true, good and beautiful in the world. It would require much better communicative processes between human beings for collective sense-making and choice-making. AI could support that by processing huge amounts of data without disintermediating humans. It could support a human-centered, wisdom-centered process. Any version of a third attractor has to have that architecture in place.

e: That would mean that human wisdom guides the AI trajectory rather than market values.

DS: Human wisdom needs to guide the human trajectory, which has to include all of our technological developments and applications, including AI. AI has to support good human choice-making, not replace it, and not augment it for the purposes of vested interests. It's really easy to get people to click on ads on Facebook that do not support them in making good choices. In this way, technology is weaponized against a person's own sovereignty; it manipulates their choice-making. Now, can we utilize AI to give people much better processing of data? Imagine that I had structured my ChatGPT so that I asked it to present a particular situation in the news to me from the centroid of the left perspective, then from the centroid of the right perspective, and then to give me additional alternative news perspectives, with the provenance of the information backing up each of those. And then to see whether there is a higher-order synthesis perspective that seems to best reconcile all of that information. If we structured information technology so that it does things like that, it could be very helpful.
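As an editorial illustration of the architecture he describes, here is a minimal sketch of such a multi-perspective reader, assuming the OpenAI Python client; the model name, prompts and function name are illustrative assumptions, not anything specified in the interview.

```python
# Minimal sketch of the multi-perspective news reader described above.
# Assumptions: the OpenAI Python client (pip install openai), an
# OPENAI_API_KEY set in the environment, and an illustrative model name.
from openai import OpenAI

client = OpenAI()

PERSPECTIVES = [
    "the centroid of the mainstream left perspective",
    "the centroid of the mainstream right perspective",
    "notable alternative or independent perspectives",
]

def multi_perspective_brief(news_item: str) -> str:
    """Summarize one news item from several vantage points, asking for
    the provenance of each claim, then attempt a higher-order synthesis."""
    sections = []
    for view in PERSPECTIVES:
        response = client.chat.completions.create(
            model="gpt-4o",  # hypothetical model choice
            messages=[{
                "role": "user",
                "content": (
                    f"Summarize this news item from {view}. "
                    f"Cite the provenance of each claim.\n\n{news_item}"
                ),
            }],
        )
        sections.append(f"[{view}]\n{response.choices[0].message.content}")
    # Finally, ask for a synthesis that tries to reconcile the accounts.
    synthesis = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Is there a higher-order synthesis that best "
                       "reconciles these accounts?\n\n" + "\n\n".join(sections),
        }],
    )
    return synthesis.choices[0].message.content
```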

e: What are the capacities that human beings need to develop in order for this to happen?

DS: We just alluded to a lot of things that would have to happen for a third-attractor civilization, all of which some people are working on, and more people need to be working on. For example, we need to work on mitigating AI risk at the level of hardware, software and policy. We need to work on reversing the crossing of planetary boundaries, like getting pesticides out of the environment. We have to work on a future of governance and political economy that includes collective intelligence and collective choice-making. And we need to ask: What types of wisdom are most relevant to governing exponential tech well, and what are the best ways to develop such wisdom in people? All of those directions need to be pursued. And we all need to engage with them in the ways we are most capable of and inspired by.

e: Can you explain a bit more why you call it a third attractor?

DS: If you think of a watershed on one side of a mountain, the water can take a lot of different paths down, but it's going to end up at the low spot. That low spot is an attractor, and water can reach it in a lot of different ways. So, climate-change migration leading to resource wars is very different from AI risk. Synthetic biotech creating new pandemics is very different from radical wildfires. There are a lot of catastrophes that don't seem to have anything to do with each other, but they're all part of a future increasingly defined by catastrophic breakdown. We call that an attractor: a future state defined by very different things that all have certain properties in common. Powerful nation states gaining ever more control, or very powerful corporations gaining ever more data and control, is another, dystopic attractor. The third attractor would be a state that avoids harmful applications of power and harmful concentrations of power simultaneously, which implies a wise stewardship of power.

e: It seems that this third attractor would need to be grounded in human motivation. What has caused shifts in the blueprint of civilization on this planet were mostly things one would call religious, things that capture human longing. Right now, given how the world is set up in its competitive dynamics, what in the human heart and mind can counter that?

DS: I happen to be right now in the village in the Swiss Alps where J.R.R. Tolkien conceived The Lord of the Rings. In the story there is the One Ring that enables its bearer to rule everybody. In our history, money was the first technology that was like the One Ring to rule them all. The next one with properties like that is AI, because AI is applicable to every other product, service, problem and human goal. Money is intrinsically valueless, but as a proxy for every other type of value it equals optionality for every form of value. It became the only thing that anybody wanted, because it gave the capacity to get everything else that you wanted. The system that generates money, and all the things that money gets you, has co-opted the longing you're talking about, that spiritual yearning for something meaningful. Everybody wants the stuff that their Facebook or Instagram feed or commercials have made them want. So I continue to be part of the machine in which I keep earning the money to get what I want. This is a corruption of human desire.

Longing is the reason I'm using the metaphor of an attractor. There is a desire for a radically better, more beautiful future than the two catastrophic futures we're currently heading towards. The tricky thing is that the current world system driving those catastrophic futures is telling people it's the only way they can get everything they want. It's capturing their desire.

e: It tells them what they should want.

DS: Yes, most people have a kind of Stockholm Syndrome with the dominant system. Stockholm Syndrome is where hostages develop a psychological bond with their captors. The current system doesn't bring people happiness. It doesn't fulfill them. They're lonely, they're on psych meds, they're looking for happiness externally. They suffer from nihilism, existential dread, generalized anxiety, PTSD and addictions, but they still don't want to give the system up.

»There is a desire for a radically better, more beautiful future.«

I do think there is a deeper longing that this system can't fulfill. We need to get honest with ourselves about that. That does end up looking like some withdrawal, or detox. When you recognize the actual state of the world and the time in which you live, you can see what is actually worth wanting and what is meaningful. Then you can ask: How well do the hours of my life reflect that? And if they don't reflect it perfectly, you can reflect on what it would look like if they did, and on how to get from here to there.

e: Might AI help in that?

DS: Maybe marginally, if someone asks their AI chatbot: “I have this much money saved. I want to stop working at the job that I know is bad for the environment. Where in the world could I live on the amount that I have?” ChatGPT can help me do that research. But that requires that I have already made the shift, such that I'm asking these questions.

e: So, this shift is the first step.

DS: If people really think about climate change, the temperature of the oceans and what that portends, and also consider species extinction, chemical pollution, planetary boundaries and the new threat of AI, they'll ask: Could these things plausibly play out within my and my kids' lifetimes? And the answer is: Yes. Then they would ask: Given what I value and care about, what is a sane response that could change the trajectory of the future and that has inherent dignity in it? I hope people take that question seriously. I hope more people start working more deeply on alternate future possibilities and put all of their life force and creative energy into the direction they find most inspiring.

 

This interview was conducted by Elizabeth Debold and first published in the German evolve Magazin no. 40.

Author:
Dr. Elizabeth Debold