Infrastructure for the 3rd Attractor
I would like to propose some theory and, from it, sketch some infrastructure for the 3rd attractor. KTiffany and I have been writing software for that infrastructure, so I will also discuss it.
Coordination failures
A good starting place is the wonderful work Daniel Schmachtenberger has done on how the vast number of global catastrophic problems can be seen as symptoms of just a handful of systemic ones. In a YouTube discussion with Kevin Owocki, Schmachtenberger described a vicious feedback loop in which a poorly planned response to a catastrophe not only leads to more catastrophes, but also to knee-jerk legislation that increases dystopian government, which in turn leads to still more catastrophes. It is a cycle in which the processes intended to fix individual problems, such as climate change or out-of-control AGI, end up reinforcing those same problems. Among the economic and game-theoretic patterns he mentions are perverse incentives, the way our system rewards offloading hidden or "external" costs onto society, and "multipolar traps". On top of all that, there are bad actors who muck up the works, sometimes just for fun or power.
Underlying structure of the problems
If we set aside the problem of bad actors for now, and assume that most people want to take the best action as long as they are safe and thriving, we can identify several impediments to that happening.
Communication Bandwidth
No one has time to hear the stories and motivations of everyone else. Yet for people willing to cooperate, some level of this communication is necessary.
Viewpoint Translation
Often, people who want to coordinate are unable to hear other people's stories because the perspective or presentation is interpreted as an attack. This, of course, triggers the fight-or-flight response and shuts down all communication. Media outlets that profit by exploiting this phenomenon make it worse, and it becomes especially complex when we consider the amazing diversity of people and thought on our planet.
Cognitive Overload
No one has the ability to process all of the technical information necessary to make an informed decision. This means that people often have to choose a person or institution they trust and hold fast to that source. Of course, the source is also unlikely to have evaluated the various claims thoroughly and without bias. Even in scientific communities there is often short-term groupthink.
If we could solve those three issues
These three issues can be seen as underlying the negative social patterns mentioned above: perverse incentives, the incentivized externalizing of costs, and multipolar traps.
In one big, oversimplifying sentence: if each person 1) could hear all the relevant stories about other people's needs and motivations, 2) could correctly interpret them despite the vastly different points of view from which some of them come, and 3) had the knowledge, mental power, and time to work through all the logic and all the suggestions, and if they were not bad actors and were willing to work with others as long as their own situation was taken into account, then the main problems left unsolved, ceteris paribus, would be sudden resource shortages and problems stemming from bad actors and people who represent their needs in bad faith.
But that covers most of our problems. And the ability to communicate at such high resolution would provide a strong foundation for attacking even the ones that remain.
Automating communication and analysis
Imagine software that could hear people's issues, understand the different perspectives, and, using high-resolution models of the real world, do the mathematics and run the simulations needed to analyze data and competing theories, validating or suggesting courses of action that work for everyone. Software that can point out who benefits from a law or practice and who is harmed.
We could quickly debunk (or confirm?) controversial opinions. We could loudly publicize data on who is causing harm to the environment or the social system. We could scan for laws that enforce unfair or oppressive power structures. We could definitively identify news outlets that consistently skew the news. We can do all of that now, but the difference is that it could be publicly definitive and, if done right, universally trusted.
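To make the "who benefits, who is harmed" idea concrete, here is a minimal sketch, assuming a toy model in which a policy's effects are just (group, impact) pairs. The Effect class, the policy data, and the impact units are all invented for illustration; real Slipstream models would be far richer causal structures.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    """One hypothetical effect of a policy on one group."""
    group: str
    yearly_impact: float  # positive = benefit, negative = harm (arbitrary units)

def winners_and_losers(effects: list[Effect]) -> tuple[list[str], list[str]]:
    """Split affected groups into beneficiaries and harmed parties."""
    winners = [e.group for e in effects if e.yearly_impact > 0]
    losers = [e.group for e in effects if e.yearly_impact < 0]
    return winners, losers

# Example: a made-up subsidy policy.
policy_effects = [
    Effect("large agribusiness", +3.2),
    Effect("smallholder farms", -1.1),
    Effect("downstream water users", -0.4),
]

winners, losers = winners_and_losers(policy_effects)
print("benefits:", winners)
print("harmed:  ", losers)
```

Even this toy version shows the shape of the query: once effects are in a formal model, "who wins and who loses" becomes a mechanical question anyone can re-run.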
Some problems and possible solutions
What if one group does not trust the models given? One solution is to ensure that the system is truly decentralized and distributed. In addition, it should be easy to fork the system and the models. However, distributions should constantly compare models and search for (non-point-of-view) contradictions and other issues, such as models justified by too many ad hoc hypotheses, and loudly flag what they find. For example, someone may try to make a distro that 'believes' the world is flat (substitute a crazy political opinion here). But the models won't cohere, and if they are made public, all the other distros will be loud about it. And since the 'believers' made the models themselves, they cannot easily say they don't trust the models.
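As a minimal sketch of how distros might cross-check one another, assume each distro publishes a dictionary of claims with truth values. The claim names, the distro contents, and the find_contradictions helper below are hypothetical, not part of the actual protocol.

```python
def find_contradictions(distro_a: dict[str, bool], distro_b: dict[str, bool]) -> list[str]:
    """Return claims that both distros state but assign opposite truth values."""
    shared = distro_a.keys() & distro_b.keys()
    return [claim for claim in shared if distro_a[claim] != distro_b[claim]]

# Two hypothetical published model snapshots.
reference = {"earth_is_approximately_spherical": True, "vaccines_reduce_mortality": True}
fork      = {"earth_is_approximately_spherical": False, "tides_follow_lunar_cycle": True}

for claim in find_contradictions(reference, fork):
    # In a real network, every distro would broadcast such findings loudly.
    print(f"contradiction on: {claim}")
```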
Another problem: won't just academics or some other elite group use the system? Most people do not know that they should care. A proposed solution: use the powerful software to solve everyday issues for users, so that it acts as a super, AI-enhanced operating system. Have it actively try to care for the user, but without causing systemic harm. If the software is an order of magnitude better and easier than, say, the "buttons for apps" GUI of smartphones, manufacturers will need to ship it on their phones and other devices out of the box.
A common objection is that this seems like a dangerous, monolithic solution. It is designed to be the opposite. It is open, and there are no single points of failure. It is like saying the Web is monolithic because it is so widely used. It kind of is, but not in a bad way. Furthermore, consider how hard it would be for the system to go bad. Someone may get their own computer to go rogue or try to become Skynet, but all the devices around it would notice, react, and isolate it. There are even protections designed to make it extremely hard for a rogue nation-state to enforce an evil version.
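A hedged sketch of the "neighbors isolate a rogue device" idea, assuming a simple quorum vote: the two-thirds threshold, the vote format, and the should_isolate helper are my assumptions, not the actual protection mechanism.

```python
def should_isolate(node_id: str, anomaly_votes: dict[str, set[str]], quorum: float = 0.66) -> bool:
    """Isolate a node if at least `quorum` of its peers flagged it as anomalous."""
    peers = set(anomaly_votes.keys()) - {node_id}
    if not peers:
        return False
    flags = sum(1 for p in peers if node_id in anomaly_votes[p])
    return flags / len(peers) >= quorum

# Each peer reports the set of nodes whose behavior diverged from shared models.
votes = {
    "dev-a": {"dev-x"},
    "dev-b": {"dev-x"},
    "dev-c": set(),
    "dev-x": set(),
}
print(should_isolate("dev-x", votes))  # True: 2 of 3 peers flagged it
```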
Perhaps the biggest problem is that the models are in a formal language, and most people will not learn it. But it turns out that the problem is not so bad, because the formal language (or other compatible languages people could create) maps easily to complex natural language, even academic language. So if you speak a language at all, your voice can be heard.
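To illustrate why a formal statement can map to ordinary speech, here is a tiny, assumed template scheme. The relation names and the verbalize helper are invented; the real mapping would need to handle far richer language in both directions.

```python
# Hypothetical mapping from formal causal relations to natural-language templates.
TEMPLATES = {
    "causes": "{a} tends to cause {b}.",
    "prevents": "{a} tends to prevent {b}.",
}

def verbalize(relation: str, a: str, b: str) -> str:
    """Render one formal statement as a natural-language sentence."""
    return TEMPLATES[relation].format(a=a, b=b)

print(verbalize("causes", "Chronic stress", "poor sleep"))
# -> "Chronic stress tends to cause poor sleep."
```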
Bad Actors
The problem of bad actors is one that, despite having thought and read a lot about it, I do not feel qualified to propose a solution for. Instead, I propose that as soon as there are sufficient models in the repository, we do a collective simulation of the issue and of proposed solutions.
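As a hint of what such a collective simulation could look like, here is a deliberately tiny agent-based sketch in which bad actors exaggerate their reported needs and we measure how much they distort the aggregate. Every parameter here is an assumption for illustration only.

```python
import random

def simulate(n_agents: int = 100, bad_fraction: float = 0.1, seed: int = 0) -> float:
    """Return how far bad actors push the reported average above the honest one."""
    rng = random.Random(seed)
    reports = []
    for _ in range(n_agents):
        true_need = rng.uniform(0, 1)
        is_bad = rng.random() < bad_fraction
        reports.append(true_need * 3 if is_bad else true_need)  # bad actors exaggerate
    honest_mean = 0.5  # expected mean of uniform(0, 1) true needs
    return sum(reports) / len(reports) - honest_mean

print(f"aggregate distortion: {simulate():+.3f}")
```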
The Slipstream
Those of us who have been working on this software infrastructure call it The Slipstream. But because it is a user-side infrastructure, not a platform, it isn't really one thing. It's the Web, it's Web3, it's (d)apps, managed and made easy by a safe, ethical AI. And it is a lot of humans, deeply communicating through their devices to make their confusing world work for them without causing problems for others. And for many, it is about helping the world.
AI based on Causal Inference Models
The AI of the Slipstream is based on representing and using knowledge stored via "Causal Inference Models", or CIMs, where knowledge can be defined as information plus a formal representation of the meaning of that information. If the information is accurate and the meaning correct, the knowledge is considered true. Rather than using logic or math, where inference proceeds from statements to other statements, inferences are made from information to other information. That provides a vastly more powerful inference engine that can, for example, interpret photos, analyze evidence, consider whether someone is lying, and so on. It can interact via natural language to learn about people and the world. Models can be made to facilitate education, medical treatment, self-improvement, and more. We use CIM-based AI because it is transparent, predictable, and safe.
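A minimal sketch of what a CIM might look like under the definition above: claims carry both raw information and a formal meaning, and inference chains from observed information to further information. The Claim and CIM classes and the link structure are my assumptions, not the actual Slipstream schema.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """Knowledge = information plus a formal representation of its meaning."""
    information: str          # the raw observation or statement
    meaning: str              # formal label giving the information its meaning
    confidence: float = 1.0

@dataclass
class CIM:
    # causal links: a meaning label -> meanings it supports inferring
    links: dict[str, list[str]] = field(default_factory=dict)

    def infer(self, observed: list[Claim]) -> list[str]:
        """From observed information, infer further meanings via causal links."""
        inferred, frontier = set(), [c.meaning for c in observed]
        while frontier:
            m = frontier.pop()
            for consequence in self.links.get(m, []):
                if consequence not in inferred:
                    inferred.add(consequence)
                    frontier.append(consequence)
        return sorted(inferred)

# Toy model: wet streets suggest recent rain, which suggests earlier clouds.
model = CIM(links={"wet_streets": ["recent_rain"], "recent_rain": ["clouds_earlier"]})
obs = [Claim("the streets are wet", "wet_streets")]
print(model.infer(obs))  # ['clouds_earlier', 'recent_rain']
```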
The models are designed to be edited and stored somewhat like a formal, distributed Wikipedia. Different "distros" can be made with different standards for contributions, but the reference implementation is intended to have academic-like standards with automated "peer" review, where the peers are other computers.
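Here is a minimal sketch of automated "peer" review, assuming a reference distro that checks for cited sources, well-formed entries, and contradictions with already-accepted claims. These checks and the review_contribution helper are guesses at the kind of rules such a distro might enforce, not the actual review pipeline.

```python
def review_contribution(entry: dict, existing_claims: dict[str, bool]) -> list[str]:
    """Return a list of objections; an empty list means the entry is accepted."""
    objections = []
    if not entry.get("sources"):
        objections.append("no sources cited")
    claim, value = entry.get("claim"), entry.get("value")
    if claim is None or value is None:
        objections.append("malformed entry: needs 'claim' and 'value'")
    elif claim in existing_claims and existing_claims[claim] != value:
        objections.append(f"contradicts accepted claim: {claim}")
    return objections

accepted = {"vaccines_reduce_mortality": True}
submission = {"claim": "vaccines_reduce_mortality", "value": False, "sources": ["blog post"]}
print(review_contribution(submission, accepted))
# -> ['contradicts accepted claim: vaccines_reduce_mortality']
```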
The open-source software for all of this is, after nearly a decade, in alpha testing. Our next phase is to crowd-source the digitizing of foundational knowledge. This will provide the groundwork for adding medical knowledge (yay for deSci! But our focus is social.), digitizing legal systems (to analyze them for the structures that keep us in the Metacrisis), aiding education, and so on. This phase cannot be done by our tiny team. Please bring us into your communities so we can make this a world project instead of a bunch of separate teams each writing AI infrastructure for a new social ecosystem.
Also, give us your most skeptical questions and ideas to help us join you and meet your needs! You can find more info and links to GitHub at https://theslipstream.com