In LessWrong-style alignment terminology we generally talk about the gradual disempowerment of humanity as one of the default bad outcomes. The idea here is that AIs will slowly automate more and more jobs and slowly take over the economy. Ultimately, the concerted economic influence and decision-making power of humanity will decline until all of humanity ends up at the whims of whatever the AIs decide.

This is true and certainly very bad from the viewpoint of a unified ‘humanity’, but my claim is that the gradual disempowerment scenario is essentially the lived experience of almost all individual humans anyway, and that the transition to an AI-managed world and civilization, and the consequent disempowerment, might make little difference to most people in practice. The median human (indeed the vast majority of humans) is already almost totally disempowered today. Disempowerment has been the default state for almost all humans throughout the entirety of human history. Consider that you are some median human somewhere:

You already have no control over ‘the economy’ – you work some median job for which there is a rough supply/demand equilibrium. If you do not work the job, somebody basically identical will do it instead. Your job has no great world-historical impact; it just makes some miscellaneous part of the economy marginally better. Maybe.

When you buy goods and services, almost all goods trade in markets so liquid that your purchase has effectively zero impact on the price. Similarly, all your purchases are too minor to affect the bottom line of any particular company in more than an infinitesimal way.

The economy has sudden booms and sudden busts. You generally have no real influence on how and when these happen, nor any understanding of why they do. Sometimes these busts take away your job and decrease your quality of life seemingly at random. Fundamental quality-of-life questions such as whether you can afford to buy a home, raise a family, get healthcare, go on holidays, or have a meaningful job depend, at best, on decisions made by big-shot bankers and bureaucrats in the big city who have zero reason to care about you and over whom you have no influence; at worst they are made by mysterious ‘global economic forces’ over which nobody has any control or understanding at all.

If you live in a democracy you can theoretically vote for different politicians, but the marginal vote has only an infinitesimal impact. The vast majority of votes make no difference at all, since they are cast not in swing districts but in gerrymandered solid districts for party A or party B. On the off chance that you are a swing voter and do impact the election, the politician you help elect probably ignores everything they promised to do anyway and just does whatever the donors, or ‘the system’, or the general beliefs of the politician class want them to do – the politician-voter alignment problem is extremely far from being solved!

After work you come home, sit down, and consume media. You do not create any media, nor do you have any particular insight into the networks and technologies that created this media and beamed it to your phone. What you watch is chosen somewhat by you but largely by recommendation algorithms that you do not understand and whose deployment you have no influence over. The media ecosystem is so vast that where you choose to deploy your marginal attention has an infinitesimal impact on the revenue of large media or tech companies. The ‘culture’ goes through periodic fashions that you passively follow but do not create and cannot modify, and you have no real understanding of how or why they happen.

You have no interaction with, nor any influence on, the development of technology. Science happens far away in universities where very smart people do whatever it is that they do. You have never read, let alone written, a scientific paper, nor do you really know what they are. You have no means to influence what technologies get developed or when/how they get deployed except by (a) buying or not buying the technology or (b) voting for politicians who may or may not regulate its deployment. But from above we know that your impact via both of these routes is basically non-existent. You live in a world of technology but have almost zero understanding of the underlying principles by which any of it works.

Technological and cultural change happens regardless of whether you want it to happen or whether it impacts you positively or negatively. Nobody ever asked you, or listened to you, about whether you would like your job to be automated or offshored to some other country, whether you would prefer to have the internet and smartphones and social media or not, whether people should start believing or disbelieving your religion, whether you like or dislike mass migration to your community, or whether you like or dislike the rapid advancements in AI, and so on. These are simply things that happen. You can react to them to some degree but are almost entirely powerless to shape these forces.

If you have children you have, at best, only a moderate impact on their beliefs and actions, and a rapidly diminishing impact as generations go by. Many (most?) children end up significantly diverging from their parents’ beliefs and generally follow some intersection of the global culture (over which you have only a vanishing influence) and the idiosyncratic factors of their personality.

And finally, of course, you have no control over your ultimate biology. Your intelligence, personality, physical traits, and propensity to various diseases are largely determined genetically before you are even born, and you are ultimately doomed to age, wither away, and eventually die, regardless of what you do.

When written out like this, it all seems exceptionally dystopian and hopeless, but in practice the median person is reasonably happy. They often enjoy their work and social engagements and find deep meaning in their families and hobbies, even though the impact of such efforts on the global trajectory of humanity is minimal. Generally, the vast majority of people can find enjoyment and meaning in life without being ‘the best’ in an absolute sense and without needing to make a world-historical impact. Maximizing/totalizing desires for things like control of the lightcone basically do not exist for the overwhelming majority of humans.

In general, even today, when ‘humanity’ is still theoretically in charge, these changes and developments are often influenced by only a tiny fraction of all of humanity. Even if you end up having influence on some specific component, you are almost always just as powerless on almost all other factors. If you are a powerful politician you may be able to impact some aspects of the economy or government policy but are likely helpless before technological or cultural change[1]. If you are a big tech billionaire, you likely have some counterfactual impact on technological change and how it is deployed but little influence over cultural trends or even global economic conditions. If you are a cultural icon of some sort then you have some means of influencing the development of culture but little ability to shape global economic trends or technological change, and so on. There is no ‘world dictator’ today who can control all of these things simultaneously; largely, humanity today simply follows the entropic slope defined by the emergent logic of massive multi-agent systems.

Even in terms of ‘values’, humanity today has no working mechanisms to ensure that these values are transmitted faithfully down the generations. The ‘human values’ of today are dramatically different from those a hundred years ago, let alone a thousand or ten thousand years ago. Across almost all areas and times, total disempowerment is the norm, not the exception[2].

If almost total disempowerment is the norm, we should not necessarily expect the median human to find it that bad that the source of their disempowerment has switched from other humans to AIs. As a median human, you already have no connection to, or impact on, scientific advancement and technological change. Why does it matter to you if science and technology are driven forwards by human scientists at a university or AI scientists in a datacenter? Why does it matter to you if the engineers designing your media recommendation algorithms at big tech are human or AI engineers? When your government is an unresponsive bureaucratic behemoth following its own incentives and disregarding almost all voter input, why does it matter whether the individual bureaucrats are humans or AIs? Why does it matter to you whether your economic fortunes are decided by human bankers in glitzy elite city penthouses or by AI high-frequency trading systems or AI managers at large AI corporations?

In some ways, assuming that the disempowered humans can still access some fraction of economic output, either via rents from owned capital or via redistribution, the AI economy could be a massive boon. Almost all work would end except ‘for fun’. The massive advances in technology would undoubtedly cause big improvements in quality of life, especially in biotechnology, entertainment, and the general post-scarcity of many material goods. Even if only moderately aligned, the quality of AI governance may still be substantially better than human governance, which is itself very often badly misaligned with the welfare of the median person.

Although, as I’ve written before, initial human capital ownership won’t prevent disempowerment in a global sense, this does not mean that, in some slow-takeoff multi-agent scenarios, having initial capital won’t enable individual humans and their descendants to maintain a potentially high quality of life for a long time off the rents returned from that capital. Even without large amounts of initial capital, it seems plausible that an individual’s quality of life would dramatically improve from transfer payments and the general rising tide of dramatic technological and scientific advances.

Look again at the industrial revolution parallel. The descendants of aristocrats still do pretty well today at an individual level, although they have lost almost all of their institutional power and have been ‘disempowered’. They are still richer, on average, and often have cushy trust funds and nice family houses in the country. This allows them to sustain a quality of life which is likely much higher than their non-disempowered aristocrat ancestors could achieve – would you prefer to have political/economic power but still die of smallpox, or much less power but modern amenities such as electricity, modern medicine, air travel, etc.? The situation of the poor is also vastly better today than in e.g. the 18th century. In developed countries, the poor almost all have access to government-provided food, healthcare, and housing which is vastly superior to what their 18th-century equivalents had access to[3].

Zooming even further out, consider the far future. What kind of resources could the disempowered remnants of humanity expect? The key thing to realize here is that space is big. Really big.

It is estimated that there are 200 billion to approximately one trillion galaxies within the observable universe. Of these, a large fraction are reachable for colonization if we get a move on. If we imagine our AI civilization rushing to colonize these galaxies, it will eventually (and rapidly, as a fraction of the stelliferous era) become a universe-spanning civilization full of multifarious intelligences, some of which may span entire galaxies tiled with computronium while others will be much more computationally bounded, all the way down to potentially some descendants/remnants of biological humanity. This is an almost unimaginable cosmic bounty, far beyond anything we can concretely imagine today.

If humanity received the equivalent of even a billionth of these stars for its own use, this would amount to hundreds of galaxies’ worth of mass-energy, enough to support trillions of humans who can all live lives of material abundance – either in reality or in indistinguishable virtual worlds. If this happens, then the future will hold trillions upon trillions of humans and transhuman descendants existing in a sprawling intergalactic civilization with effective immortality, post-scarcity for basically everybody, and complete uploading. This situation would last for billions of years, at least until the end of the stelliferous era and likely far beyond.
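
As a rough sanity check on the ‘hundreds of galaxies’ figure, here is a minimal back-of-envelope sketch in Python. The galaxy count, the stars-per-galaxy average, and the one-billionth share are all loose assumptions, not established figures:

```python
# Fermi estimate of humanity's hypothetical one-billionth share of the stars.
# All inputs are rough assumptions.
GALAXIES = 2e11          # low-end estimate of galaxies in the observable universe
STARS_PER_GALAXY = 1e11  # rough Milky-Way-like average
HUMAN_SHARE = 1e-9       # the 'one billionth' share assumed above

total_stars = GALAXIES * STARS_PER_GALAXY             # ~2e22 stars
human_stars = total_stars * HUMAN_SHARE               # ~2e13 stars
equivalent_galaxies = human_stars / STARS_PER_GALAXY  # ~200 galaxies

print(f"{human_stars:.0e} stars, roughly {equivalent_galaxies:.0f} galaxies' worth")
```

Even at the low end of the galaxy estimates, a billionth share still comes to a couple of hundred galaxies’ worth of stars.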

How much of this can humanity feasibly retain even in a gradual disempowerment scenario? This is highly uncertain. In some scenarios, humanity could retain zero. These mostly cluster around singleton scenarios where all resources fall into the hands of some dominant superintelligence. If this happens, the singleton has no incentive to give anything whatsoever to humanity unless it is successfully aligned. If a highly centralized singleton world is expected, therefore, then perfect alignment (and maintenance of that alignment) of the singleton is of paramount importance.

However, in a more distributed multi-agent slow-takeoff world, things can differ. Here, there is likely to be a large population of AIs with varying goals and degrees of alignment or willingness to charitably support humans. Beyond the alignment of individual AIs, there may be other reasons for the AI civilizations that humanity spawns to spare some resources for biological humans or their descendants. Firstly, it is likely that, at least during the initial phase, the AI economy will operate along similar lines to the existing human one, and respecting existing property rights is a general Schelling point for any AIs transacting in this economy. This means that existing human property rights may be respected, although they will rapidly diminish in importance compared to the overall size of the total economy. Ironically, this shrinking share becomes a good thing: it reduces the incentive for AIs to coordinate to expropriate humans’ property, since as a fraction of the whole economy human holdings fade into insignificance.

Secondly, some degree of charity and care for the lives of other intelligent beings could even be instrumentally useful to demonstrate cooperativeness to other AIs, much as virtue signalling is important among humans, and for the same reason. Caring for and supporting actually existing humans might be a reasonable Schelling point here, both due to their antiquity at that point and the fact that the AI civilization was ultimately birthed from humanity and thus has long links to the past which AIs would not forget. This is similar to how, if e.g. Neanderthals were still around today, it seems unlikely that modern-day society would just exterminate them rather than e.g. caring for them, keeping them in specially designed preserves, or simply giving them generous welfare despite very poor integration into modern society. Other motivations would be simple curiosity about biological life, or some kind of intrinsic respect or positive feeling towards humanity instilled by alignment training or simply by humanity’s historical role.

Crucially, not all AIs would have to be aligned for this to happen. Keeping large numbers of biological humans around is extremely cheap for any serious intergalactic civilization. Especially if the humans themselves are uploaded into virtual worlds, simply maintaining and running the mind-patterns of billions of humans could easily be a hobby project of a single Jupiter-brain, taking only a tiny fraction of its resources.
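
To make ‘a tiny fraction of its resources’ concrete, here is a loose Fermi sketch; the per-brain emulation cost, the population, and the Landauer-limited compute budget are all assumptions, and FLOPs are crudely equated with irreversible bit operations:

```python
import math

# All inputs are loose assumptions for a Fermi estimate.
FLOPS_PER_BRAIN = 1e16    # one common (contested) estimate for real-time brain emulation
N_HUMANS = 1e10           # ~10 billion uploaded minds
STAR_OUTPUT_W = 4e26      # total power output of a Sun-like star
K_B, T = 1.38e-23, 300.0  # Boltzmann constant (J/K), operating temperature (K)

# Landauer limit: minimum energy per irreversible bit operation at temperature T.
j_per_op = K_B * T * math.log(2)           # ~3e-21 J
budget_ops = STAR_OUTPUT_W / j_per_op      # ~1e47 ops/s for a star-powered computer

humanity_ops = FLOPS_PER_BRAIN * N_HUMANS  # ~1e26 ops/s for all of humanity
print(f"fraction of one star's compute budget: {humanity_ops / budget_ops:.0e}")
```

Even if the hardware fell ten orders of magnitude short of the Landauer limit, running all of humanity would still consume only about a trillionth of a single star’s budget.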

One very uncertain approach is to consider the fraction of GDP rich societies spend today on e.g. welfare, charitable giving, foreign aid, or even caring for animals or the environment. These range from approximately 30% of GDP (welfare) to 0.1% (foreign aid), and potentially less for animal or environmental causes (although given how many people personally donate to these it could be more). Of course, we cannot just naively extrapolate these numbers given the absolutely titanic changes expected in the transition from first-world human society to post-singularity AI civilization; however, these numbers are vastly higher than the one-billionth (0.0000001%) share we were working with before, which gives a lot of room for error.
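
The gap is easier to see as orders of magnitude; a trivial sketch (the category fractions are just the rough figures quoted above):

```python
# Compare today's rough transfer fractions of GDP with the one-billionth share above.
shares = {"welfare": 0.30, "foreign aid": 0.001, "billionth share": 1e-9}
for name, frac in shares.items():
    print(f"{name:>16}: {frac:.0e} ({frac / 1e-9:,.0f}x the billionth baseline)")
```

Even the stingiest line item here, foreign aid, sits about six orders of magnitude above the billionth baseline.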

Whether legacy humans would receive any resources in the long run of a post-singularity society depends in large part upon the degree of competitiveness in the AI society. If the super-distributed AI economy is actually intensely competitive, then this competition will slowly squeeze out any resources given to humanity. Almost all analyses of the multi-polar world assume some kind of almost perfect Malthusian competition among AIs; however, this is far from certain. Certainly AIs are potentially hyper-replicators, in that copying an AI’s ‘brain’ might be trivial (note this is only true for ‘small’ AI minds and not for large ones[4]). However, AIs are also potentially super-cooperators, capable of speed-of-light telepathy and mind-merging and expected to be of very high intelligence, who should be very capable of avoiding ruinous commons-burning Malthusian competition amongst themselves.

The structure of the universe itself also somewhat militates against intense universal competition, since the mass distribution of the universe is highly spiky, with very concentrated points of mass and energy – e.g. stars and galaxies – surrounded by massive voids. The speed of light also prevents rapid transport (even of minds/information) between these points. This makes space warfare mostly defense-dominant, which implies that within a concentrated region of space there is likely potential slack for pursuing non-competitive ends. The speed of communication also places strong limits upon the size of minds but allows for immense centralization within small dense regions. One possible equilibrium is simply separate AI minds, one per star, each of which has unified all the minds within its solar system’s worth of mass/energy and maintains only occasional long-range interactions with other minds, interactions which take centuries or millennia per round of communication compared to the incredibly rapid pace of their own thought. These minds would then have immense amounts of slack to pursue whatever they want in their own local space.

Moreover, one update I have made since my previous thoughts on AI evolution is that evolution is a weaker attractor than I thought, even in a highly multipolar world, because the ‘mutation rate’ of AI mind-copying can be brought to almost zero. AI ‘reproduction’ means either copying oneself or designing one’s successors. Assuming alignment can be solved, either by us or by our AI creations (which also have ample motivation to solve it), there is vastly more lock-in of values and intrinsic mind-type than is true of biological reproduction, which massively reduces the surface area for evolution to take hold. It seems very plausible that within a small, highly networked region, a large number of AIs could coordinate to prevent this kind of runaway competition and would be highly motivated to do so, since Malthusian competition hurts their own values as well as humans’.

Ultimately, it is very hard to say for sure what the cooperation-competition axis will look like for a future multi-polar post-singularity AI society. However, there are certainly worlds where humans manage some degree of alignment, and inter-AI hypercooperation then preserves enough slack for humanity to still access some level of resources, which could be extremely large given the size of the cosmic bounty. Apart from singleton-utopia style scenarios, it is hard to imagine realistic post-singularity scenarios which look better than this for existing and future biological humans.

  1. Individual governments today seem generally pretty powerless in the face of global trends and seem to have immense difficulty doing anything but sliding towards the entropic attractor. See e.g. the sudden rise of ‘woke’ ideology in the 2010s followed by the rise of ‘MAGA/alt-right’ ideology a few years later; the mass post-covid immigration spike occurring simultaneously across all anglosphere countries; the continual global rise in housing prices; the periodic swings, almost uniform across many different countries, towards or away from social and/or economic liberalism; the increasingly unsustainable slow-brewing entitlement crisis across all developed-country governments; or the seeming inability of governments to have any real impact upon fertility rates in both developed and developing countries. 

  2. The exception is today, where, through the possibility of incredible centralization of power via a singleton, expansion across the universe, and high-fidelity mind-preservation, there is a small chance of the faithful preservation and spread of ‘our values’ throughout the universe. Whether this needle can be threaded successfully remains to be seen. 

  3. And none of this is funded by the poor living off rents from 18th-century investments, but essentially by the charity of a vastly richer society. 

  4. For instance, consider an actual Jupiter-brain AI. Its ‘brain’ is a vast constellation of computing hardware at the solar-system scale which would be highly nontrivial to copy. Even if you don’t care about running the AI but simply want to copy its mind, that mind could plausibly amount to a Dyson sphere’s worth of computronium in bytes. It could not be copied without constructing a similarly sized Dyson sphere elsewhere, and even then interstellar transmission of this much data would be a very slow undertaking. If an individual AI is ‘small’ relative to the computing infrastructure of its host civilization then it can easily be copied, but not if it is ‘large’. The most powerful computational AIs might end up sitting near the centres of galaxies, where it is easy to amass many star systems’ worth of computronium, and from there would be highly immobile and unable to trivially self-replicate. Certainly they could ‘shard’ parts of their mind off and replicate these more widely, and indeed it is an open question to what extent such a gigantic mind would be ‘unified’ at all rather than modular. If you actually sit and think about the logistics of e.g. a Jupiter-brain, you realize that the communication demands are significantly harder to meet than the computational ones. The speed of light puts fundamental, albeit large, limits on the size of a ‘synchronous’ mind. The human brain produces a unified consciousness with communication between brain regions taking approximately 40ms. If we assume 100ms is acceptable for synchronous communication, light-lag limits a synchronous mind to a region roughly 30,000 km across (on the order of 10^13 km^3, a bit over two Earth diameters). This is, of course, an absolutely massive volume of compute. However, it is significantly smaller than a full Dyson sphere. This also creates a fundamental tradeoff between the computational intensity of a synchronous thought and its speed. If an AI thinks at the speed of a human (a few Hz), it can pull together compute from a region on the order of 100,000 km across (many Earth diameters). If it wants to think coherently at GHz speeds, it can only synchronize across approximately 0.3m, conveniently close to the size of a modern GPU wafer. The fully synchronous thoughts of the largest AIs, meanwhile, would move very slowly even by human standards. A fully synchronous ‘thought’ utilizing the power of a full Dyson sphere would take on the order of fifteen minutes to fully propagate (light needs about eight minutes just to cross 1 AU), restricting the AI to thinking thousands of times slower than a human does. Obviously, though, the full computational power these ‘thoughts’ contain would utterly dwarf anything we could hope to compute today. In practice, such a mind would be very unlikely to be fully synchronous. Instead it would contain many subregions which are highly interconnected and physically colocated, operating at much faster speeds, with the information processed by these regions pooled together only occasionally, as needed. Interestingly, the human brain uses a similar hierarchical small-world topology for its own communication, since it faces analogous delay issues arising from neurons’ much slower signal propagation. For the AI, these are upper bounds based on the speed of light alone, not accounting for other inevitable communication delays or processing overheads.
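
A minimal sketch of the light-lag arithmetic in this footnote; the ‘one light-crossing per thought’ rule and the specific frequencies are simplifying assumptions:

```python
C_KM_S = 299_792  # speed of light in km/s

def max_synchronous_diameter_km(thought_hz: float) -> float:
    """Largest region whose far edges can exchange one light-speed signal
    per 'thought', ignoring processing overhead and routing delays."""
    return C_KM_S / thought_hz

for label, hz in [("human-ish ~3 Hz", 3.0), ("10 Hz", 10.0), ("1 GHz", 1e9)]:
    print(f"{label:>15}: {max_synchronous_diameter_km(hz):.3g} km across")
# ~1e5 km, ~3e4 km, and 3e-4 km (i.e. 30 cm) respectively.

# Reverse direction: a 'thought' spanning a Dyson sphere of radius 1 AU.
AU_KM = 1.496e8
crossing_minutes = 2 * AU_KM / C_KM_S / 60
print(f"light crossing a 1 AU-radius sphere: {crossing_minutes:.1f} minutes")  # ~16.6 min
```

At ~16 minutes per fully synchronous thought against a few tenths of a second for a human one, the ‘thousands of times slower’ figure above falls straight out of the division.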