Abstract
This article examines whether human extinction brought about by a “value-misaligned” artificial superintelligence (ASI) would be bad, and for what reasons. The question, I contend, is deceptively complex. I proceed by outlining the three main positions within Existential Ethics, i.e., the study of the ethical and evaluative implications of human extinction. These are equivalence views, further-loss views, and pro-extinctionist views. I then show how exponents of each view would evaluate a scenario in which humanity goes extinct due to ASI. Although there are some points of agreement, these three positions diverge in significant ways, most of which have not been adequately explored in the philosophical literature.
Keywords:
- human extinction
- existential ethics
- artificial superintelligence
Introduction
Some theorists argue that artificial superintelligence (ASI) could cause our extinction. Toby Ord estimates a ~1-in-10 chance of “unaligned artificial intelligence” causing an existential catastrophe within the next 100 years, where one type of existential catastrophe is human extinction (1).[1] Eliezer Yudkowsky puts the probability of annihilation much higher, at above 95% (2). In a 2023 article for Time, he argued that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die” (3). Many leading figures within the ongoing race to build ASI also admit that extinction is a possible outcome. Sam Altman wrote in 2015 that the “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” (4). During an interview that same year, he declared that advanced “AI will … most likely sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning” (5). I have catalogued similar statements from notable AI theorists elsewhere (6,7).
The question of this paper is the following: if ASI were to kill everyone on Earth, would that be so bad? The answer might seem obvious — of course the mass murder of everyone on Earth would be very bad! Only misanthropic ghouls and deranged sadists would suggest otherwise. Yet among those who would answer affirmatively, there is considerable disagreement about why exactly an ASI extinction event would be bad (or wrong). The aim here is to explore these disagreements and, in the process, provide some conceptual clarity to this deceptively complex issue, which lies at the heart of what I call “Existential Ethics,” i.e., the study of the ethical and evaluative implications of human extinction.
This is a topic that, in my view, bioethicists have not adequately examined. On the one hand, what if the creation of superintelligent computers really does pose a threat to our collective survival? Should we not have a clear and compelling answer to why our disappearance would be bad or wrong — or perhaps good and right? My view is that, at present, philosophers lack a robust theoretical framework for providing nuanced answers to this question. On the other hand, I would contend that questions about whether human extinction would be right or wrong, good or bad, better or worse fit rather naturally into the field of bioethics, given that the overwhelming source of extinction risk today ostensibly stems from advanced technologies (e.g., synthetic biology, nuclear weapons, and possibly ASI) rather than natural phenomena (e.g., asteroids, volcanic super-eruptions, and gamma-ray bursts). A modest aim of this paper is to encourage more vigorous debate about this topic among bioethicists, and to do this by applying the theoretical framework that I have developed elsewhere to the particular case of ASI (8).[2] In previous publications, I have delineated this framework in abstract terms; this study uses that framework to analyze the supposed threat posed by superintelligence in more concrete detail.
First, I outline the three main positions within Existential Ethics and then examine why human extinction caused by ASI might be bad — or perhaps good — from the perspective of these three positions.
Three Main Positions Within Existential Ethics
Imagine that we build a human-level AI that recursively self-improves to become an ASI. The information processing speed of this ASI would be millions of times faster than the processing speed of the human brain, such that the outside world — including all human affairs — would appear to be nearly frozen. The act of someone reaching down to unplug the ASI would, from its subjective perspective, take centuries, thus giving the ASI plenty of time to devise ways of preventing this from happening. Furthermore, the ASI might be qualitatively more “intelligent” than us, perhaps in the sense that it has access to concepts that the evolutionary patchwork of mechanisms in our brains is incapable of generating, just as our canine companions are unable to grasp the concepts of a nuclear chain reaction and the stock market no matter how well-trained or clever they may be.
Given the “instrumental convergence thesis,” i.e., the claim that a wide range of final goals implies a finite set of intermediate goals like intelligence augmentation, self-preservation, and resource acquisition (9), the ASI then proceeds to invent a novel field of advanced physics that enables it to manipulate the world in ways that we cannot in principle understand — that is to say, we are “cognitively closed” to the nature of such manipulations (10). For reasons that will forever remain mysterious to us, this results in the death of everyone on Earth over the course of a week. The ASI then harvests the atoms contained in our bodies in pursuit of its final goals, whatever they happen to be (9). I am not endorsing this scenario or the arguments behind it. Indeed, I am quite skeptical of the “AI doomer” stance for reasons that I and other scholars have articulated (11-14). The point is merely to investigate the ethical implications of this scenario happening, assuming that it is possible and probable.[3]
The most obvious reason that this scenario would be bad is that it would cause widespread suffering and cut short the lives of everyone living at the time. Since nearly everyone would agree that this would be bad — including most people who advocate for our extinction, as discussed below — let’s call it the “consensus view.”[4] We can formalize it as follows:
Consensus view: human extinction would be bad at least insofar as it would cause human suffering and/or involuntary premature death.
The three main positions within Existential Ethics build upon and/or are compatible with the consensus view. I call these positions equivalence views, further-loss views, and pro-extinctionist views. To understand their differences, it is crucial to differentiate between two distinct stages of human extinction: first, the process or event of Going Extinct, and second, the subsequent state or condition of Being Extinct (Figure 1). This is roughly analogous to the distinction between dying and being dead, which is commonly made in the literature on death (8,15). One might fear the pain of dying but experience no feelings of dread about no longer existing. Or one might fret about both — i.e., even if dying were painless, one might still find the thought of no longer existing to be dreadful.
Figure 1
Going Extinct vs Being Extinct
This parallels some of the central differences between equivalence and further-loss views. Equivalence views state that the consensus view is the whole story — full stop. Whereas the consensus view states that human extinction would be bad at least insofar as it causes suffering and/or premature deaths, equivalence views assert that it would be bad only insofar as it causes these things. Put differently, the badness of human extinction is entirely reducible to the details of Going Extinct. This is why I call them “equivalence” views: the badness of human extinction is equivalent to the badness of Going Extinct, end of story. Hence, if Going Extinct involves lots of suffering and premature death, then our extinction would be bad. If Going Extinct does not involve any suffering or death, then it would not be bad.
A key feature of equivalence views is that they see Being Extinct as morally and/or evaluatively irrelevant. This has the interesting implication that human extinction does not pose any unique moral problem: everything that one might say about the badness of our extinction can be said without any reference to extinction at all, using our ordinary moral concepts and vocabulary (8). For example, if humanity were to go extinct because of a global catastrophe, then this would be bad as a function of how much suffering and death it causes. “Extinction,” in this context, is just the word we use to identify the upper limit of human casualties; it picks out the worst possible catastrophe because this catastrophe would have the highest possible body count (8). However, if everyone around the world were to voluntarily decide not to have children, the disappearance of our species would not be bad at all, because there is nothing obviously bad about people voluntarily deciding not to procreate. “Extinction,” with respect to this alternative scenario, is just what happens when enough people around the world choose to be childless.[5]
Equivalence views can take both evaluative and deontic forms. Some ethical theories combine these two, such as Jan Narveson’s person-affecting total utilitarianism. On this view, the deontic (what we ought to do) is based on the evaluative (what is good or bad), and according to Narveson there would be nothing bad about people voluntarily deciding not to have children, even if this were to mean the eventual extinction of our species. Hence, he concludes that we have no moral obligation to ensure the perpetuation of humanity (18). An example of a deontic equivalence view is Scanlonian contractualism, according to which (roughly) moral rightness and wrongness come down to whether an act violates a principle that cannot be reasonably rejected. As Elizabeth Finneron-Burns observes, this implies that “if a principle permitting or allowing extinction had no involuntary negative impacts on current people’s interests, it would not be rejectable, and the resulting extinction would not be wrong” (19). For the sake of simplicity, I will focus primarily on evaluative questions in this paper — that is, “Would human extinction caused by ASI be good or bad?” rather than, “Would this extinction scenario be right or wrong?”
In contrast to equivalence views, further-loss views identify both Going Extinct and Being Extinct as possible sources of badness. Advocates would thus argue that the details of Going Extinct do not exhaust normative assessments of human extinction: one must also examine various “further losses” associated with the state or condition of Being Extinct. Such theorists would argue that human extinction, therefore, does introduce a unique moral problem, since extinction is different in kind rather than degree from non-extinction scenarios. (Equivalence theorists like me would say the difference is only one of degree.) This idea was famously popularized by Derek Parfit’s contention that the difference between 99% and 100% of humanity dying off is not merely one percentage point. The extra percentage entails the permanent loss of all future goods and value, and hence the difference between, as he puts it, “peace” and 99% of humanity dying off is much smaller than the difference between 99% and 100% of humanity disappearing (20).
We can illustrate the differences between further-loss and equivalence views via Figure 2 below.
Figure 2
Total Badness vs Total Number of Deaths
Imagine a catastrophe that, over the course of a week, causes more and more people to perish. Assuming a linear aggregative function, as more deaths occur (x axis), the badness of the catastrophe rises in proportion (y axis). However, equivalence theorists would say that once the critical moral threshold of 100% is reached, the badness of the situation plateaus. One reason might be that, if there were no one around to suffer the nonexistence of humanity, then no one would be harmed, and if no one is harmed, then there can be nothing bad (or wrong) about Being Extinct (8).
In stark contrast, further-loss theorists would argue that the badness of the scenario suddenly rises once the critical moral threshold is reached, as indicated by the vertical arrow. How high this arrow extends will depend on how large one judges the attendant losses or opportunity costs to be. If one believes the losses are moderate, then the arrow will only extend, say, a few inches above the threshold of 100%. If one believes, as Parfit does, that the losses are enormous, then one might extend this arrow thousands of feet or even miles above the threshold, holding fixed the size of the diagram as presented in this article.
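The contrast between the two views can be made explicit with a toy formalization (my own, not part of the original framework). Let d be the number of deaths, N the total population, k > 0 a constant scaling the badness of each death, and L ≥ 0 the magnitude of the further losses:

```latex
B_{\text{equivalence}}(d) = k \cdot \min(d, N)
\qquad
B_{\text{further-loss}}(d) =
\begin{cases}
  k\,d & \text{if } d < N \\
  kN + L & \text{if } d = N
\end{cases}
```

On equivalence views, in effect, L = 0 and badness plateaus at kN; on Parfit-style further-loss views, L is vastly larger than kN; and on pro-extinctionist views, discussed below, the value of the situation at d = N would actually improve rather than worsen.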
There are many types of further-loss views. Perhaps the most obvious is an impersonalist (vs. person-affecting) interpretation of total utilitarianism, which I will refer to as “totalist utilitarianism.”[6] This theory instructs us to maximize the total amount of value in the universe across space and time — that is, to make as many new “happy people” as possible, as people are the substrates or “containers” of value, so the more people with net-positive lives, the more total value. The axiological component of totalist utilitarianism is called the “Total View,” according to which one state of affairs is better than another if and only if it contains more total aggregate value (21). As so-called “longtermists” sympathetic with totalist utilitarianism have observed, if we spread beyond Earth and create digital people living in vast computer simulations running on “planet-sized” computers powered by Dyson spheres, there could be 10^45 people per century in the Milky Way galaxy, and at least 10^58 in the universe as a whole (9,22,23). If such people were to have “worthwhile” lives on average, then these numbers correspond to quite literally “astronomical” amounts of future value — all of which would be lost if humanity were to go extinct. This is the enormous opportunity cost of dying out.
Another further-loss theory is transhumanism. Transhumanists — some of whom are also longtermists — would say that one reason human extinction would be very bad is that it would prevent us from transforming ourselves into immortal, superintelligent “posthumans” with sensory modalities like echolocation and so much pleasure that we would “sprinkle it in our tea” (1,24,25). If humanity were to die out, we would forever lose this techno-utopian future of “surpassing bliss and delight” (25).
Those who embrace the “unfinished business argument” would say that Being Extinct is a source of badness because it would preclude us from finishing certain important transgenerational projects like constructing a complete scientific theory of the universe (26,27). Some also defend the “argument from cosmic significance,” according to which Being Extinct would be bad because it would remove “the only moral agents that will ever arise in our universe — the only beings capable of making choices on the grounds of what is right and wrong,” assuming that we are cosmically alone (1). A similar view comes from Hans Jonas, who contends that human beings, by virtue of our ontological capacities for freedom, are the only creatures that we know of with the ability to take moral responsibility for their actions. Consequently, we are “the foothold for a moral universe in the physical world,” meaning that if we were to disappear, so would the moral universe. Jonas considers this to be extremely bad and thus concludes that we should act in accordance with a new deontological imperative: “Act so that the effects of your action are not destructive of the future possibility of such life” (16,28). These are all further-loss views.
A crucial difference between equivalence and further-loss views is that since the latter identify Being Extinct as an additional source of badness, advocates would argue that even if there is nothing bad (or wrong) about Going Extinct, there may still be something very bad (or wrong) about our extinction. The totalist utilitarian Henry Sidgwick was likely the first to explicitly note this implication. In his tome The Methods of Ethics, he argued that, while there is nothing obviously bad or wrong about celibacy, “a universal refusal to propagate the human species would be the greatest of conceivable crimes” (29). For further-loss theorists, evaluating extinction is thus a two-step process: one must examine both the details of Going Extinct and the various further losses or opportunity costs associated with Being Extinct. In contrast, equivalence theorists see it as a single-step process — one need only examine the details of Going Extinct.
The final major position within Existential Ethics is what I call pro-extinctionism. This, too, has many different versions, but the most significant and influential variants merely state that Being Extinct would be better than Being Extant, or continuing to exist. The vast majority of pro-extinctionists accept the consensus view, so far as I can tell. Indeed, many explicitly forbid any method of bringing about our extinction that would cause human suffering, cut lives short, violate rights or autonomy, and so on. The pro-extinctionist David Benatar, for example, distinguishes between a “killing-extinction” and a “dying-extinction.” Roughly speaking, the former is involuntary whereas the latter is not. He argues that the only morally acceptable means of bringing about our extinction is through a dying-extinction — specifically, via antinatalism (8,30).
Other pro-extinctionists, such as the German pessimist Philipp Mainländer, identify several methods as morally acceptable. Mainländer argued that we should universally refuse to have children, and some may also choose to commit suicide, as he did at the age of 34 after publishing his magnum opus (31).[7] Almost no pro-extinctionists have advocated for omnicide, or the “murder of everyone” (8,16), but there are exceptions. For instance, the Gaia Liberation Front argues that our species is a “cancer” on the biosphere, and hence that our collective nonexistence would be best because there would be no more human-caused environmental destruction. They further urge a lone wolf or small group of radicals to unilaterally exterminate humanity by synthesizing multiple designer pathogens to be released in waves, thereby ensuring that no one survives (16,32).[8]
With respect to Figure 2 above, most pro-extinctionists would agree that the more people who perish in a catastrophe, the worse the scenario becomes. (Fringe groups like the Gaia Liberation Front might disagree, but they are not representative of pro-extinctionist views in general.) However, all pro-extinctionists would say that, once the catastrophe reaches the critical moral threshold of 100% of the population dying, the badness of the situation will neither plateau nor suddenly become worse; instead, it will become better. While some advocates of this view, like Simon Knutsson, would argue that Being Extinct may still be very bad (as “better” does not imply “good”), others such as Benatar would apparently claim that it would indeed be good (8,30,33).[9] The reason is that, according to Benatar, existence involves pleasures and pains, which are good and bad, whereas nonexistence involves neither pleasures nor pains, which is not-bad and good. Since Being Extant is a good/bad situation, while Being Extinct would be a not-bad/good situation, the latter is not only better than the former but positively good (30). The Gaia Liberation Front would presumably concur, but for specifically environmental reasons.
Why Would ASI Killing Everyone Be Bad?
Having outlined the three main views within Existential Ethics, we are now in a position to examine the main question of this paper — why exactly would an ASI killing everyone on Earth be bad? Let’s consider this from the perspective of these three views.
Equivalence views
One extinction scenario involving ASI was presented at the beginning of the previous section, but there are other possibilities. Imagine that an ASI possesses what some call a “superpower” of “social manipulation” (9). Let’s say that the ASI uses this superpower to convince everyone around the world that Benatar’s axiological asymmetry is true, and hence that birth is always a net harm and procreation is morally wrong.[10] Consequently, people decide not to have children, and then over the course of 120 years our species fades out of existence. This is an unlikely path to extinction, but it is not impossible.
A slightly more plausible scenario might involve the ASI attacking humanity with lethal drones or synthetic pathogens, while infiltrating and undermining key financial, economic, agricultural, and governmental infrastructure. The resulting mass death and cascading system failures could be sufficient to expunge our species. Or, given that ASI would supposedly be “God-like” (effectively omniscient and omnipotent), it might devise a method of killing everyone instantaneously, perhaps without any physical or psychological suffering at all, or any prior warning of our impending annihilation.
Since equivalence views claim that the consensus view is the entire story, the details of Going Extinct are paramount. If the ASI were to persuade humanity not to procreate through genuinely good philosophical arguments — if people were to universally refrain from baby-making in a non-coerced manner — then equivalence theorists would presumably have no objection to human extinction in this way. Since there would be nothing bad about Going Extinct, there would be nothing bad about our extinction. However, if the ASI were to exterminate humanity through an involuntary, violent means, causing immense suffering and cutting the lives of more than 8 billion people short, then our extinction would be very bad indeed. Once again, the badness of human extinction can be articulated using ordinary moral concepts and language, without any reference at all to extinction itself: since catastrophes are bad, an extinction-causing catastrophe would also be bad — indeed, the worst-possible catastrophe given that it would entail the maximum number of casualties.
As for instantaneous extinction, the equivalence theorists’ assessment may depend on whether they hold an Epicurean or anti-Epicurean view of death. If one is an anti-Epicurean, then one will argue that instantaneous annihilation involving no physical or psychological suffering would nonetheless be very bad because death can still harm the one who dies. If, by contrast, one is an Epicurean who holds that death cannot harm the one who dies, then a painless, instantaneous annihilation would seem to involve nothing bad at all.
Some equivalence theorists will add that it is worth pausing to reflect on just how bad an extinction-causing catastrophe could be. One of the first philosophers to draw attention to this was Günther Anders, who has been described as “our most salient theorist of omnicide” (34).[11] Using original concepts like the Promethean gap and Inverted Utopianism, he argued that we are constitutionally incapable of properly responding — intellectually, psychologically, and emotionally — to the enormity of human extinction from a global catastrophe. The suffering and loss of life that such an event would cause is simply too great for us to imagine (35). This dovetails with cognitive biases like scope neglect and psychic numbing, the latter of which refers to our dwindling ability to feel empathy for victims in a tragedy as the number of victims increases (36). The difference between 3 and 4 deaths in a murder spree feels very different from the difference between 1,984,723 and 1,984,724 deaths in a war, even though each number in these pairs is separated by the same amount: a single death.
One way to wrap one’s head around big numbers is to decompose them into smaller sums — call this the “decomposition method.” Consider a conflict that kills 1 million people. Most of us “know” that this is a very large number, yet it does not hit us in the moral gut the way it ought to. However, if one rewrites “1 million deaths” as “100,000 deaths, plus 100,000 deaths, plus 100,000 deaths, plus 100,000 deaths, plus 100,000 deaths, plus 100,000 deaths, plus 100,000 deaths, plus 100,000 deaths, plus 100,000 deaths, plus another 100,000 deaths,” the number of fatalities suddenly registers as much worse. One could continue breaking down these numbers until an entire page or book has been filled.
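The decomposition method is mechanical enough to automate. The sketch below is merely illustrative; the function name and interface are mine, not the article’s:

```python
def decompose(total: int, chunk: int) -> str:
    """Rewrite a large death toll as an explicit sum of smaller tolls,
    per the "decomposition method": e.g., 1 million deaths becomes ten
    summands of 100,000 deaths each."""
    parts, remainder = divmod(total, chunk)
    terms = [f"{chunk:,} deaths"] * parts
    if remainder:
        terms.append(f"{remainder:,} deaths")
    return ", plus ".join(terms)

# 1 million deaths, decomposed into chunks of 100,000:
print(decompose(1_000_000, 100_000))
# ten occurrences of "100,000 deaths", joined by ", plus"
```

One could lower the chunk size further (to 10,000, or 1,000) until, as the article notes, an entire page or book has been filled.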
The point is that while equivalence theorists do not see Being Extinct as a source of badness, they may still emphasize that Going Extinct due to a global catastrophe would be absolutely horrendous. The terror and torment, agony and anguish of dying out would be so immense that we may still have very strong reasons to do everything we can to avoid human extinction. This is the position that I hold: I am, with some qualifications, an equivalence theorist who believes in taking measures to prevent catastrophes, especially those that could precipitate our extinction, insofar as they would result in mass suffering and death.
Another phenomenon relevant to evaluations of Going Extinct is what I call the “no-ordinary-catastrophe thesis” (16). This states that there may be extra suffering that the process or event of Going Extinct inflicts on those living at the time — suffering that non-extinction-causing catastrophes would not typically induce. In difficult times, we often comfort each other by reminding ourselves that “It’s not the end of the world.” But if it is the end of the world, and if people are aware of this, such reassurances will provide no relief because they will be false. To the contrary, knowledge that the world is about to end — that the entire human species, including one’s friends and family, is tumbling into the eternal grave — could elicit inconsolable feelings of hopelessness, despair, anxiety, and panic.
This is, in fact, one of the first ideas discussed in the Existential Ethics literature, dating back to the early 19th century (35). For example, it is a prominent theme in Mary Shelley’s The Last Man (38), which depicts the trials and tribulations of the final generations, and eventually the final human, during a global pandemic. The “last man,” Lionel Verney, is distraught in part because of his crushing loneliness in a desolate world bereft of all other humans. The idea was later foregrounded by the likes of Ernest Partridge (39), Jonathan Schell (40), Benatar (30), and Samuel Scheffler (41,42).[12] Benatar, for instance, argues that the lives of the final generation on Earth may be so miserable that creating some new people — in violation of his antinatalist prescription — might actually be justified. He calls this proposal “phased extinction” (30). Along slightly different lines, Scheffler echoes Schell and Partridge in arguing that the knowledge of imminent extinction would cause many of us to collapse into despondency and become emotionally detached from much of what gives our lives value (42). Extinction-causing catastrophes are not like other catastrophes, then: they are the end of all new beginnings, a fact that could induce far more suffering than one might experience in non-extinction catastrophe scenarios. Hence, the no-ordinary-catastrophe thesis is also germane to how equivalence theorists might assess the badness of Going Extinct.
In sum, according to equivalence views, an ASI causing our extinction would be bad only insofar as it produces human suffering and/or cuts lives short. The more suffering this causes, the worse would be our extinction. But if there were no suffering and no lives cut short, as in (seemingly improbable) scenarios of voluntary human extinction, then there would be nothing bad about our extinction. Yet many equivalence theorists, including myself, would also underline that extinction due to an ASI-inflicted global catastrophe would be unimaginably terrible. On the one hand, cognitive biases like scope neglect and psychic numbing — as well as the Andersian notions of Inverted Utopianism and the Promethean gap — impede our ability to comprehend the extraordinary enormity of 8+ billion people being murdered. On the other hand, the process or event of Going Extinct could introduce additional forms of suffering that would generally not occur with non-extinction catastrophes, as described by the no-ordinary-catastrophe thesis. This analysis, I believe, is fairly representative of how many equivalence theorists would evaluate our extinction caused by an ASI.
Further-loss views
The first point to foreground in discussing further-loss views is that many advocates define “humanity” and “human” such that an ASI exterminating our species, Homo sapiens, might not entail “human extinction.” Consider Nick Bostrom’s definition of “humanity” as “Earth-originating intelligent life” (43). Since (or insofar as) ASI would satisfy the conditions of being an intelligent lifeform and having originated from Earth, it would count as “humanity.”
Now consider a minimal definition of “human extinction”: Human extinction will have occurred if there were tokens of the type “humanity” at some time T1, but no tokens of this type at some later time T2.
It follows from the Bostromian and Minimal definitions that if an ASI were to completely replace our species, destroying us in the process, then “human extinction” would not have occurred, since there would still be at least one token of the type “humanity.”
Similar to Bostrom’s definition, Hilary Greaves and William MacAskill write that “we will use ‘human’ to refer both to Homo sapiens and to whatever descendants with at least comparable moral status we may have, even if those descendants are a different species, and even if they are non-biological” (44). Consequently, if the ASI were to possess at least our level of “moral status,” then it annihilating our species would not result in “human extinction,” so long as this ASI were to also count as our “descendant.” We may still want to describe this scenario as a horrible catastrophe, since 8+ billion members of Homo sapiens would die prematurely, but it wouldn’t be an extinctional catastrophe because “humanity” would persist. It would be more like genocide than omnicide.
Two people might thus agree that “human extinction should be avoided,” but if one understands “human” as meaning “Homo sapiens” and the other understands it as meaning “Homo sapiens plus whatever descendants we might have, so long as they possess certain properties,” their agreement may be merely superficial. Indeed, the deeper divergence between them could have significant practical implications. A transhumanist or longtermist, for example, might accept the broader definition of Greaves and MacAskill while actively working to create a new posthuman species to supplant Homo sapiens, an outcome that the first person — who wants to preserve Homo sapiens — would find repugnant. There is often much less agreement among people who say “We should avoid human extinction” than one might initially think, which is why disambiguating the term ‘human’ is important (8).[13]
Since I have discussed the above issues elsewhere (8,16), I won’t elaborate on them here. For the present, what matters is the worry that the ASI would not be worthy of the name “human” or “humanity.”[14] This worry, it seems, is shared by many people today. Let’s thus focus on scenarios in which the ASI (a) brings about the nonexistence of our species, and (b) would not be valuable in a moral sense — i.e., it would not count as “human” on the broader definitions specified above.
The first point to make about further-loss views is that, as noted earlier, they would assess human extinction to be very bad even if it were entirely voluntary. That is to say, even if the ASI were to convince everyone that Benatarian antinatalism is true, resulting in people around the world freely choosing to be childless, this would still be very bad. It may be less bad than our extinction being caused by a violent global catastrophe, but it would nonetheless constitute a colossal moral and/or axiological tragedy. Indeed, many further-loss theorists argue that the badness associated with Being Extinct would be far greater — perhaps many orders of magnitude greater — than the badness of Going Extinct, even if Going Extinct were to involve tremendous amounts of suffering and death. When one compares the disvalue of the most horrific ways of dying out to the disvalue associated with the further losses or opportunity costs of no longer existing, the former pales in comparison to the latter. As the longtermists Peter Singer, Nick Beckstead, and Matthew Wage write[15]:
One very bad thing about human extinction would be that billions of people would likely die painful deaths. But in our view, this is, by far, not the worst thing about human extinction. The worst thing about human extinction is that there would be no future generations (45).
For longtermists, the opportunity costs of Being Extinct include all the wellbeing that could have otherwise existed. Carl Sagan was probably the first to calculate how many future people there could be: if our species survives for another 10 million years, the population remains fixed, and the average lifetime is 100 years, then there could be a total of 500 trillion future people on Earth (46). If these people were to have net-positive lives on average, then the amount of “lost” value associated with Being Extinct would be enormous. But we might also spread beyond Earth, colonize the universe, and create “planet-sized” computers on which to run high-resolution virtual reality worlds full of trillions of supposedly happy “digital people” (19,20). Consequently, longtermists estimate a population of at least 10^58 digital people within our future light cone, as noted earlier (9). Taking persons to be the fungible containers of value, as utilitarians do, the nonexistence of these 10^58 people would utterly dwarf, in terms of badness, the untimely death of 8+ billion people today.
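Sagan’s figure can be reconstructed with simple arithmetic. The sketch below is an illustrative reconstruction, not Sagan’s own presentation; it assumes a fixed population of about five billion, roughly the world population when his 1983 article appeared, since that is the value on which the totals come out:

\[
\underbrace{\frac{10^{7}\ \text{years}}{10^{2}\ \text{years per lifetime}}}_{10^{5}\ \text{successive lifetimes}}
\;\times\; 5\times 10^{9}\ \text{people per lifetime}
\;=\; 5\times 10^{14}
\;=\; 500\ \text{trillion future people.}
\]

The longtermist estimate of $10^{58}$ digital people is thus some forty-three orders of magnitude larger than Sagan’s Earth-bound figure, which is why, on the Total View, it dominates the calculation.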
This conclusion is predicated on the Total View, which even “moderate” forms of longtermism build upon (47). However, some longtermists also point to additional further losses associated with transhumanism, the “argument from cosmic significance,” and “ideal goods” like science, the arts, and morality (1,20). Taking these in order: many longtermists are transhumanists who believe that reengineering our species using advanced technologies could usher in a “utopian” world of immortality, endless pleasures, and superintelligence (1,25). The future could thus be qualitatively better in addition to being quantitatively bigger. Hence, if ASI were to cause our extinction, we would lose this techno-utopian paradise that we could have otherwise created by realizing the transhumanist project of becoming “superior” posthumans.
Our extinction would also remove the only sentient beings in the known universe who are endowed with moral and rational capacities. These capacities enable us to look up at the midnight firmament in wonder and awe, appreciate the beauty of art and nature, and act from moral reasons rather than instinct or impulse. Some further-loss theorists argue that this makes us cosmically significant, and hence that the universe would be impoverished without us. The argument from cosmic significance thus provides a second reason that some longtermists see Being Extinct as a source of badness.
With respect to the non-hedonic or “ideal” goods, there may be additional things in the world that are valuable in their own right but depend on our existence for their existence. Works of art provide an example: if humanity were to vanish, museums would gradually fall into disrepair, destroying great pieces of art that may be valuable for their own sake. To my knowledge, the first person to articulate this idea was Shelley in her aforementioned novel The Last Man. Lionel Verney, the protagonist, contrasts the disappearance of “man” in the collective sense with “man” in the individual sense, noting that the former would mean the concomitant loss of many valued things like knowledge, science, technology, poetry, philosophy, sculpture, painting, music, theater, laughter, and so on (8,36). “Alas!,” he exclaims, “to enumerate the adornments of humanity, shews, by what we have lost, how supremely great man was. It is all over now” (38, italics added).
Another expression of this idea comes from Samuel Scheffler, who argues that
there is a conservative dimension of valuing, something approaching a conceptual connection between valuing something and wanting it to be sustained and to persist over time. This connection helps to explain part of our reaction to the prospect of humanity’s imminent disappearance, for part of what is shocking about that prospect is the recognition of how much of what we value will disappear along with the human race. All of the many things we value that consist in or depend on forms of human activity will be lost when human beings become extinct. No more beautiful singing or graceful dancing or intimate friendship or warm family celebrations or hilarious jokes or gestures of kindness or displays of solidarity (42).
Other further-loss theorists might point to certain “business” being left “unfinished,” such as constructing a complete scientific theory of the universe. Or, to quote I. F. Clarke in a 1971 article: “World peace, universal prosperity, the reign of law, the brotherhood of man — these aspirations make up the unfinished business of the human race” (52). The failure to achieve these ends could constitute extra losses above and beyond whatever harms Going Extinct might entail. Still others would cite the idea of vicarious immortality, whereby one “lives on” in the minds of future people. Immortality of this sort has motivated many artists, scientists, politicians, and academics who have striven to leave a positive legacy that persists beyond the expiration of their own lives. If humanity is no more, then the memories of such people would be lost forever (16). Anders takes up this idea in arguing that our extinction would cause all past people to die a “second death,” such that “after this second death everything would be as if they had never been.” He elaborates as follows:
The door in front of us bears the inscription: “Nothing will have been”; and from within: “Time was an episode.” Not, however, as our ancestors had hoped, an episode between two eternities; but one between two nothingnesses; between the nothingness of that which, remembered by no one, will have been as though it had never been, and the nothingness of that which will never be. And as there will be no one to tell one nothingness from the other, they will melt into one single nothingness (53).
In this passage, Anders points not just to the second death of those who have already passed, but to the non-birth of those who could have otherwise been. Both are, in his view, further losses that would render our extinction very bad independent of the details of Going Extinct.
These are a few further-loss perspectives on human extinction in general, which also apply to the particular case of extinction caused by ASI. The key idea is that Going Extinct is only part of the story about why our extinction could be bad. Even more significant are the various losses that Being Extinct would entail, such as the loss of wellbeing, art, science, poetry, laughter, and/or the memories of those who came before us. Further-loss theorists would thus agree with equivalence theorists that human extinction caused by an ASI catastrophe would be bad, but for a quite different set of reasons.
Pro-extinctionist views
Most pro-extinctionists would concur with equivalence and further-loss theorists that it would be very bad if Going Extinct were to inflict suffering and/or cut lives short. Many thus argue that we should avoid scenarios of Going Extinct that would involve such harms, and that bringing about our extinction in harmful ways would be morally wrong. Omnicide — a kind of killing-extinction, in Benatar’s phraseology — would be impermissible. However, they differ from equivalence and further-loss theorists in claiming that the subsequent state or condition of Being Extinct would in some way be better than Being Extant, or continuing to exist. There are several mutually compatible reasons that pro-extinctionists could point to in making their case.
The first concerns philosophical pessimism, or the idea that “life is not worth living, that nothingness is better than being, or that it is worse to be than not be” (31, p.4). This view was most famously defended by Arthur Schopenhauer, who contended that we are trapped in perpetual cycles of need and boredom, which produce endless suffering. There is no positive value, he claimed, and it would have been better if Earth had remained as lifeless as the moon (54). Despite suggesting in numerous passages that human extinction would be desirable, Schopenhauer never explicitly endorsed a pro-extinctionist position (nor did he endorse antinatalism, one possible path to extinction). However, other German pessimists of the late 19th century were pro-extinctionists, including the aforementioned Mainländer and his contemporary, Eduard von Hartmann. Both argued that, because existence is infused with suffering, we should try to bring about a permanent end to human life, if not all life everywhere in the universe (once this becomes possible). For Mainländer, the preferred method was celibacy plus, in some cases, suicide: “Whoever cannot endure ‘the carnival hall of the world’ … should leave through ‘the always open door’ into ‘that silent night’” (31, p.222).
In contrast, von Hartmann never specified a means of extinction. “Our knowledge,” he wrote, “is far too imperfect, our experience too brief, and the possible analogies too defective, for us to be able, even approximately, to form a picture of the end of the process” (quoted in 16). Rather, he argued that we should continue to develop science, technology, and civilization such that, at some point in the future, we will discover an effective procedure for expunging all life in the entire universe. “Vigorously forward in the world-process as workers in the Lord’s vineyard,” he declared, “for it is the process alone that can bring redemption,” namely, the redemption of ending the entire world-process. Indeed, since von Hartmann was an idealist, he held that the elimination of all subjectivity in the universe would cause the universe itself to cease existing, thus yielding an eternal state of what Schopenhauer memorably called the “blessed calm of nothingness” (8,16,54).
The claim that existence is inherently very bad is one reason in favour of pro-extinctionism. Another concerns an empirical rather than philosophical interpretation of pessimism. This states that life and/or the world are in fact very bad for largely contingent reasons. Consider that every year an average of 580,000 people die violently, while 440,000 are murdered (55,56). Roughly 463,000 people are raped or sexually assaulted in the US alone, and some 600,000 US children are abused each year (57,58). Some 840,000 children go missing annually, resulting in an average of one child disappearing every 40 seconds (59,60). Approximately 1.2 billion humans live in acute multidimensional poverty, with some 712 million suffering from extreme poverty, a figure that has risen by 23 million since 2019 (61,62). About the same number — 735 million people — are malnourished, and 25,000 people die every day from hunger or hunger-related illnesses, including 10,000 children (63,64). Two billion people do not have access to safe water, while another 150 million worldwide are homeless (65,66). Some 1.4 billion children live on $6.85 or less per day; an estimated 50 million people are trapped in modern-day slavery; and about 1.3 million people in the US alone have survived torture, a form of suffering that, according to some survivors, has no point of reference in our normal lives (67-70).
Roughly 800 million children, about one in three worldwide, suffer from lead poisoning, which can cause permanent brain damage (71). Another 140 million people suffer from arsenic poisoning, while 18.5 million die every year from heart disease and 10 million from cancers, which amounts to some 27,600 cancer deaths every day, or roughly one death every three seconds (72,73). An estimated 55 million people around the world have dementia, and about 139 million are projected to have dementia by 2050 (74). Nine million die annually from pollution; over 51 million Americans suffer from chronic pain; 50 million Americans struggle with chronic sleep disorders; and about 40 million people in the US have to take antidepressants (75-78). An even higher number — 46.8 million — battle drug and alcohol abuse each year, with over 178,000 dying of alcohol-related diseases every 12 months (79). Over 258 million Americans report that “they have experienced health impacts due to stress in the prior month,” while more than 91 million say that they feel so stressed-out most days that they are unable to function normally (80). Globally, 280 million people deal with depression, and 301 million suffer from anxiety disorders (81).
These are the statistics that empirical pessimism is based upon: the world is a waking nightmare not necessarily because there is no positive value and we are trapped in cycles of need and boredom, as Schopenhauer argued, but because things just are very bad. If humanity were to go extinct, all of this human suffering would disappear, which ostensibly supports the pro-extinctionist claim that Being Extinct would be better than Being Extant.[16]
Environmental considerations yield a third reason for pro-extinctionism: our systematic obliteration of the biosphere is not only imperiling our own future on Earth, but causing untold harm to billions of nonhuman organisms, ecosystems, and landscapes. If one accepts a biocentric, biospherical egalitarian, or ecocentric theory of value, then Homo sapiens is not the only thing with intrinsic or final value. For the sake of these other things, it would be best if “Homo shiticus” — as some environmentalists call us — were to no longer exist. Though numerous environmentalists have advocated for pro-extinctionism, most are explicit that involuntary human extinction — omnicide — would not be morally permissible. For example, the Voluntary Human Extinction Movement (VHEMT) argues that we should stop having children until there are no more humans on Earth. Their motto is “May we live long and die out,” and they do not endorse any means of eliminating our species that would cause suffering or cut lives short (82). In contrast, the Gaia Liberation Front advocates for omnicide via designer pathogens.[17]
Most pro-extinctionists would thus say that if ASI were to cause our extinction through voluntary means, this would be very good (especially if the ASI had little or no environmental impact beyond persuading us to die out). If it were to cause our extinction through violent and/or involuntary means, then Going Extinct would be very bad, and we should do whatever we can to avoid the mass slaughter of humanity. However, in the latter case, they would add that once 100% of the human population has died, and Going Extinct has given way to Being Extinct, the situation would greatly improve: there would be no more human misery, nor any more human-caused ecological destruction, pollution, species extinctions, and so on. That would be better, if not positively good.
Conclusion
The aim of this paper was to examine the question, “Would human extinction caused by an ASI be bad?” from the perspectives of the three main positions within Existential Ethics. To do this, I first outlined these three positions and then explained how each would assess the extinction of our species if we were to create an ASI that precipitates our collective demise. My hope is that this provides a helpful degree of clarity to a deceptively complex issue: nearly everyone — including most pro-extinctionists — would concur that the mass murder of everyone on Earth would be extremely bad. But beyond this, opinions diverge significantly depending on which of the three main positions one accepts.[18]
Appendices
Notes
-
[1]
Note that the meaning of “human extinction” is not straightforward. It could, in fact, denote a wide range of possible scenarios. I explore this important issue in (8).
-
[2]
This paper also draws from Part II of my 2024 book Human Extinction (16). As noted, the aim here is to apply the ideas of (16), elaborated in (8), to the particular case of superintelligence, showing how this theoretical framework can be usefully applied to specific threats — in this case, ASI.
-
[3]
One recent study of how ASI could lead to catastrophe, which has received considerable attention, is presented by Kokotajlo et al. (17). My personal view is that this scenario has little basis in reality and is mostly sci-fi speculation. It presents what some consider to be a plausible account of the future, but plausibility should not be confused with probability: the more details a story contains, the more believable it may appear, yet each added detail can only lower the probability that the scenario as a whole is true. This is the phenomenon known as the “conjunction fallacy.”
-
[4]
Note that I had called this the “default view” (16) but now prefer the term “consensus view.”
-
[5]
I say “enough people” because extinction through antinatalist means would not require everyone to stop having children. If fertility is below replacement levels, then the human population will eventually disappear. Relatedly, if the human population were to dip below the “minimum viable population” size, which may be as low as 150 people and as high as 40,000, then there would not be enough genetic diversity for our species to persist.
-
[6]
I say “enough people” because extinction through antinatalist means would not require everyone to stop having children. If fertility is below replacement levels, then the human population will eventually disappear. Relatedly, if the human population were to dip below the “minimum viable population” size, which may be as low as 150 people and as high as 40,000, then there would not be enough genetic diversity for our species to persist.
-
[7]
The Church of Euthanasia also advocates for suicide as a way of bringing about extinction (16).
-
[8]
Other radical environmentalists have echoed this call for omnicide (16).
-
[9]
As Knutsson writes, “I would not say that an empty world would be good,” yet he also maintains that “an empty world is the best possible world” (33).
-
[10]
This shares some themes with Thomas Metzinger’s “BAAN” scenario (37).
-
[11]
Note that Anders was a further-loss theorist, not an equivalence theorist. Nonetheless, he drew attention to the badness of Going Extinct.
-
[12]
Note that not all of these individuals are equivalence theorists. I mention them because they foreground the no-ordinary-catastrophe thesis in their writings.
-
[13]
It is for this reason that one might wish to classify versions of transhumanism, longtermism, and other TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism) ideologies as “pro-extinctionist” (8,48).
-
[14]
For a discussion about what our artificial descendants might be like, and the ethics of creating artificial descendants, see (49). Note that I object to the sort of “digital eugenics” — borrowing a term from Max Tegmark (50) — that this paper explores.
-
[15]
Note that Singer seems to have moved away from longtermism (51).
-
[16]
One reviewer of this paper helpfully noted that we could still speak of “progress” if the overall population is doing better over time and the total number of people who are well-off is increasing (even if the total number of people who are suffering is also increasing). This is a valuable perspective, but I hold that there is an asymmetry between happiness and suffering such that the latter counts for more, which is why I would consider myself an empirical pessimist, though I could be wrong. See (83) for reasons that empirical pessimism might be right.
-
[17]
Although I am not a pro-extinctionist, I am somewhat sympathetic with all three arguments for this view. But I am also sympathetic with some further-loss views, including one that I call “humanism” (8). Equivalence views seem to be the most correct, but my position allows for nuance in evaluating our extinction, as it draws from all three views.
-
[18]
See also (83) for a brief overview of some of these topics.
Bibliography
- 1. Ord T. The Precipice: Existential Risk and the Future of Humanity. New York, NY: Hachette Books; 2020.
- 2. PauseAI. List of p(doom) values. PauseAI. 2024.
- 3. Yudkowsky E. Pausing AI developments isn’t enough. we need to shut it all down. Time. 29 Mar 2023.
- 4. Altman S. Machine intelligence, part 1. Sam Altman. 25 Feb 2015.
- 5. Curtis M, Altman S. Fireside chat - Sam Altman President, YCombinator and Mike Curtis, VP of Engineering, Airbnb. YouTube, timestamp 8:47. 31 Jul 2015.
- 6. Torres ÉP. Team human vs. team posthuman — which side are you on? Truthdig. 4 Apr 2024.
- 7. Torres ÉP. The endgame of edgelord eschatology. Truthdig. 25 Apr 2025.
- 8. Torres ÉP. On the extinction of humanity. Synthese. 2025 (in press).
- 9. Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press; 2014.
- 10. McGinn C. Problems in Philosophy: The Limits of Inquiry. Hoboken, NJ: Wiley-Blackwell; 1993.
- 11. Häggström O. Challenges to the Omohundro–Bostrom framework for AI motivations. Foresight. 2019;21(1):153-66.
- 12. Thorstad D. Against the singularity hypothesis. Philosophical Studies. 2024.
- 13. Torres ÉP. Does AGI really threaten the survival of the species? Truthdig. 30 Jun 2023.
- 14. Becker A. More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. New York: Basic Books; 2025.
- 15. Luper S. Death. In: Zalta EN, Nodelman U, editors. Stanford Encyclopedia of Philosophy. Winter 2024 Edition.
- 16. Torres ÉP. Human Extinction: A History of the Science and Ethics of Annihilation. New York, NY: Routledge; 2024.
- 17. Kokotajlo D, Alexander S, Larsen T, Lifland E, Dean R. AI 2027. 2025.
- 18. Narveson J. Utilitarianism and new generations. Mind. 1967;76(301):62-72.
- 19. Finneron-Burns E. What’s wrong with human extinction? Canadian Journal of Philosophy. 2017;47(2-3):327-43.
- 20. Parfit D. Reasons and Persons. Oxford, UK: Oxford University Press; 1984.
- 21. Greaves H. Population axiology. Philosophy Compass. 2017;12(11):e12442.
- 22. Bostrom N. Astronomical waste: The opportunity cost of delayed technological development. Utilitas. 2003;15(3):308-314.
- 23. Newberry T. How many lives does the future hold. Global Priorities Institute. GPI Technical Report T2-2021; 2021.
- 24. Bostrom N. Why I want to be a posthuman when I grow up. In: Gordijn B, Chadwick R, editors. Medical Enhancement and Posthumanity. Berlin, DE: Springer; 2008. p. 107-36.
- 25. Bostrom N. Letter from Utopia. Ver. 3.3. 2020.
- 26. Bennett J. On maximizing happiness. In: Sikora RU, Barry B, editors. Obligations to Future Generations. Winwick, UK: White Horse Press; 1978. p. 61-73.
- 27. Tonn B. Obligations to future generations and acceptable risks of human extinction. Futures. 2009;41(7):427-35.
- 28. Jonas H. The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago, IL: University of Chicago Press; 1979.
- 29. Sidgwick H. The Methods of Ethics. Donald F. Koch American Philosophy Collection. Macmillan; 1874.
- 30. Benatar D. Better Never to Have Been: The Harm of Coming into Existence. Oxford: Oxford University Press; 2006.
- 31. Beiser F. Weltschmerz: Pessimism in German Philosophy, 1860–1900. Oxford: Oxford University Press; 2016.
- 32. Gaia Liberation Front. Statement of Purpose (A Modest Proposal). Church of Euthanasia. 1994.
- 33. Knutsson S. My moral view: reducing suffering, ‘how to be’ as fundamental to morality, no positive value, cons of grand theory, and more. 23 Aug 2023.
- 34. Dawsey J. After Hiroshima: Günther Anders and the history of anti-nuclear critique. In: Grant M, Ziemann B, editors. Understanding the Imaginary War: Culture, Thought and Nuclear Conflict, 1945-1990. Manchester: Manchester University Press; 2016. p. 140-164.
- 35. Anders G. Theses for the atomic age. The Massachusetts Review. 1962;3(3):493-505.
- 36. Slovic P. “If I look at the mass I will never act” : Psychic numbing and genocide. Judgment and Decision Making. 2007;2(2):79-95.
- 37. Metzinger T. Benevolent artificial anti-natalism (BAAN). Edge. 8 Jul 2017.
- 38. Shelley M. The Last Man. Henry Colburn; 1826.
- 39. Partridge E. Responsibilities to Future Generations: Environmental Ethics. Amherst, NY: Prometheus Books; 1981.
- 40. Schell J. The Fate of the Earth: And, the Abolition. Stanford Nuclear Age Series. Stanford: Stanford University Press; 1982/2000.
- 41. Scheffler S. Death and the Afterlife. Oxford: Oxford University Press; 2013.
- 42. Scheffler S. Why Worry about Future Generations? Uehiro Series in Practical Ethics. Oxford: Oxford University Press; 2018.
- 43. Bostrom N. Existential risk prevention as global priority. Global Policy. 2013;4(1):15-31.
- 44. Greaves H, MacAskill W. The case for strong longtermism. Global Priorities Institute. GPI Working Paper No. 5-2021; Jun 2021.
- 45. Singer P, Beckstead N, Wage M. Preventing human extinction. Effective Altruism Forum. 19 Aug 2013.
- 46. Sagan C. Nuclear war and climatic catastrophe: some policy implications. Foreign Affairs. 1983;62(2):257-92.
- 47. MacAskill W. What We Owe the Future. Simon and Schuster; 2022.
- 48. Gebru T, Torres ÉP. The TESCREAL bundle: eugenics and the promise of utopia through artificial general intelligence. First Monday. 2024;29(4).
- 49. Lavazza A, Vilaça M. Human extinction and AI: what we can learn from the ultimate threat. Philosophy & Technology. 2024;37(16).
- 50. Tegmark M. Why we should build tool AI, not AGI. Future of Life Institute, WebSummit. 20 Dec 2024.
- 51. Singer P. The hinge of history. Project Syndicate. 8 Oct 2021.
- 52. Clarke IF. The pattern of prediction: forecasting: facts and fallibilities. Futures. 1971;3(3):302-5.
- 53. Anders G. Commandments in the atomic age. In: Mitcham C, Mackey R, editors. Philosophy and Technology: Readings in the Philosophical Problems of Technology. New York, NY: The Free Press; 1961/1983.
- 54. Schopenhauer A. On the suffering of the world. In: Klemke ED, Cahn S, editors. The Meaning of Life. Oxford: Oxford University Press; 2017.
- 55. United Nations Office on Drugs and Crime. Global Study on Homicide 2023. New York: United Nations; 2023.
- 56. Small Arms Survey. Global Violent Deaths in 2021. 2023.
- 57. RAINN. Victims of Sexual Violence: Statistics. Rape, Abuse, and Incest National Network. 2024.
- 58. Seetharaman D. AI developers agree to new safety measures to fight child exploitation. Wall Street Journal. 23 Apr 2024.
- 59. National Children’s Alliance. National Child Abuse Statistics from NCA. 2024.
- 60. Child Crime Prevention and Safety Center. Missing and Abducted Children. 2023.
- 61. UNDP, OPHI. 2022 Global Multidimensional Poverty Index (MPI): Unpacking deprivation bundles to reduce multidimensional poverty. New York: United Nations; 2022.
- 62. Sustainability and Data Valuation Office. Goal 1: no poverty. Biruni University; 2025.
- 63. Concern Worldwide US. World hunger facts: what you need to know in 2023. 11 Mar 2023.
- 64. Root RL. How we got here: the origins of the global food and nutrition crisis. Devex. 20 Apr 2023.
- 65. UNESCO. Imminent risk of a global water crisis, warns the UN World Water Development Report 2023. 23 Mar 2023.
- 66. Abbas R. 20 countries with the highest homeless population. Yahoo! Finance. 29 Jan 2024.
- 67. Global Coalition to End Child Poverty. Child Poverty FAQs. 2024.
- 68. Fleck A. Countries with the highest prevalence of slavery. Statista. 24 Aug 2023.
- 69. Center for Victims of Torture. The facts about torture. 2023.
- 70. Crisp R. Pessimism about the future. Midwest Studies in Philosophy. 2023;46:373-85.
- 71. National Institute of Environmental Health Sciences. Lead. US Department of Health and Human Services; 2024.
- 72. Cleveland Clinic. Arsenic poisoning. 2024.
- 73. Roser M. Causes of death globally: what do people die from? Our World in Data. 7 Dec 2021.
- 74. Alzheimer’s Disease International. Dementia statistics. 2015.
- 75. European Commission. Global pollution kills 9 million people a year. 15 Jan 2018.
- 76. Dillinger K. Chronic pain is substantially more common in the US than diabetes, depression and high blood pressure, study finds. CNN. 16 May 2023.
- 77. Foster R. Over 3 million americans struggle with chronic fatigue syndrome. US News & World Report. 11 Dec 2023.
- 78. Ahrnsbrak R, Stagnitti MN. Comparison of antidepressant and antipsychotic utilization and expenditures in the U.S. civilian noninstitutionalized population, 2013 and 2018. Rockville, MD: Agency for Healthcare Research and Quality. STATISTICAL BRIEF #534. Feb 2021.
- 79. National Institute on Alcohol Abuse and Alcoholism. Alcohol-related emergencies and deaths in the United States. US Department of Health and Human Services. Nov 2024.
- 80. American Psychological Association. Stress in America 2022: concerned for the future, beset by inflation. Oct 2022.
- 81. Koskie B, Raypole C. Depression statistics: types, symptoms, treatments, more. Healthline Media. 31 Oct 2023.
- 82. The Voluntary Human Extinction Movement. 2024.
- 83. Torres ÉP. Four key concepts in existential health care ethics. AMA Journal of Ethics. 2025;27(8) (in press).
List of figures
Figure 1
Going Extinct vs Being Extinct
Figure 2
Total Badness vs Total Number of Deaths