{"id":89316,"date":"2025-04-04T01:45:05","date_gmt":"2025-04-04T01:45:05","guid":{"rendered":"https:\/\/80000hours.org\/?post_type=problem_profile&#038;p=89316"},"modified":"2026-03-18T17:52:39","modified_gmt":"2026-03-18T17:52:39","slug":"gradual-disempowerment","status":"publish","type":"problem_profile","link":"https:\/\/80000hours.org\/problem-profiles\/gradual-disempowerment\/","title":{"rendered":"Gradual disempowerment"},"content":{"rendered":"<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#why-might-gradual-disempowerment-be-an-especially-pressing-problem\"><span class=\"toc_number toc_depth_1\">1<\/span> Why might gradual disempowerment be an especially pressing problem?<\/a><\/li><li><a href=\"#how-pressing-is-this-issue\"><span class=\"toc_number toc_depth_1\">2<\/span> How pressing is this issue?<\/a><\/li><li><a href=\"#what-are-the-arguments-against-this-being-a-pressing-problem\"><span class=\"toc_number toc_depth_1\">3<\/span> What are the arguments against this being a pressing problem?<\/a><\/li><li><a href=\"#what-can-you-do-to-help\"><span class=\"toc_number toc_depth_1\">4<\/span> What can you do to help?<\/a><ul><li><a href=\"#key-organisations-in-this-space\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Key organisations in this space<\/a><\/li><\/ul><\/li><li><a href=\"#learn-more\"><span class=\"toc_number toc_depth_1\">5<\/span> Learn more<\/a><\/li><\/ul><\/div>\n<h2><span id=\"why-might-gradual-disempowerment-be-an-especially-pressing-problem\" class=\"toc-anchor\"><\/span>Why might gradual disempowerment be an especially pressing problem?<\/h2>\n<p>Advancing technology has historically benefited humanity. 
The inventions of fire, air conditioning, and antibiotics have all come with some downsides, but overall they&#8217;ve helped humans live healthier, happier, and more comfortable lives.<\/p>\n<p>But this trend isn&#8217;t guaranteed to continue.<\/p>\n<p>We&#8217;ve written about how the development of <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">advanced AI technology poses existential risks<\/a>. One prominent and particularly concerning threat model is that as AI systems get more powerful, they&#8217;ll develop interests that are not aligned with humanity. They may, unbeknownst to their creators, become <a href=\"https:\/\/arxiv.org\/abs\/2206.13353\">power-seeking<\/a>. They may <a href=\"https:\/\/arxiv.org\/abs\/2311.08379\">intentionally deceive us<\/a> about their intentions and use their superior intelligence and advanced planning capabilities to disempower humanity or drive us to extinction.<\/p>\n<p>It&#8217;s possible, though, that the development of AI systems could lead to human disempowerment and extinction even if we succeed in preventing AI systems from becoming power-seeking and scheming against us.<\/p>\n<p>In <a href=\"https:\/\/gradual-disempowerment.ai\/\">a recent paper<\/a>, Jan Kulveit and his co-authors call this threat model <em>gradual disempowerment<\/em>. 
They argue for the following six claims:<\/p>\n<ol>\n<li>Large societal systems, such as economies and governments, tend to be <em>roughly<\/em> aligned to human interests.<\/li>\n<li>This rough alignment of the societal systems is maintained by multiple factors, including voting systems, consumer demand signals, and the reliance on human labour and thinking.<\/li>\n<li>Societal systems that rely less on human labour and thinking \u2014 and rely more on increasingly advanced and powerful AI systems \u2014 will be less aligned with human interests.<\/li>\n<li>AI systems may indeed outcompete human labour for key roles in societal systems in part <em>because<\/em> they can more ruthlessly pursue the directions they&#8217;re given. And this may cause the systems to be even less aligned with human interests.<\/li>\n<li>If one societal system becomes misaligned with human interests, like a national economy, it may increase the chance that other systems become misaligned. Powerful economic actors have historically wielded influence over national governments, for example.<\/li>\n<li>Humans could gradually become disempowered, perhaps permanently, as AIs increasingly control societal systems and these systems become increasingly misaligned from human interests. In the extreme case, it could lead to human extinction.<\/li>\n<\/ol>\n<p>Kulveit et al. discuss how AI systems could come to dominate the economy, national governments, and even culture in ways that act against humanity&#8217;s interests.<\/p>\n<p>It may be hard to imagine how humans would let this happen, because in this scenario, the AI systems aren&#8217;t being actively deceptive. Instead, they follow human directions.<\/p>\n<p>The trouble is that due to competitive pressures, we may find ourselves narrowly incentivised to hand over more and more control to the AI systems themselves. 
Some human actors \u2014 corporations, governments, or other institutions \u2014 will initially gain significant power through AI deployment, using these systems to advance their interests and missions.<\/p>\n<p>Here&#8217;s how it might happen:<\/p>\n<ul>\n<li>First, economic and political leaders adopt AI systems that enhance their existing advantages. A financial firm deploys AI trading systems that outcompete human traders. Politicians use AI advisers to win elections and keep voters happy. These initial adopters don&#8217;t experience disempowerment \u2014 they experience success, which encourages their competitors to also adopt AI.<\/li>\n<li>As time moves on, humans have less control. Corporate boards might try to change direction against the advice of their AIs, only to find share prices plummeting because the AIs had a far better business strategy. Government officials may realise they don&#8217;t understand the AI systems running key services well enough to successfully change what those systems are doing.<\/li>\n<li>Only later, as AI systems become increasingly powerful, might there be signs that the systems are drifting out of alignment with human interests \u2014 not because they are trying to, but because they are advancing proxies of success that don&#8217;t quite line up with what&#8217;s actually good for people.<\/li>\n<li>In the cultural sphere, for example, media companies might deploy AI to create increasingly addictive content, reshaping human preferences. What begins as entertainment evolves into persuasion technology that can shape political outcomes, diminishing democratic control.<\/li>\n<\/ul>\n<p>Once humans start losing power in these ways, they may irreversibly have less and less ability to influence the future course of events. Eventually, their needs may not be addressed at all by the most powerful global actors. 
In the most extreme case, the species as we know it may not survive.<\/p>\n<p>Many other scenarios are possible.<\/p>\n<p>There are some versions of apparent &#8220;disempowerment&#8221; that could look like a utopia: humans flourishing and happy in a society expertly managed and fundamentally controlled by benevolent AI systems. Or maybe one day, humanity will decide it&#8217;s happy to cede the future to AI systems that we consider worthy descendants.<\/p>\n<p>But the risk is that humanity could &#8220;hand over&#8221; control <em>unintentionally<\/em> and in a way that few of us would endorse. We might be gradually replaced by AI systems with <a href=\"https:\/\/80000hours.org\/problem-profiles\/moral-status-digital-minds\/#3-we-dont-know-how-to-assess-the-moral-status-of-ai-systems\">no conscious experiences<\/a>, or the future may eventually be dominated by fierce <a href=\"https:\/\/philpapers.org\/archive\/ASSWHC.pdf\">Darwinian competition<\/a> between various digital agents. That could mean the future is sapped of most value \u2014 a catastrophic loss.<\/p>\n<p>We want to better understand these dynamics and risks to increase the prospects that the future goes well.<\/p>\n<h2><span id=\"how-pressing-is-this-issue\" class=\"toc-anchor\"><\/span>How pressing is this issue?<\/h2>\n<p>We feel very uncertain about how likely various gradual disempowerment scenarios are. It is difficult to disentangle the possibilities from related risks of power-seeking AI systems and questions about the <a href=\"https:\/\/80000hours.org\/problem-profiles\/moral-status-digital-minds\/\">moral status of digital minds<\/a>, which are also hard to be certain about.<\/p>\n<p>Because the area is steeped in uncertainty, it&#8217;s unclear what the best interventions are. 
At the very least, we think more work should be done to understand this problem and its potential solutions \u2014 and it&#8217;s likely some people should be focusing on it.<\/p>\n<h2><span id=\"what-are-the-arguments-against-this-being-a-pressing-problem\" class=\"toc-anchor\"><\/span>What are the arguments against this being a pressing problem?<\/h2>\n<p>There are several reasons you might not think this problem is very pressing:<\/p>\n<ul>\n<li>You might think it will be solved by default, because if we avoid other risks from AI, advanced AI systems will help us navigate these problems.<\/li>\n<li>You might think it&#8217;s very unlikely that AI systems, if not actively scheming against us, will end up contributing to an existential catastrophe for humanity \u2014 even if there are some problems of disempowerment. This might make you think this is an issue, but not nearly as big as other, more clearly existential risks from AI.<\/li>\n<li>You might think there just aren&#8217;t good solutions to this problem.<\/li>\n<li>You might think the gradual disempowerment of humanity wouldn&#8217;t constitute an existential catastrophe. For example, perhaps it&#8217;d be good or nearly as good as other futures.<\/li>\n<\/ul>\n<h2><span id=\"what-can-you-do-to-help\" class=\"toc-anchor\"><\/span>What can you do to help?<\/h2>\n<p>Given the relatively limited state of our knowledge on this topic, we&#8217;d guess the best way to help with this problem is likely to carry out more research to understand it better. 
(Read more about <a href=\"https:\/\/80000hours.org\/skills\/research\/\">research skills<\/a>.)<\/p>\n<p>Backgrounds in philosophy, history, economics, sociology, and political science \u2014 in addition to machine learning and AI \u2014 may be particularly relevant.<\/p>\n<p>You might want to work in <a href=\"https:\/\/80000hours.org\/career-reviews\/academic-research\/\">academia<\/a>, <a href=\"https:\/\/80000hours.org\/career-reviews\/think-tank-research\/\">think tanks<\/a>, or at nonprofit research institutions.<\/p>\n<p>At some point, if we have a better understanding of threat models and potential solutions, it will likely be important to have people working in <a href=\"https:\/\/80000hours.org\/career-reviews\/ai-policy-and-strategy\/\">AI governance and policy<\/a> who are focused on reducing these risks. So pursuing a career in AI governance, while building an understanding of this emerging area of research as well as the other major AI risks, may be a promising strategy for eventually helping to reduce the risk of gradual disempowerment.<\/p>\n<p>Kulveit et al. 
suggest some approaches to mitigating the risk of gradual disempowerment, including:<\/p>\n<ul>\n<li><strong>Measuring and monitoring<\/strong>\n<ul>\n<li>Develop metrics to track human and AI influence in economic, cultural, and political systems<\/li>\n<li>Make plans to identify warning signs of potential disempowerment<\/li>\n<\/ul>\n<\/li>\n<li><strong>Preventing excessive AI influence<\/strong>\n<ul>\n<li>Implement regulatory frameworks requiring human oversight<\/li>\n<li>Apply progressive taxation on AI-generated revenues<\/li>\n<li>Establish cultural norms supporting human agency<\/li>\n<\/ul>\n<\/li>\n<li><strong>Strengthening human control<\/strong>:\n<ul>\n<li>Create more robust democratic processes<\/li>\n<li>Ensure that AI systems remain understandable to humans<\/li>\n<li>Develop AI delegates that represent human interests while remaining competitive<\/li>\n<\/ul>\n<\/li>\n<li><strong>System-wide alignment<\/strong>\n<ul>\n<li>Research &#8220;ecosystem alignment&#8221; that maintains human values within complex socio-technical systems<\/li>\n<li>Develop frameworks for aligning civilisation-wide interactions between humans and AI<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3><span id=\"key-organisations-in-this-space\" class=\"toc-anchor\"><\/span>Key organisations in this space<\/h3>\n<p>Some organisations where you might be able to do relevant research include:<\/p>\n<ul>\n<li><a href=\"https:\/\/acsresearch.org\/about\">Alignment of Complex Systems Research Group (ACS)<\/a> \u2014 which led the work on the &#8220;Gradual Disempowerment&#8221; paper discussed above<\/li>\n<li><a href=\"https:\/\/www.forethought.org\/\">Forethought Research<\/a><\/li>\n<li><a href=\"https:\/\/www.openphilanthropy.org\/\">Coefficient Giving<\/a><\/li>\n<li><a href=\"https:\/\/rethinkpriorities.org\/\">Rethink Priorities<\/a><\/li>\n<\/ul>\n<p>You can also explore roles at other <a 
href=\"https:\/\/jobs.80000hours.org\/organisations?refinementList%5Bproblem_areas%5D%5B0%5D=AI%20safety%20%26%20policy\">organisations that work on AI safety and policy<\/a>.<\/p>\n<p><strong>Our job board features opportunities in <a href=\"\/job-board\/ai-safety-policy\/\">AI safety and policy<\/a><\/strong>:<\/p>\n<p><script>\n    function getLocationString(arr) {\n      if (arr.length <= 3) { \n        return arr.join(\"<br \/>\");\n      }\n      return arr.slice(0, 3).join(\"<br \/>\") + \"...\";\n    }\n  <\/script><script>\n    function getUniqueCompanyJobs(jobs, limit) {\n      const uniqueCompanies = new Set();\n      const uniqueJobs = [];\n      const additionalJobs = [];\n      for (const job of jobs) {\n          const company = job.company_name;\n          if (!uniqueCompanies.has(company)) {\n              uniqueCompanies.add(company);\n              uniqueJobs.push(job);\n          } else {\n              additionalJobs.push(job);\n          }\n      }\n      return uniqueJobs.concat(additionalJobs).slice(0, limit);\n    }\n  <\/script><script>\n    window.addEventListener(\"load\", function() {\n        const container = document.querySelector(\"#vacancies-1\");\n        if (container) {\n          const searchClient = algoliasearch(\"W6KM1UDIB3\", \"d1d7f2c8696e7b36837d5ed337c4a319\");\n          searchClient.initIndex(\"jobs_prod\"); \n          const search = instantsearch({\n            indexName: \"jobs_prod\",\n            searchClient,\n          });\n          search.addWidget(\n            instantsearch.widgets.configure({\n              facetFilters: [[\"tags_area:AI safety & policy\"]],\n              hitsPerPage: 6,\n            })\n          );\n          search.addWidget({\n            render(options) {\n              const results = getUniqueCompanyJobs(options.results.hits, 3);\n              results.forEach(item => {\n                item.post_pk = DOMPurify.sanitize(item.post_pk);\n                item.company.logo_url = 
DOMPurify.sanitize(item.company.logo_url);\n                item.title = DOMPurify.sanitize(item.title);\n                item.company.name = DOMPurify.sanitize(item.company.name);\n                item.card_locations = DOMPurify.sanitize(getLocationString(item.card_locations));\n                item.posted_at_relative = DOMPurify.sanitize(item.posted_at_relative);\n              });\n              container.innerHTML = results.map(item => {\n                return `<\/p>\n<li class=\"vacancy border\">\n                    <a href=\"https:\/\/jobs.80000hours.org\/?jobPk=${item.post_pk}\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"vacancy-summary pt-2 pb-2\"><\/p>\n<div class=\"col-12\">\n<div class=\"row\" style=\"position: relative;\">\n<div class=\"col-sm-8\" style=\"overflow: hidden;\">\n<div class=\"vacancy__org-logo\">\n                              <img decoding=\"async\" src=\"${item.company.logo_url}\">\n                            <\/div>\n<div class=\"vacancy__job-title-and-org-name\">\n<h5 class=\"vacancy__job-title tw--line-clamp-2\">${item.title}<\/h5>\n<p class=\"vacancy__org-name tw--line-clamp-2\">${item.company.name}<\/p><\/div><\/div>\n<div class=\"col-sm-4 text-right hidden-xs vacancy__location-and-date-listed\">\n<p class=\"pr-1\">${item.card_locations}<br \/>${item.posted_at_relative}<\/p><\/div><\/div><\/div>\n<p>                    <\/a>\n                  <\/li>\n<p>`;\n              }).join(\"\");\n            }\n          });\n          search.start();\n        }\n      });\n    <\/script><\/p>\n<ul id=\"vacancies-1\" class=\"!tw--p-0 no-visited-styling disable-url-preview-on-hover-for-descendants\"><\/ul>\n<p><a href=https:\/\/jobs.80000hours.org\/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy class=\"btn btn-primary\" target=\"_blank\">View all opportunities<\/a><\/p>\n<h2><span id=\"learn-more\" class=\"toc-anchor\"><\/span>Learn more<\/h2>\n<ul>\n<li><a href=\"https:\/\/gradual-disempowerment.ai\/\">Gradual 
Disempowerment: Systemic Existential Risks from Incremental AI Development<\/a> by Jan Kulveit et al.<\/li>\n<li>Our interview with David Duvenaud on <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/david-duvenaud-gradual-disempowerment\/\">why &#8216;aligned AI&#8217; could still kill democracy<\/a><\/li>\n<li>Our interview with Rose Hadshar on <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/rose-hadshar-ai-extreme-power-concentration\/\">why automating human labour will break our political system<\/a><\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/2401.07836\">Two Types of AI Existential Risk: Decisive and Accumulative<\/a> by Atoosa Kasirzadeh<\/li>\n<li><a href=\"https:\/\/www.lesswrong.com\/posts\/HBxe6wdjxK239zajf\/what-failure-looks-like\">What failure looks like<\/a> by Paul Christiano<\/li>\n<li><a href=\"https:\/\/intelligence-curse.ai\/\">The Intelligence Curse<\/a> by Luke Drago and Rudolf Laine<\/li>\n<li><a href=\"https:\/\/philpapers.org\/archive\/ASSWHC.pdf\">Will Humanity Choose Its Future?<\/a> by Guive Assadi<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/2303.16200\">Natural Selection Favors AIs over Humans<\/a> by Dan Hendrycks<\/li>\n<li><a href=\"https:\/\/arxiv.org\/pdf\/2306.06924\">TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI<\/a> by Andrew Critch and Stuart Russell<\/li>\n<li>Our interview with Carl Shulman on <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/carl-shulman-economy-agi\/\">the economy and national security after AGI<\/a>, which talks about why humanity seems likely to hand over more control to AI systems<\/li>\n<li>Our interview with Will MacAskill on <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/will-macaskill-century-in-a-decade-navigating-intelligence-explosion\/\">AI causing a &#8220;century in a decade&#8221; \u2014 and how we&#8217;re completely unprepared<\/a><\/li>\n<li>Our interview with Tom Davidson on <a 
href=\"https:\/\/80000hours.org\/podcast\/episodes\/tom-davidson-ai-enabled-human-power-grabs\/\">how AI-enabled coups could allow a tiny group to seize power<\/a><\/li>\n<li>Our article on <a href=\"https:\/\/80000hours.org\/articles\/future-generations\/\">longtermism<\/a><\/li>\n<\/ul>\n<p>We&#8217;ve also provided a more general argument for being concerned about AI&#8217;s effects on society <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/?v=1\">here<\/a>.<\/p>\n<div class=\"tw--mt-6 tw--p-3 tw--pt-2 tw--bg-gray-lighter tw--rounded-md \">\n<h3 class=\"no-toc\">\t\t<a class=\"no-visited-styling tw--text-off-black hover:tw--text-off-black hover:tw--no-underline focus:tw--text-off-black\" href=\"https:\/\/80000hours.org\/problem-profiles\/\">\t\t\t<small>Read next:&nbsp;<\/small>\t\t\tExplore other pressing world problems\t\t<\/a>\t<\/h3>\n<div class=\"tw--grid xs:tw--grid-flow-col tw--gap-3\">\n<div class=\"xs:tw--order-last tw--pt-1\">\t\t\t<a href=\"https:\/\/80000hours.org\/problem-profiles\/\">\t\t\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/10\/sea-ocean-sky-night-cosmos-view-826635-pxhere.com_-720x448.jpg\" alt=\"Decorative post preview\" width=\"720\" height=\"448\">\t\t\t<\/a>\t\t<\/div>\n<div class=\"\">\n<div class=\"tw--pb-3\">\n<p>Want to learn more about global issues we think are especially pressing? 
See our list of issues that are large in scale, solvable, and neglected, according to our research.<\/p>\n<\/div>\n<div class=\"\">\t\t\t\t<a href=\"https:\/\/80000hours.org\/problem-profiles\/\" class=\"btn btn-primary\">Continue &rarr;<\/a>\t\t\t<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"well visible-if-not-newsletter-subscriber margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h3 class=\"no-toc\">Plus, join our newsletter and we&#8217;ll mail you a free book<\/h3>\n<p>Join our newsletter and we&#8217;ll send you a free copy of <em>The Precipice<\/em> \u2014 a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. <a href=\"https:\/\/80000hours.org\/free-book\/#giveaway-terms\">T&#038;Cs here<\/a>.<\/p>\n<form data-80k-object-id=\"\" data-80k-form-action=\"newsletter__subscribe\" action=\"\/\" method=\"post\" class=\"form-newsletter-signup form-newsletter-signup-step-1 margin-bottom-smaller\">\n<div class=\"mc-field-group input-group compact-input-group \"> <input type=\"email\" value=\"\" name=\"email\" required class=\"form-control email\" placeholder=\"Email address\" id=\"input_email\" pattern=\"[a-zA-Z0-9._+\\-]+@(?!(gmial\\.com|gnail\\.com|gmai\\.com|gmal\\.com|gmali\\.com|gamil\\.com|gmail\\.co|gmail\\.con|gmail\\.om|yahooo\\.com|yaho\\.com|outlok\\.com|outloo\\.com|hotmial\\.com|hotmail\\.con|hmail\\.com|yopmail\\.com|discardmail\\.com)$)[a-zA-Z0-9.\\-]+\\.[a-zA-Z]{2,}\" title=\"Please enter a valid email address (e.g. 
user@example.com)\" > <span class=\"submit input-group-btn input-group-btn-right\"> <input type=\"submit\" id=\"mc-embedded-subscribe\" value=\"GET THE BOOK\" class=\"btn btn-primary \" \/> <\/span> <\/div>\n<div> <input name=\"_eightyk_action\" value=\"mailchimp_add_subscriber\" type=\"hidden\"> <input name=\"redirect_path_after_step_2\" value=\"\/newsletter\/welcome\/\" type=\"hidden\"> <\/div>\n<div style=\"position: absolute; left: -5000px;\"> <input type=\"text\" name=\"b_abc12f58bbe8075560abdc5b7_43bc1ae55c\" tabindex=\"-1\" value=\"\"> <\/div>\n<\/form>\n<\/div>\n","protected":false},"author":435,"featured_media":89317,"parent":0,"menu_order":0,"template":"","meta":{"_acf_changed":false,"footnotes":"[fn roughly]There are many cases where societal systems produce outcomes that are clearly bad for many humans, such as carrying out wars or causing harmful pollution. But overall, humanity has so far been able to greatly expand its population, become richer, and extend the average life span because societal systems tend to serve human interests on net. 
[\/fn]\r\n\r\n[fn OP]Open Philanthropy is the largest funder of 80,000 Hours.[\/fn]"},"categories":[1353,368,1391,291],"class_list":["post-89316","problem_profile","type-problem_profile","status-publish","has-post-thumbnail","hentry","category-ai","category-existential-risk","category-research-in-relevant-areas","category-world-problems"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Gradual disempowerment | 80,000 Hours<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/80000hours.org\/problem-profiles\/gradual-disempowerment\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Gradual disempowerment | 80,000 Hours\" \/>\n<meta property=\"og:description\" content=\"The proliferation of advanced AI systems may lead to the gradual disempowerment of humanity, even if efforts to prevent them from becoming power-seeking or scheming are successful. Humanity may be incentivised to hand over increasing amounts of control to AIs, giving them power over the economy, politics, culture, and more. Over time, humanity&#039;s interests may be sidelined and our control over the future undermined, potentially constituting an existential catastrophe. 
There&#039;s disagreement over how serious a problem this is and how it relates to [other concerns about AI alignment](https:\/\/80000hours.o\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/problem-profiles\/gradual-disempowerment\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:modified_time\" content=\"2026-03-18T17:52:39+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/03\/\u05e2\u05e5_\u05e2\u05dc_\u05d0\u05d9_\u05de\u05dc\u05d7_\u05d1\u05d0\u05de\u05e6\u05e2_\u05d9\u05dd_\u05d4\u05de\u05dc\u05d7.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2100\" \/>\n\t<meta property=\"og:image:height\" content=\"1560\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/\"},\"author\":{\"name\":\"Cody Fenwick\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/75ac20bab88e70f659caa92bc64fd2cc\"},\"headline\":\"Gradual disempowerment\",\"datePublished\":\"2025-04-04T01:45:05+00:00\",\"dateModified\":\"2026-03-18T17:52:39+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/\"},\"wordCount\":1656,\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/\u05e2\u05e5_\u05e2\u05dc_\u05d0\u05d9_\u05de\u05dc\u05d7_\u05d1\u05d0\u05de\u05e6\u05e2_\u05d9\u05dd_\u05d4\u05de\u05dc\u05d7.jpg\",\"articleSection\":[\"AI\",\"Existential risk\",\"Research in relevant areas\",\"World problems\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/\",\"name\":\"Gradual disempowerment | 80,000 
Hours\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/\u05e2\u05e5_\u05e2\u05dc_\u05d0\u05d9_\u05de\u05dc\u05d7_\u05d1\u05d0\u05de\u05e6\u05e2_\u05d9\u05dd_\u05d4\u05de\u05dc\u05d7.jpg\",\"datePublished\":\"2025-04-04T01:45:05+00:00\",\"dateModified\":\"2026-03-18T17:52:39+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/gradual-disempowerment\\\/#primaryimage\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/\u05e2\u05e5_\u05e2\u05dc_\u05d0\u05d9_\u05de\u05dc\u05d7_\u05d1\u05d0\u05de\u05e6\u05e2_\u05d9\u05dd_\u05d4\u05de\u05dc\u05d7.jpg\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/\u05e2\u05e5_\u05e2\u05dc_\u05d0\u05d9_\u05de\u05dc\u05d7_\u05d1\u05d0\u05de\u05e6\u05e2_\u05d9\u05dd_\u05d4\u05de\u05dc\u05d7.jpg\",\"width\":2100,\"height\":1560,\"caption\":\"By [Eran Raznik](https:\\\/\\\/es.wikipedia.org\\\/wiki\\\/Mar_Muerto#\\\/media\\\/Archivo:%D7%A2%D7%A5_%D7%A2%D7%9C_%D7%90%D7%99_%D7%9E%D7%9C%D7%97_%D7%91%D7%90%D7%9E%D7%A6%D7%A2_%D7%99%D7%9D_%D7%94%D7%9E%D7%9C%D7%97.jpg) CC BY-SA 