{"id":94492,"date":"2026-02-24T06:48:11","date_gmt":"2026-02-24T06:48:11","guid":{"rendered":"https:\/\/80000hours.org\/?post_type=problem_profile&#038;p=94492"},"modified":"2026-04-22T16:56:25","modified_gmt":"2026-04-22T16:56:25","slug":"artificial-intelligence","status":"publish","type":"problem_profile","link":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/","title":{"rendered":"Advanced AI poses the world&#8217;s most pressing problems. Here&#8217;s&nbsp;why."},"content":{"rendered":"<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#why-we-think-advanced-ai-poses-the-worlds-most-pressing-problems\"><span class=\"toc_number toc_depth_1\">1<\/span> Why we think advanced AI poses the world&#8217;s most pressing problems<\/a><ul><li><a href=\"#1-ai-could-replace-human-labour-in-the-most-economically-valuable-fields\"><span class=\"toc_number toc_depth_2\">1.1<\/span> 1. AI could replace human labour in the most economically valuable fields<\/a><\/li><li><a href=\"#2-replacing-this-much-human-labour-could-trigger-the-next-radical-transformation-of-society\"><span class=\"toc_number toc_depth_2\">1.2<\/span> 2. Replacing this much human labour could trigger the next radical transformation of society<\/a><\/li><li><a href=\"#3-this-transformation-could-be-extremely-rapid-and-dramatic\"><span class=\"toc_number toc_depth_2\">1.3<\/span> 3. This transformation could be extremely rapid and dramatic<\/a><\/li><li><a href=\"#4-a-rapid-ai-driven-transformation-would-raise-a-range-of-major-challenges-including-existential-risks\"><span class=\"toc_number toc_depth_2\">1.4<\/span> 4. A rapid, AI-driven transformation would raise a range of major challenges, including existential risks<\/a><\/li><li><a href=\"#5-work-on-these-problems-is-tractable-but-neglected\"><span class=\"toc_number toc_depth_2\">1.5<\/span> 5. 
Work on these problems is tractable but neglected<\/a><\/li><\/ul><\/li><li><a href=\"#objections-and-replies\"><span class=\"toc_number toc_depth_1\">2<\/span> Objections and replies<\/a><\/li><li><a href=\"#whats-next\"><span class=\"toc_number toc_depth_1\">3<\/span> What&#8217;s next?<\/a><ul><li><a href=\"#want-one-on-one-advice-on-pursuing-this-path\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Want one-on-one advice on pursuing this path?<\/a><\/li><\/ul><\/li><li><a href=\"#learn-more\"><span class=\"toc_number toc_depth_1\">4<\/span> Learn more<\/a><\/li><li><a href=\"#acknowledgements\"><span class=\"toc_number toc_depth_1\">5<\/span> Acknowledgements<\/a><\/li><\/ul><\/div>\n<h2><span id=\"why-we-think-advanced-ai-poses-the-worlds-most-pressing-problems\" class=\"toc-anchor\"><\/span>Why we think advanced AI poses the world&#8217;s most pressing problems<\/h2>\n<p>For over a decade, we&#8217;ve looked into the biggest problems in the world and approaches to solving them. We think the challenges raised by advanced AI are the most pressing problems facing humanity today \u2014 given their scale and the promising but neglected opportunities to address them.<\/p>\n<p>Our concern about AI risks isn&#8217;t a reaction to the surge of interest in AI since the 2022 release of ChatGPT. We&#8217;ve argued that AI could pose catastrophic risks since 2016, and others raised related concerns long before then (<a href=\"https:\/\/gwern.net\/doc\/ai\/1951-turing.pdf\">1<\/a>, <a href=\"https:\/\/web.archive.org\/web\/20260224060832\/https:\/\/www.historyofinformation.com\/detail.php?id=2142\">2<\/a>, <a href=\"https:\/\/intelligence.org\/files\/CFAI.pdf\">3<\/a>, <a href=\"https:\/\/web.archive.org\/web\/20250121235337\/https:\/\/www.amazon.co.uk\/Singularity-Near-Raymond-Kurzweil\/dp\/0715635611\">4<\/a>).<\/p>\n<p>In short, we think it&#8217;s plausible advanced AI could radically transform the world. 
This could pose extreme challenges for humanity, and it presents a potentially unique opportunity for having a positive impact.<\/p>\n<p>We go through the specific challenges we think are most pressing in our <a href=\"\/problem-profiles\/\">problem profiles<\/a>. This article explains why we think advanced AI gives rise to such important issues <em>in general<\/em>.<\/p>\n<p>There are a lot of arguments you could make here \u2014 like the argument that advanced AI will constitute a <a href=\"https:\/\/web.archive.org\/web\/20260115164944\/\/wiki.aiimpacts.org\/arguments_for_ai_risk\/list_of_arguments_that_ai_poses_an_xrisk\/second_species_argument_for_ai_xrisk\">&#8220;second species&#8221;<\/a> or that AI will make the 21st century <a href=\"https:\/\/web.archive.org\/web\/20260114050815\/https:\/\/www.cold-takes.com\/most-important-century\/\">&#8220;the most important century&#8221;<\/a> for humanity.<\/p>\n<p>But here&#8217;s the version of the argument we find most compelling:<\/p>\n<ol>\n<li>AI could replace human labour in some of the most economically valuable fields.<\/li>\n<li>Replacing human labour in these fields could trigger the next radical transformation of society.<\/li>\n<li>This transformation could be extremely rapid and dramatic, especially if there are fast feedback loops in AI R&amp;D.<\/li>\n<li>A rapid, AI-driven transformation would raise a range of major challenges, including existential risks.<\/li>\n<li>Work on these challenges is <a href=\"\/career-guide\/most-pressing-problems\/\">tractable but neglected<\/a>.<\/li>\n<\/ol>\n<p>We&#8217;ll argue for each of these claims below.<\/p>\n<p>To be clear, not <em>all<\/em> existential-scale issues with advanced AI need to route through this argument. Even without automating lots of human labour, AI systems could still cause enormous harm. For example, malicious actors could use AI to design novel bioweapons or carry out sophisticated cyberattacks. 
That in itself could be enough reason to work on specific AI risks.<\/p>\n<p>But the story we tell below \u2014 with the world rapidly being transformed through widespread automation \u2014 is the most wide-ranging case we know of for prioritising AI risks more generally. It explains why a variety of unprecedented challenges could emerge around the same time, and why we might not have much time to prepare.<\/p>\n<h3><span id=\"1-ai-could-replace-human-labour-in-the-most-economically-valuable-fields\" class=\"toc-anchor\"><\/span>1. AI could replace human labour in the most economically valuable fields<\/h3>\n<p>Many technologies \u2014 like cryptocurrency, NFTs, the &#8216;internet of things,&#8217; fusion, and quantum computing \u2014 have been overhyped. People often have high expectations of how much a new innovation will change the world, and reality sometimes falls short.<\/p>\n<p>But we think AI is going to be different.<\/p>\n<p>That&#8217;s because, unlike other technologies, AI has the potential to compete with \u2014 and even go beyond \u2014 human intelligence. And that means it could replace and reproduce the main driver of progress in our history: flexible human labour.<\/p>\n<p>Some technologies, like ATMs, mimic extremely limited forms of human labour. Others, like steam engines and computers, also amplify what humans can do. But <strong>the idea behind artificial intelligence is that it&#8217;ll be able to do <em>almost any<\/em> work humans can do \u2014 and do so mostly autonomously<\/strong>.<\/p>\n<p>ATMs didn&#8217;t make all the bank tellers unemployed because there were other tasks the humans could easily shift into. But imagine an ATM that could not only hand out cash, but also manage the bank&#8217;s IT systems, contribute to company strategy, and give customers tailored financial advice. Imagine it could do this mostly without our help, and more cheaply than human workers would. 
If that were the case, it&#8217;s not clear why the bank would keep humans employed <em>at all<\/em>.<\/p>\n<p>Now suppose the same system that did all this for the bank could also do equivalent work for tech companies, scientific research labs, consultancy firms, think tanks, <em>The New York Times<\/em>, the US government, and so on.<\/p>\n<p>That&#8217;s the prospect raised by AI.<\/p>\n<p>We&#8217;re already seeing glimpses of AI&#8217;s increasingly general ability to do human work. AI systems today can do things that just a decade ago would&#8217;ve been astonishing.<\/p>\n<p>For example, consider the rapid progress language models have made on the <a href=\"https:\/\/epoch.ai\/benchmarks\/gpqa-diamond\">GPQA<\/a> benchmark, which asks challenging, PhD-level questions about <strong>chemistry, physics, and biology<\/strong>. In mid-2023, frontier AI performance was just slightly better than random guesswork on these questions. But since early 2025, many models have been outperforming human experts \u2014 and sometimes by a large margin.<\/p>\n<figure id=\"attachment_94499\" aria-describedby=\"caption-attachment-94499\" style=\"width: 700px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/benchmarks-1024x576.png\" alt=\"A chart from Epoch AI showing different AI agents&#039; performances on a set of Ph.D-level science questions, with performance ranging from 13% accuracy to 93% accuracy. 
Over time, performance has increased, with most agents released since January 2025 outperforming humans (who sit at an average 70% accuracy).\" class=\"size-large wp-image-94499\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/benchmarks-1024x576.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/benchmarks-300x169.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/benchmarks-768x432.png 768w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/benchmarks-1536x864.png 1536w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/benchmarks.png 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-94499\" class=\"wp-caption-text\">&#8220;AI Performance on a set of Ph.D.-level science questions&#8221; from <a href=\"https:\/\/epoch.ai\/benchmarks\/gpqa-diamond\">Epoch AI<\/a><\/figcaption><\/figure>\n<p>They&#8217;ve also <a href=\"https:\/\/epoch.ai\/benchmarks\/swe-bench-verified\">shown impressive improvement<\/a> on <strong>software engineering tasks<\/strong>. For example, Anthropic&#8217;s agentic coding tool Claude Code <a href=\"https:\/\/www.axios.com\/2026\/01\/07\/anthropics-claude-code-vibe-coding\">enables users to build applications by describing what they want<\/a> \u2014 even if they have no coding experience.<\/p>\n<p>A senior engineer at Google reported that Claude Code <a href=\"https:\/\/the-decoder.com\/google-engineer-says-claude-code-built-in-one-hour-what-her-team-spent-a-year-on\/\">took one hour to generate a prototype of a system her team had spent a year exploring approaches to building<\/a>. 
And Anthropic built its &#8216;Cowork&#8217; product (a more user-friendly version of Claude Code for non-developers) in under two weeks by <a href=\"https:\/\/www.youtube.com\/watch?v=FMfoR1h8axc\">getting Claude Code to write most of the code<\/a>.<\/p>\n<p>Current AI systems can also:<\/p>\n<ul>\n<li><strong>Predict complex biomolecular structures and interactions<\/strong>: Google DeepMind&#8217;s <a href=\"https:\/\/blog.google\/technology\/ai\/google-deepmind-isomorphic-alphafold-3-ai-model\/\">AlphaFold 3<\/a> \u2014 a successor to a <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-03214-7\">Nobel Prize-winning AI system<\/a> \u2014 can predict how proteins interact with DNA, RNA, and other structures at the molecular level.<\/li>\n<li><strong>Solve hard maths problems competitively<\/strong>: Multiple AI models have reportedly <a href=\"https:\/\/www.newscientist.com\/article\/2489248-deepmind-and-openai-claim-gold-in-international-mathematical-olympiad\/\">achieved gold medal performance in the International Mathematical Olympiad<\/a>. Separately, when 30 top mathematicians were challenged to devise problems they believed AI wouldn&#8217;t be able to solve, OpenAI&#8217;s o4 mini <a href=\"https:\/\/www.scientificamerican.com\/article\/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai\/\">thwarted many of their best attempts<\/a> \u2014 even solving one PhD-level question in about ten minutes.<\/li>\n<li><strong>Improve robotics<\/strong>: Many leading robotics models are now AI-driven. For example, Boston Dynamics is <a href=\"https:\/\/www.wired.com\/story\/google-boston-dynamics-gemini-powered-robot-atlas\/\">enhancing its Atlas robots with Google DeepMind AI<\/a> to help them better understand and manipulate their environments. These robots will <a href=\"https:\/\/bostondynamics.com\/blog\/boston-dynamics-unveils-new-atlas-robot-to-revolutionize-industry\/\">be deployed for industrial work at Hyundai factories<\/a>. 
<\/li>\n<li><strong>Carry out extended tasks independently on your computer<\/strong>: Unlike earlier models that could only generate text, new &#8216;agentic&#8217; AIs like <a href=\"https:\/\/claude.com\/product\/claude-code\">Claude Code<\/a> and <a href=\"https:\/\/openai.com\/index\/introducing-gpt-5-2-codex\">OpenAI&#8217;s Codex<\/a> can now use many tools on your computer, execute code, search the web, and chain together multiple steps \u2014 allowing them to complete extended, real-world tasks with far less human involvement. <\/li>\n<li><strong>Help with AI development itself<\/strong>: There&#8217;s also evidence that <a href=\"https:\/\/metr.org\/AI_R_D_Evaluation_Report.pdf\">AI systems can outperform humans in AI R&amp;D tasks<\/a>, at least when limited to a two-hour time window.<\/li>\n<li>And much more.<\/li>\n<\/ul>\n<p>There are still plenty of things AI systems can&#8217;t reliably do \u2014 especially work that takes days to complete \u2014 but the list of things these systems can&#8217;t do is diminishing, and <a href=\"\/ai\/guide\/when-will-agi-arrive\/#i-whats-driven-recent-ai-progress-and-will-it-continue\">the pace of AI progress has been impressive<\/a>.<\/p>\n<p>Even with the range of capabilities they have <em>today<\/em>, it seems clear that AI systems could have considerable effects on society. At the very least, automating the specific tasks that AI is already good at \u2014 for example, in software engineering, biochemistry, and robotics \u2014 will speed up some areas of scientific progress and contribute to economic growth.<\/p>\n<p>But we expect that AIs will become much more widely capable than they are today, and have far more transformative effects. 
A common adage in the industry is <a href=\"https:\/\/www.linkedin.com\/posts\/emollick_todays-ai-is-the-worst-ai-you-will-ever-activity-7106305750431322112-Xr7n\/\">&#8220;today&#8217;s AI is the worst AI you will ever use.&#8221;<\/a><\/p>\n<p>In fact, <a href=\"https:\/\/aiimpacts.org\/wp-content\/uploads\/2023\/04\/Thousands_of_AI_authors_on_the_future_of_AI.pdf\">many people in the field<\/a> think that AI will get good enough to do essentially <em>anything<\/em> that humans can do \u2014 and more.<\/p>\n<p>One milestone here would be developing <strong>artificial general intelligence (AGI)<\/strong>. People use this term in many different ways, but we&#8217;ll use it to describe AI systems that can compete with humans on almost all cognitive tasks, or at least the most <em>economically valuable<\/em> ones \u2014 think advanced scientific research, designing new technologies and products, running businesses, consulting, and so on. This is the kind of system leading AI companies are <a href=\"https:\/\/openai.com\/our-structure\/\">actively trying to build<\/a>, and they&#8217;re funnelling billions of dollars into being the first to get there.<\/p>\n<p>Looking at recent trends in AI development, we think it&#8217;s surprisingly plausible (though far from guaranteed) that we&#8217;ll get this sort of AGI <a href=\"\/ai\/guide\/when-will-agi-arrive\/\">within the next decade<\/a>.<\/p>\n<p>But it probably won&#8217;t stop there. There&#8217;s no reason to think that humans represent the ceiling of mental ability \u2014 so eventually, AI could <em>greatly exceed<\/em> human performance on many (if not all) cognitive tasks. Plausibly, AI systems could even do work that&#8217;s as far beyond human abilities as calculus is beyond chimpanzee abilities.<\/p>\n<p>It also might not take long before society makes giant advances in robotics. 
Although today&#8217;s robots are very rudimentary, <a href=\"https:\/\/www.physicalintelligence.company\/blog\/pi0\">they&#8217;re improving<\/a>. And as our AIs get cognitively smarter, they&#8217;ll also get better at both controlling robotic limbs and designing them. This means AI systems might quickly become able to outperform humans on any <em>physical<\/em> task as well.<\/p>\n<p>Over the next few sections, we&#8217;ll explain how the advanced AIs of the future could transform society and present serious risks.<\/p>\n<p>Our argument focuses on the prospect that humanity develops AGI or something similar. This isn&#8217;t the only important milestone (see below). But we think that if AI can match human abilities at the cognitive tasks that most drive innovation and economic production, that&#8217;s likely enough to enable the rapid progress we describe in the following sections. And if AI becomes even more impressive than this \u2014 which we think is probable \u2014 the effects could be even more dramatic.<\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h4>Could less advanced AI systems still pose existential risks?<\/h4>\n<p>In short: yes, we think so.<\/p>\n<p>In this article, we&#8217;re focusing on AI systems that are very skilled at a wide range of tasks. That&#8217;s because we think systems like this pose the <em>highest<\/em> and <em>most obvious<\/em> chance of transforming society and throwing up many extremely serious risks.<\/p>\n<p>But we don&#8217;t think this is the <em>only<\/em> milestone in AI capabilities progress worth worrying about. For example:<\/p>\n<ul>\n<li>Even narrowly capable AI tools could be used to cause serious harm. An AI that excels at biotechnology research could make it easier for people to develop dangerous pathogens \u2014 regardless of whether it can also trade stocks or carry out business strategies. 
An AI that is only useful for launching powerful cyberattacks could still shift the global balance of power. And so on. <\/li>\n<li>We might face rapid, destabilising changes to society <em>in the lead-up<\/em> to developing AGI, not just <em>after it arrives<\/em>. As AI gradually automates more and more tasks, we could see increasing levels of disruption across the economy. As we&#8217;ll discuss <a href=\"#3-this-transformation-could-be-extremely-rapid-and-dramatic\">later on<\/a>, AI systems starting to automate AI R&#038;D itself could be especially disruptive, introducing dramatic feedback loops in AI progress. <\/li>\n<\/ul>\n<p>This could be enough reason to prioritise working on AI risks now, even if you don&#8217;t think we&#8217;ll get AGI any time soon.<\/p>\n<\/div>\n<h3><span id=\"2-replacing-this-much-human-labour-could-trigger-the-next-radical-transformation-of-society\" class=\"toc-anchor\"><\/span>2. Replacing this much human labour could trigger the next radical transformation of society<\/h3>\n<p>So what would it mean if AI systems could outperform humans on such a wide range of tasks?<\/p>\n<p>The first thing people often think of here is widespread unemployment. This is a serious possibility, and would have severe consequences for society. But we think focusing on it is actually missing an even bigger story.<\/p>\n<p>A world with machines that can replace this much human labour would look so dramatically different that it can be hard to imagine.<\/p>\n<p>For some sense of comparison, think of how different the world is today to how it was for our ancestors 200 years ago, 2,000 years ago, or 20,000 years ago. The worlds before electricity, or the printing press, or agriculture literally looked quite different, and they had entirely different ways of life.<\/p>\n<p>With each of these major breakthroughs in technology, the world has been transformed.<\/p>\n<p>Take the first Agricultural Revolution. 
Before agriculture, humans were mostly hunter-gatherers and often lived in small bands. The development of farming technologies like ploughs allowed us to produce far more food per person, leading to the first cities. And, to an increasing extent, some people could specialise in tasks other than finding food \u2014 which allowed humans to invent metalwork, writing, and early governance systems.<\/p>\n<p>The Industrial Revolution followed a similar pattern. The arrival of technologies like the steam engine <a href=\"https:\/\/www.britannica.com\/money\/productivity\/Historical-trends\">dramatically increased productivity<\/a> \u2014 and sparked innovations in manufacturing and communication. Once again, this led to radical changes in how humans live: goods that were once luxury items became available to ordinary people, railways connected distant cities, and huge swathes of people shifted from rural to urban life.<\/p>\n<p>What&#8217;s going on here?<\/p>\n<p>Each period of transformation in history has its own complex story, and there are competing theories about what drove them. But one popular explanation says we keep seeing the same rough pattern: powerful new technology both enables us to sustain larger populations and lets people do more with the same bodies and minds. This means more human labour and greater productivity \u2014 which has compounding effects, as it leads to a wave of even <em>further<\/em> innovation.<\/p>\n<p>Since innovation often feeds economic growth, humanity has also become much wealthier in this process. 
In fact, since the late stages of the Industrial Revolution, we&#8217;ve seen roughly exponential growth in GDP.<\/p>\n<figure id=\"attachment_94596\" aria-describedby=\"caption-attachment-94596\" style=\"width: 700px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"723\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/gdp-world-regions-stacked-area-2-1-1024x723.png\" alt=\"A chart from Our World in Data showing GDP by world region from 1820\u20132022, with exponential global GDP growth starting around 1950 and skyrocketing in the 2000s.\" class=\"size-large wp-image-94596\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/gdp-world-regions-stacked-area-2-1-1024x723.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/gdp-world-regions-stacked-area-2-1-300x212.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/gdp-world-regions-stacked-area-2-1-768x542.png 768w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/gdp-world-regions-stacked-area-2-1-1536x1084.png 1536w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/gdp-world-regions-stacked-area-2-1-2048x1446.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-94596\" class=\"wp-caption-text\">Bolt and van Zanden \u2013 Maddison Project Database 2023 \u2013 <a href=\"https:\/\/ourworldindata.org\/grapher\/gdp-world-regions-stacked-area?tab=chart\">Learn more about this data<\/a><\/figcaption><\/figure>\n<p>A common thread in all these stories is that it seems growth has always been <em>reliant on human labour<\/em> \u2014 society has only been able to progress as fast as humans can produce and implement new ideas (that is, new theories, inventions, ways of working, and so on).<\/p>\n<p>But we&#8217;re now on the brink of a new breakthrough.<\/p>\n<p>If future AIs can replace human workers in the most economically valuable fields, we&#8217;ll 
no longer be so reliant on human labour to sustain these cycles of compounding innovation and wealth. Instead, AI could become the primary driver of progress.<\/p>\n<p>We think this could lead to another transformation of society.<\/p>\n<p>Like other technological breakthroughs, it could enable society to produce far more ideas and far more economic output. But unlike previous technologies, AIs could actually <em>take over<\/em> the processes that most drive innovation and economic production (including the process of designing better AIs). And as we&#8217;ll discuss next, these &#8216;AI workers&#8217; could also have huge advantages over their human counterparts.<\/p>\n<p>This could mean the transformation brought about by AI is <em>extremely<\/em> rapid, and more dramatic than anything we&#8217;ve seen before.<\/p>\n<h3><span id=\"3-this-transformation-could-be-extremely-rapid-and-dramatic\" class=\"toc-anchor\"><\/span>3. This transformation could be extremely rapid and dramatic<\/h3>\n<p>So what could happen as AIs automate more and more of the economy?<\/p>\n<p>At the very least, we expect to see the total amount of labour quickly increase \u2014 since, unlike humans, AI systems <strong>can be easily copied at scale<\/strong>, given enough hardware.<\/p>\n<p>Let&#8217;s say we build an AI that could replace a human engineer. Estimates suggest huge uncertainty here, but running anywhere between thousands and <a href=\"\/podcast\/episodes\/tom-davidson-how-quickly-ai-could-transform-the-world\/#the-interview-begins-000453\">hundreds of millions<\/a> of copies of this AI at once may be feasible, depending on the circumstances.<\/p>\n<p>And this number could grow fast. With efficiency improvements to the algorithms behind these AI workers, we&#8217;ll be able to run a greater number of copies with the same amount of compute. We might also be able to allocate more compute to running copies, by buying more chips or designing more efficient ones. 
Soon, we could have an AI workforce the size of a significant fraction of the world&#8217;s working-age population.<\/p>\n<p>AI workers could also have other advantages over human workers:<\/p>\n<ul>\n<li>AIs can work much faster than humans, often compressing several days of information processing into minutes.<\/li>\n<li>AIs may be able to coordinate far more efficiently between themselves than humans do \u2014 perhaps at lower costs and <a href=\"https:\/\/singularityhub.com\/2024\/10\/11\/ai-agents-could-collaborate-on-far-grander-scales-than-humans-study-says\/\">greater scales<\/a>.<\/li>\n<li>AIs can become specialised very quickly, with different versions fine-tuned to be exceptionally good at specific tasks. <\/li>\n<\/ul>\n<p>Based on these advantages <em>alone<\/em>, we could soon be seeing unprecedented levels of innovation and economic production as more work is performed by AIs. This could transform society \u2014 for the same reasons automating physical labour did during the Industrial Revolution.<\/p>\n<p>And we think things could be <em>even faster and more dramatic<\/em> than you might expect based on the above.<\/p>\n<p>That&#8217;s because, at some point in this story, we expect AIs will be used to <strong>automate AI research and development itself<\/strong>. And this might trigger an <a href=\"https:\/\/web.archive.org\/web\/20260218092313\/https:\/\/www.forethought.org\/research\/three-types-of-intelligence-explosion\">&#8220;intelligence explosion&#8221;<\/a>: a period of rapid technological progress driven by AI systems that can create better AI systems.<\/p>\n<p>Here&#8217;s how it could unfold:<\/p>\n<ul>\n<li>AI systems become good enough to automate all or most work in AI R&amp;D.<\/li>\n<li>These AI workers help us build better AI systems much faster. 
<\/li>\n<li>These better systems are then <em>even more useful<\/em> for automating AI R&amp;D, which lets us build <em>even better<\/em> systems, and so on.<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/AI-cycle-diagram-FINAL-1024x932.png\" alt=\"A cycle with 4 stages, titled &quot;AI-driven intelligence explosion&quot;; The stages are: faster progress in building better AI; more capable AI systems; better automation of AI R&amp;D; AI automates R&amp;D.\" width=\"500\" class=\"aligncenter size-large wp-image-94600\" srcset=\"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/AI-cycle-diagram-FINAL-1024x932.png 1024w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/AI-cycle-diagram-FINAL-300x273.png 300w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/AI-cycle-diagram-FINAL-768x699.png 768w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/AI-cycle-diagram-FINAL-1536x1399.png 1536w, https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/AI-cycle-diagram-FINAL-2048x1865.png 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>If this happens, it could create a positive feedback loop in which AI systems get better and better \u2014 possibly over a very short period of time.<\/p>\n<p>And this wouldn&#8217;t just mean building AI systems that are better and better <em>at AI R&amp;D<\/em>. It would mean speeding up improvements to AI capabilities more broadly, giving us increasingly capable and general AI workers to deploy <em>across the wider economy<\/em> \u2014 which, in turn, could accelerate progress in most areas of society.<\/p>\n<h4>What would &#8220;accelerated progress&#8221; look like?<\/h4>\n<p>As we&#8217;ve said, previous periods of transformation in history were ultimately limited by the pace at which humans could produce and implement new ideas (that is, new theories and inventions and ways of working). 
But now imagine having a vast workforce of AIs that can produce <em>far more<\/em> brilliant ideas than us, <em>much faster<\/em> than we could before \u2014 and act on them more efficiently.<\/p>\n<p>We think this could transform society over a shorter timeframe than we&#8217;ve ever seen.<\/p>\n<p>What would this even look like? For one thing, scientific discoveries could be made at unprecedented speed. The market could suddenly be flooded with new technologies that would otherwise have taken decades to develop. Infrastructure and manufacturing could expand to scales we can barely imagine today. More speculatively, if AI workers are deployed more widely, we could see a surge of fresh ideas \u2014 not just in science and technology, but in art, politics, philosophy, entertainment \u2014 that fundamentally change how we even think about the world.<\/p>\n<p>The world could get much richer, too, since many innovations can increase economic production. In fact, some researchers think an influx of new ideas from AI workers would lead to <a href=\"\/podcast\/episodes\/tom-davidson-how-quickly-ai-could-transform-the-world\/\">&#8220;explosive&#8221; economic growth<\/a> \u2014 and in turn, some of this new wealth could be used to accelerate idea production even <em>further<\/em>.<\/p>\n<p>How quickly could society be transformed, exactly? There are some constraints on how &#8220;explosive&#8221; the trajectory of human progress can get, even with these feedback loops in play. For example:<\/p>\n<ul>\n<li>At some point, we&#8217;ll <a href=\"https:\/\/epochai.substack.com\/i\/162161015\/a-software-singularity-is-unlikely\">hit bottlenecks on AI development<\/a> \u2014 e.g. 
the availability of compute, energy, or high-quality data \u2014 that limit how much better AI workers can get over a short period of time.<\/li>\n<li>In every field, making progress could get increasingly difficult as AI workers quickly exhaust the low-hanging fruit of discoveries and new ideas.<\/li>\n<\/ul>\n<p>But even after accounting for these effects, some researchers still argue that the effects of AI automation could <a href=\"https:\/\/web.archive.org\/web\/20260219143924\/https:\/\/www.forethought.org\/research\/preparing-for-the-intelligence-explosion#3-the-intelligence-explosion-and-beyond\">compress a century&#8217;s worth of progress into a decade<\/a>.<\/p>\n<p>This level of progress couldn&#8217;t be sustained forever. But the world could <em>already<\/em> have been radically reshaped by the time things slow down \u2014 like how the Industrial Revolution eventually came to an end, but left behind a world that was totally unrecognisable.<\/p>\n<h3><span id=\"4-a-rapid-ai-driven-transformation-would-raise-a-range-of-major-challenges-including-existential-risks\" class=\"toc-anchor\"><\/span>4. A rapid, AI-driven transformation would raise a range of major challenges, including existential risks<\/h3>\n<p>The idea that AGI could supercharge innovation and economic output could be worth celebrating. The world could become extraordinarily rich, and we could rapidly develop new technologies that help us tackle the climate crisis or eradicate diseases.<\/p>\n<p>Indeed, the promise of the technology is one reason why we expect some people to be excited about developing advanced AI systems. As Dario Amodei (CEO of Anthropic) puts it: a big motivator of AGI development is <a href=\"https:\/\/www.darioamodei.com\/essay\/machines-of-loving-grace\">&#8220;a genuinely inspiring vision of the future&#8221;<\/a>.<\/p>\n<p>Generally speaking, fears of emerging technology are often unjustified. 
Many innovations that have been viewed with suspicion, like vaccines and railways, have ended up being hugely beneficial for humanity.<\/p>\n<p>But in this case, things seem different. For the first time, we&#8217;re designing a whole new population of highly intelligent beings \u2014 agents that can do the most economically valuable things human minds can do, and might not rely on humans to do them.<\/p>\n<p>This introduces complex dynamics we don&#8217;t seem prepared to deal with and don&#8217;t even fully understand. Humans navigating advanced AI could be like toddlers trying to navigate a world of adults, with changes to everything we know \u2014 in science, the economy, geopolitics, and even our ways of life \u2014 happening faster than we can get to grips with them.<\/p>\n<p>Given the uncertainty around how AI development will unfold, it&#8217;s hard to predict exactly <em>what challenges<\/em> we&#8217;ll face. But the ones that seem most worrying to us are:<\/p>\n<ul>\n<li><strong>We&#8217;ll encounter agents that could be much smarter than humans, and might have goals of their own.<\/strong> Those goals might lead them to undermine human interests or even <a href=\"\/problem-profiles\/risks-from-power-seeking-ai\/\">disempower humanity<\/a> if we can&#8217;t control them.<\/li>\n<li><strong>Small groups could gain unprecedented power.<\/strong> If elite groups can control powerful AI, they&#8217;ll be far less reliant on humans to get things done. With a vast AI workforce, they could <a href=\"\/problem-profiles\/extreme-power-concentration\/\">amass previously unseen levels of economic and political influence, or even seize power<\/a> \u2014 and probably wouldn&#8217;t have strong incentives to represent the interests of the broader population. 
<\/li>\n<li><strong>Dangerous technologies, like <a href=\"\/problem-profiles\/preventing-catastrophic-pandemics\/\">bioweapons<\/a>, could become much more accessible.<\/strong> Access to highly capable AIs could make it much easier to design or get hold of dangerous weapons, significantly lowering the bar for people to cause <a href=\"\/problem-profiles\/catastrophic-ai-misuse\/\">devastating harm to humanity<\/a>. <\/li>\n<li><strong>We may create a large new population of beings whose welfare and interests matter<\/strong>, raising <a href=\"\/problem-profiles\/moral-status-digital-minds\/\">complicated questions<\/a> about how to coexist with them. <\/li>\n<li><strong>These factors may drive conflict and unrest<\/strong>, possibly culminating in a <a href=\"\/problem-profiles\/great-power-conflict\/\">great power war<\/a> or creating other unforeseen challenges.<\/li>\n<\/ul>\n<p>How we navigate these dynamics could determine whether the future goes well or badly.<\/p>\n<p>If we handle things wisely, we could create a flourishing future with unprecedented prosperity for all sentient beings, and could even spread to the stars. But if we lose control of advanced AI, or if bad actors use it to undermine the rest of the world&#8217;s interests, we could face a catastrophe \u2014 like humans permanently losing our ability to shape the future, or going extinct.<\/p>\n<p>In other words, we think these issues have <em><a href=\"\/articles\/existential-risks\/\">existential<\/a> stakes<\/em>, making them among the most pressing problems in the world.<\/p>\n<p>And although we&#8217;re hopeful that <a href=\"#5-work-on-these-problems-is-tractable-but-neglected\">these issues are tractable<\/a>, we can&#8217;t just assume our institutions will navigate them well <em>by default<\/em>. After all, this is confusing, unprecedented territory. 
And we&#8217;ve seen society stumble into disaster when facing new challenges we haven&#8217;t sufficiently planned for \u2014 just think about the slow institutional responses to early COVID-19 warnings, or the numerous <a href=\"https:\/\/en.wikipedia.org\/wiki\/Nuclear_close_calls\">close calls<\/a> we&#8217;ve seen with <a href=\"\/problem-profiles\/nuclear-security\/\">nuclear weapons<\/a>.<\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h4>Read more about specific AGI risks<\/h4>\n<p>We&#8217;ve written a series of articles explaining the AI-related issues we think pose the greatest chance of existential catastrophe, why we need people working on them, and what you can do to help.<\/p>\n<ul class=\"list-cards list-no-bullet row display-flex !tw--mb-0 margin-top\">\n<li class=\"col-sm-4 padding-bottom-small\">\n<div class=\"card card--vertical \">\n<div class=\"card__image\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/risks-from-power-seeking-ai\/\" class=\"card__anchor no-visited-styling tw--break-words\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/08\/panels-photo-720x448.jpg\" alt=\"Decorative post preview\"   class=\"tw--w-full \" style=\"\"  width=\"720\" height=\"448\">  <\/a>  <\/div>\n<div class=\"card__title\">\n<h4><a href=\"https:\/\/80000hours.org\/problem-profiles\/risks-from-power-seeking-ai\/\" class=\"card__anchor no-visited-styling tw--break-words\">Risks from power-seeking AI systems<\/a><\/h4>\n<\/p>\n<\/div>\n<div class=\"card__actions\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/risks-from-power-seeking-ai\/\" class=\"card__action no-visited-styling\">Read more<\/a><\/div>\n<\/div>\n<\/li>\n<li class=\"col-sm-4 padding-bottom-small\">\n<div class=\"card card--vertical \">\n<div class=\"card__image\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/extreme-power-concentration\/\" 
class=\"card__anchor no-visited-styling tw--break-words\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/leviathan_thomas_hobbes_cover-720x448.jpg\" alt=\"Decorative post preview\"   class=\"tw--w-full \" style=\"\"  width=\"720\" height=\"448\">  <\/a>  <\/div>\n<div class=\"card__title\">\n<h4><a href=\"https:\/\/80000hours.org\/problem-profiles\/extreme-power-concentration\/\" class=\"card__anchor no-visited-styling tw--break-words\">Extreme power concentration<\/a><\/h4>\n<\/p>\n<\/div>\n<div class=\"card__actions\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/extreme-power-concentration\/\" class=\"card__action no-visited-styling\">Read more<\/a><\/div>\n<\/div>\n<\/li>\n<li class=\"col-sm-4 padding-bottom-small\">\n<div class=\"card card--vertical \">\n<div class=\"card__image\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/moral-status-digital-minds\/\" class=\"card__anchor no-visited-styling tw--break-words\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2024\/09\/image9-720x448.jpg\" alt=\"Decorative post preview\"   class=\"tw--w-full \" style=\"\"  width=\"720\" height=\"448\">  <\/a>  <\/div>\n<div class=\"card__title\">\n<h4><a href=\"https:\/\/80000hours.org\/problem-profiles\/moral-status-digital-minds\/\" class=\"card__anchor no-visited-styling tw--break-words\">Moral status of digital minds<\/a><\/h4>\n<\/p>\n<\/div>\n<div class=\"card__actions\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/moral-status-digital-minds\/\" class=\"card__action no-visited-styling\">Read more<\/a><\/div>\n<\/div>\n<\/li>\n<li class=\"col-sm-4 padding-bottom-small\">\n<div class=\"card card--vertical \">\n<div class=\"card__image\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/gradual-disempowerment\/\" class=\"card__anchor no-visited-styling tw--break-words\"><img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/03\/\u05e2\u05e5_\u05e2\u05dc_\u05d0\u05d9_\u05de\u05dc\u05d7_\u05d1\u05d0\u05de\u05e6\u05e2_\u05d9\u05dd_\u05d4\u05de\u05dc\u05d7-720x448.jpg\" alt=\"Decorative post preview\"   class=\"tw--w-full \" style=\"\"  width=\"720\" height=\"448\">  <\/a>  <\/div>\n<div class=\"card__title\">\n<h4><a href=\"https:\/\/80000hours.org\/problem-profiles\/gradual-disempowerment\/\" class=\"card__anchor no-visited-styling tw--break-words\">Gradual disempowerment<\/a><\/h4>\n<\/p>\n<\/div>\n<div class=\"card__actions\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/gradual-disempowerment\/\" class=\"card__action no-visited-styling\">Read more<\/a><\/div>\n<\/div>\n<\/li>\n<li class=\"col-sm-4 padding-bottom-small\">\n<div class=\"card card--vertical \">\n<div class=\"card__image\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/catastrophic-ai-misuse\/\" class=\"card__anchor no-visited-styling tw--break-words\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/06\/1203081_07-720x448.jpg\" alt=\"Decorative post preview\"   class=\"tw--w-full \" style=\"\"  width=\"720\" height=\"448\">  <\/a>  <\/div>\n<div class=\"card__title\">\n<h4><a href=\"https:\/\/80000hours.org\/problem-profiles\/catastrophic-ai-misuse\/\" class=\"card__anchor no-visited-styling tw--break-words\">Catastrophic AI misuse<\/a><\/h4>\n<\/p>\n<\/div>\n<div class=\"card__actions\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/catastrophic-ai-misuse\/\" class=\"card__action no-visited-styling\">Read more<\/a><\/div>\n<\/div>\n<\/li>\n<li class=\"col-sm-4 padding-bottom-small\">\n<div class=\"card card--vertical \">\n<div class=\"card__image\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\" class=\"card__anchor no-visited-styling tw--break-words\"><img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2020\/04\/cdc-hGCng7YZLLs-unsplash-1-720x448.jpg\" alt=\"Decorative post preview\"   class=\"tw--w-full \" style=\"\"  width=\"720\" height=\"448\">  <\/a>  <\/div>\n<div class=\"card__title\">\n<h4><a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\" class=\"card__anchor no-visited-styling tw--break-words\">Preventing catastrophic pandemics<\/a><\/h4>\n<\/p>\n<\/div>\n<div class=\"card__actions\">  <a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\" class=\"card__action no-visited-styling\">Read more<\/a><\/div>\n<\/div>\n<\/li>\n<\/ul>\n<\/div>\n<h4>The speed of this transition could matter a lot<\/h4>\n<p>There are two ways speed can matter critically to this transition:<\/p>\n<ul>\n<li>It matters how much time we have from now until we have extremely capable and general AI systems<\/li>\n<li>It matters how quickly the world is radically transformed by these systems once they arrive<\/li>\n<\/ul>\n<p>If there&#8217;s only a few years until we get AGI or something similar, then we have limited time to avert the risks.<\/p>\n<p>And if advanced AI changes the world very quickly, we might not have time to adapt to the changing circumstances and make wise decisions.<\/p>\n<p>Even now, our institutions sometimes act too slowly \u2014 for example, it took <a href=\"https:\/\/www.theguardian.com\/environment\/climate-consensus-97-per-cent\/2015\/nov\/05\/scientists-warned-the-president-about-global-warming-50-years-ago-today\">around 50 years from the initial scientific warnings about global warming<\/a> for the milestone Paris Climate Agreement to be signed. 
If AI rapidly becomes more capable and more productive, it seems it will be extremely difficult for society to keep up unless we make big changes to how our institutions work.<\/p>\n<p>There is lively debate over how soon advanced AI systems might arrive and how quickly they might change the world. But there&#8217;s at least a decent chance that they will be here within the next decade and things will change very fast \u2014 indeed, the level of expert concern suggests we need to take this possibility seriously (<a href=\"https:\/\/aiimpacts.org\/wp-content\/uploads\/2023\/04\/Thousands_of_AI_authors_on_the_future_of_AI.pdf\">1<\/a>, <a href=\"https:\/\/ai-2027.com\/\">2<\/a>, <a href=\"https:\/\/situational-awareness.ai\/\">3<\/a>). And given the stakes here, we think it&#8217;s important to prepare for this possibility even if there&#8217;s only a small likelihood \u2014 like a 10% chance \u2014 of it coming true.<\/p>\n<p>This means we can&#8217;t just ignore the risks or delay acting on them. We need to find robust solutions before it&#8217;s too late.<\/p>\n<h3><span id=\"5-work-on-these-problems-is-tractable-but-neglected\" class=\"toc-anchor\"><\/span>5. Work on these problems is tractable but neglected<\/h3>\n<p>We&#8217;ve been helping people who want to work on this problem for over a decade. In this time, the field has grown substantially.<\/p>\n<p>A <a href=\"https:\/\/forum.effectivealtruism.org\/posts\/7YDyziQxkWxbGmF3u\/ai-safety-field-growth-analysis-2025\">2025 analysis<\/a> put the total number of people working on existential risks from AI at 1,100 \u2014 and we think even this might be an undercount, since it only includes organisations that explicitly brand themselves as working on &#8216;AI safety&#8217;.<\/p>\n<p>We&#8217;d estimate that there are actually a few thousand people focusing their work on the most important risks raised by AGI. 
But to put that into perspective, the <a href=\"https:\/\/www.nature.org\/en-us\/\">Nature Conservancy<\/a> alone has 3,000\u20134,000 employees, and it&#8217;s just one of many organisations working on environmental protection and climate change. Other global issues like public health also receive a lot of attention \u2014 for example, the World Health Organisation <a href=\"https:\/\/www.who.int\/about\/who-we-are\">employs over 8,000 people<\/a>.<\/p>\n<p>This means AI risks are severely neglected in comparison to many other world problems \u2014 so each additional person working to address them can make a bigger difference.<\/p>\n<p>We&#8217;re also optimistic that we can make progress on these problems. After all, humans are <em>choosing<\/em> to design and deploy these technologies, which means we have some influence over how things go.<\/p>\n<p>Part of the challenge here is that the people who <em>currently<\/em> have the most influence over AI development aren&#8217;t necessarily incentivised to prioritise safety. AI companies want to make money and face pressures to develop technologies quickly, without fully accounting for the risks they impose on society. Political leaders care about public opinion and election cycles, which gives them less time and motivation to focus on serving broader or longer-term interests. So we need people who want to use their careers to help others by working on the major challenges that might otherwise be ignored.<\/p>\n<p>There are lots of ways you can help tackle these challenges. 
Check out our <a href=\"\/ai\/\">hub of AI career resources<\/a> for more.<\/p>\n<h2><span id=\"objections-and-replies\" class=\"toc-anchor\"><\/span>Objections and replies<\/h2>\n<div class=\"panel-group\" id=\"custom-collapse-0\">\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-0\">You're overestimating how fast and how dramatically AI would transform the world.<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-0\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"You're overestimating how fast and how dramatically AI would transform the world.\">\n<div class=\"panel-body\">\n<p>We&#8217;ve argued that automating human labour could transform the world at an unprecedented pace, but there are several ways our argument could be wrong.<\/p>\n<ul>\n<li><strong>An intelligence explosion might not happen.<\/strong> We might deploy a generation of AI workers to automate some fields, but fail to get them to create <em>even better<\/em> or <em>more general<\/em> AI workers \u2014 perhaps because we hit the ceiling of what current AI approaches can achieve, or we fall into another <a href=\"https:\/\/en.wikipedia.org\/wiki\/AI_winter\">&#8220;AI winter&#8221;<\/a>. We&#8217;d still get a one-time increase in the size and efficiency of our workforce, making society much more productive. But we probably wouldn&#8217;t see the dramatic, compounding improvements we described earlier.<\/li>\n<li><strong>The constraints to progress could be stronger than we&#8217;re expecting.<\/strong> Even if AIs <em>do<\/em> help us build increasingly capable AI workers, the feedback loop this creates might not be quite as &#8216;explosive&#8217; as we&#8217;ve described. 
For example, bottlenecks in AI R&#038;D \u2014 like the availability of compute, energy, and high-quality data \u2014 could mean developing the next generation of AI workers is just a slow process. And in every field we try to automate, the returns to effort could sharply diminish as AI workers quickly exhaust the low-hanging fruit, causing the effects of an intelligence explosion to fizzle out. <\/li>\n<li><strong>Human-dependent tasks could turn out to be critical bottlenecks.<\/strong> Some economically valuable tasks \u2014 for example, those requiring complex interaction with the physical world, or managing projects over weeks or months \u2014 could just take an especially long time to automate. At least in the early stages of automation, the speed of AI-driven progress could be seriously constrained by the pace at which humans can do those remaining tasks.<\/li>\n<li><strong>Our model of human progress could be missing key components.<\/strong> We&#8217;ve argued that increased labour and new ideas can drive rapid progress, pointing to historical examples like the Industrial Revolution. But other drivers we haven&#8217;t explicitly considered here, like institutional or cultural changes, could be crucial \u2014 and at the time we get AIs capable of replacing human workers, these drivers could just be weaker than would be necessary to support something like &#8220;a century of progress in a decade.&#8221; <\/li>\n<\/ul>\n<p>In any of these scenarios, we still think AI could transform the world (and pose serious risks). But this transformation probably wouldn&#8217;t happen <em>as rapidly<\/em> as we&#8217;ve imagined. And as we argued, <a href=\"#the-speed-of-this-transition-could-matter-a-lot\">speed does matter<\/a>: it affects how much time we have to adapt to the changing circumstances and make wise decisions.<\/p>\n<p>If the above objections are correct, it might also be really hard to sustain a period of supercharged innovation and economic production. 
In that case, progress could fizzle out quickly \u2014 perhaps even <em>before<\/em> we see changes as dramatic as we did during the Industrial Revolution.<\/p>\n<p>But given the scale of the risks here, we think it&#8217;s important to be prepared for a scenario where AI <em>does<\/em> transform the world rapidly and dramatically, even if there&#8217;s a relatively small chance (say, 10%) of this happening.<\/p>\n<p>Still, the uncertainty here does make it harder to weigh up working on AI risks against other pressing problems, like factory farming.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-1\">It's hard to believe AI could really pose existential risks.<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-1\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"It's hard to believe AI could really pose existential risks.\">\n<div class=\"panel-body\">\n<p>This all sounds pretty wild. Could AI really cause outcomes as bad as human extinction?<\/p>\n<p>The argument we made earlier \u2014 that the transformative effects of AI could create unprecedented challenges that threaten humanity&#8217;s survival \u2014 feels convincing to us. But it&#8217;s always worth doing a sanity check on bold and provocative arguments. One way to do that is to look at what people in the field and other leaders say about a topic. 
So: what do they say?<\/p>\n<p><strong>Several leading figures and institutions are already treating frontier AI as posing catastrophic risks:<\/strong><\/p>\n<ul>\n<li><strong>Researchers and CEOs.<\/strong> More than 1,000 AI scientists and industry leaders \u2014 including Geoffrey Hinton, Yoshua Bengio, Sam Altman, and Demis Hassabis \u2014 signed the Center for AI Safety&#8217;s <a href=\"https:\/\/safe.ai\/work\/statement-on-ai-risk\">one-sentence warning<\/a> that &#8220;mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.&#8221;<\/li>\n<li><strong>National governments.<\/strong> At the UK government&#8217;s <a href=\"https:\/\/www.gov.uk\/government\/publications\/ai-safety-summit-2023-the-bletchley-declaration\/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023\">AI Safety Summit<\/a>, 28 countries (U.S. and China included) issued the Bletchley Declaration, acknowledging &#8220;potential for serious, even catastrophic harm&#8221; from frontier models and pledging joint risk-mitigation work. <\/li>\n<li><strong>US executive action.<\/strong> President Biden&#8217;s <a href=\"https:\/\/bidenwhitehouse.archives.gov\/briefing-room\/statements-releases\/2023\/10\/30\/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence\/\">30 Oct 2023 executive order<\/a> compelled US AI companies to share safety-test results with the government before releasing powerful systems \u2014 a measure unprecedented outside biosecurity or nuclear security. 
President Donald Trump&#8217;s administration has also decided to treat AI as a <a href=\"https:\/\/www.commerce.gov\/news\/press-releases\/2025\/06\/statement-us-secretary-commerce-howard-lutnick-transforming-us-ai\">potential national security threat<\/a>, despite appearing to be <a href=\"https:\/\/www.transformernews.ai\/p\/exclusive-heres-the-draft-trump-executive\">sceptical of the idea that AI could pose catastrophic risks<\/a>.<\/li>\n<\/ul>\n<p><strong>Some leaders disagree:<\/strong><\/p>\n<p>Meta&#8217;s chief scientist <a href=\"https:\/\/fortune.com\/2023\/06\/14\/metas-chief-a-i-scientist-calls-a-i-doomers-preposterous-and-predicts-llms-are-just-a-passing-fad\/\">Yann LeCun<\/a>, for example, has called extinction worries &#8220;preposterous,&#8221; arguing that AI can be engineered to be safe.<\/p>\n<p>Other influential scientists, such as <a href=\"https:\/\/www.france24.com\/en\/live-news\/20230604-human-extinction-threat-overblown-says-ai-sage-marcus\">Gary Marcus<\/a>, <a href=\"https:\/\/x.com\/AndrewYNg\/status\/1665759430552567810\">Andrew Ng<\/a>, and <a href=\"https:\/\/medium.com\/data-science\/existential-risk-from-ai-a-skeptical-perspective-35f0cd7c9fa4\">Melanie Mitchell<\/a> have shared scepticism about the potentially existential risks and transformative effects of AI.<\/p>\n<p><strong>Surveys of AI researchers point to non-trivial extinction odds:<\/strong><\/p>\n<p>Katja Grace of AI Impacts <a href=\"https:\/\/blog.aiimpacts.org\/p\/2023-ai-survey-of-2778-six-things\">surveyed<\/a> 2,778 AI researchers on a range of key questions in the field. The median survey respondent assigned at least a 5% probability that advanced AI could result in human extinction (or a comparable disaster), and roughly one-third to one-half of participants put the risk at 10% or higher.<\/p>\n<p>It&#8217;s possible researchers in their own field are exaggerating the danger \u2014 or underestimating it. 
Still, this level of concern should prompt us to take the risk very seriously.<\/p>\n<p><strong>Forecasters take note (but are more sceptical):<\/strong><\/p>\n<p>The Forecasting Research Institute conducted the Existential Risk Persuasion Tournament in 2022 to investigate <a href=\"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/\">disagreements on this topic<\/a>.<\/p>\n<p>Overall, they found that, of all the topics covered, AI raised the biggest existential risk concerns among participants. But there was a big split in opinion on the risks between domain experts in AI and people with a strong track record in superforecasting:<\/p>\n<ul>\n<li>Domain experts in AI estimated a 3% chance of AI-caused human extinction by 2100 on average, while superforecasters put it at just 0.38%.<\/li>\n<li>Both groups agreed on a high likelihood of &#8220;powerful AI&#8221; being developed by 2100 (around 90%).<\/li>\n<li>Even AI risk sceptics saw a 30% chance of catastrophic AI outcomes over a 1,000-year timeframe.<\/li>\n<\/ul>\n<p>Note, though, that given the developments in AI since 2022, we&#8217;d expect both groups would now <a href=\"\/2025\/03\/when-do-experts-expect-agi-to-arrive\/\">predict significantly shorter timelines to powerful AI<\/a>. We think this would likely raise their estimates of the risks.<\/p>\n<p><strong>Overall view:<\/strong><\/p>\n<p>Many leaders and experts recognise the potential of AI to pose major risks, including at the level of human extinction. But unlike other problems that humanity faces, such as climate change, this isn&#8217;t a matter of scientific consensus \u2014 there&#8217;s ongoing disagreement, and many credible people think the risks are lower than we do.<\/p>\n<p>Still, given the stakes, we think it would be reckless to dismiss the idea that AI could cause outcomes like human extinction. 
Even a relatively small chance of catastrophe is worth taking seriously.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-2\">Isn't all of this talk of AI changing the world just a fad?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-2\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Isn't all of this talk of AI changing the world just a fad?\">\n<div class=\"panel-body\">\n<p>Some people think arguments like those in this article are just a response to the current wave of AI hype and won&#8217;t stand the test of time.<\/p>\n<p>It&#8217;s possible we&#8217;ve updated our beliefs too strongly on the basis of the latest AI developments, and our predictions could turn out to be wrong. But it&#8217;s worth noting that the basic ideas of this article are not especially novel or unique to our particular time period. Prominent thinkers have been warning us about the dangers and transformative potential of AI since the 1800s:<\/p>\n<ul>\n<li>1863: English novelist Samuel Butler speculated in a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Darwin_among_the_Machines\">letter<\/a> that machines would eventually surpass humanity, with humans becoming the &#8220;inferior species.&#8221;<\/li>\n<li>1920: Playwright Karel \u010capek, who coined the word &#8220;robot,&#8221; wrote a <a href=\"https:\/\/en.wikipedia.org\/wiki\/R.U.R.\">play<\/a> in which artificial workers rebel and eventually cause human extinction.<\/li>\n<li>1940\u20131950: Isaac Asimov wrote a <a href=\"https:\/\/en.wikipedia.org\/wiki\/I,_Robot\">series of stories<\/a> about AI and robots, which highlighted the need to ensure their safety for humanity and suggested they&#8217;d develop the ability to steer humanity&#8217;s future.<\/li>\n<li>1950: John von Neumann, a prolific and highly influential physicist and mathematician, <a 
href=\"https:\/\/lab.cccb.org\/en\/the-singularity\/\">reportedly said<\/a>: &#8220;The ever-accelerating progress of technology and changes in the mode of human life [\u2026] gives the appearance of approaching some essential singularity in the history of the race.&#8221;<\/li>\n<li>1951: Alan Turing, considered the father of theoretical computer science, <a href=\"https:\/\/uberty.org\/wp-content\/uploads\/2015\/02\/intelligent-machinery-a-heretical-theory.pdf\">wrote<\/a>: &#8220;it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers&#8230; At some stage therefore we should have to expect the machines to take control\u2026&#8221;<\/li>\n<li>1965: Mathematician I.J. Good <a href=\"https:\/\/www.historyofinformation.com\/detail.php?id=2142\">said<\/a>: &#8220;Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an &#8216;intelligence explosion,&#8217; and the intelligence of man would be left far behind.&#8221;<\/li>\n<\/ul>\n<p>We don&#8217;t take any of these claims as strong <em>evidence<\/em> for the case that AI poses existential risks. 
After all, many historical figures \u2014 even extremely smart and influential scientists \u2014 have had erroneous beliefs about the future.<\/p>\n<p>But they do show us that the argument that this is all &#8220;just a fad&#8221; doesn&#8217;t hold up to scrutiny.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-3\">Isn't AI going to be just like every other technology?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-3\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Isn't AI going to be just like every other technology?\">\n<div class=\"panel-body\">\n<p>In some senses, yes \u2014 but this doesn&#8217;t mean we shouldn&#8217;t be worried about it.<\/p>\n<p>Like many other technologies, AI has had its own cycles of hype and bust. Some people think the current trajectory of high investment and fast progress could fizzle out, and even be followed by another &#8220;<a href=\"https:\/\/en.wikipedia.org\/wiki\/AI_winter\">AI winter<\/a>.&#8221;<\/p>\n<p>But that&#8217;s not a good reason to ignore the risks. Although it&#8217;s likely some of the drivers of AI progress will eventually slow down, we think there&#8217;s <a href=\"\/agi\/guide\/when-will-agi-arrive\/\">a good chance we&#8217;ll already have AGI (or something similar) by the time this happens<\/a>. And at that point, we could already be facing major challenges we&#8217;d wish we were more prepared for.<\/p>\n<p>It&#8217;s also worth noting that AI doesn&#8217;t need to be <em>fundamentally different<\/em> to previous technologies to change the world and pose really serious risks. 
After all, other general-purpose technologies like the steam engine have done this before \u2014 the Industrial Revolution fuelled huge growth, but also precipitated climate change and laid the groundwork for the invention of nuclear weapons.<\/p>\n<p>Even if AI were &#8216;just&#8217; another general-purpose technology, it could be as impactful as this. And that alone would be a big deal.<\/p>\n<p>But we do think there are ways in which AI might be genuinely different from anything humanity has previously seen, making it potentially <em>more<\/em> transformative and <em>more<\/em> risky than previous technologies.<\/p>\n<p>For one thing, we argued in Sections 1, 2, and 3 above that AI systems could effectively <em>take over<\/em> the processes of innovation and economic production \u2014 meaning progress would no longer be reliant on human labour, and could happen much faster than ever before.<\/p>\n<p>And even if you&#8217;re sceptical of that particular story, it still seems hard to deny that <em>something<\/em> unprecedented is happening here. For the first time, we&#8217;re designing a new form of intelligence that will potentially surpass ours. We could encounter a whole new population of highly capable agents with their own interests \u2014 and perhaps even the capacity for welfare and suffering. In some senses, they could be our competitors. 
And the dynamics this introduces could be unlike anything humanity has ever had to navigate before.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"is-it-even-possible\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-4\">Is it even possible to produce artificial general intelligence?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-4\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Is it even possible to produce artificial general intelligence?\">\n<div class=\"panel-body\">\n<p>People have been saying <a href=\"https:\/\/web.archive.org\/web\/20221013015416\/https:\/\/www.openphilanthropy.org\/research\/what-should-we-learn-from-past-ai-forecasts\/\">since the 1950s<\/a> that artificial intelligence smarter than humans is just around the corner.<\/p>\n<p>But it hasn&#8217;t happened yet.<\/p>\n<p>Some have argued that producing <a href=\"https:\/\/web.archive.org\/web\/20221013015335\/https:\/\/www.nature.com\/articles\/s41599-020-0494-4\">artificial general intelligence is fundamentally impossible<\/a>. Others think it&#8217;s possible in theory, but <a href=\"https:\/\/web.archive.org\/web\/20221013015342\/https:\/\/www.forbes.com\/sites\/cognitiveworld\/2020\/06\/04\/is-ai-overhyped\/?sh=443c7b6c63ee\">unlikely to actually happen<\/a>, especially not with <a href=\"https:\/\/web.archive.org\/web\/20221013015350\/https:\/\/www.kdnuggets.com\/2021\/12\/deep-neural-networks-not-toward-agi.html\">current deep learning methods<\/a>.<\/p>\n<p>However, we think there are compelling reasons to believe AGI is achievable:<\/p>\n<ul>\n<li>The existence of human intelligence demonstrates that general intelligence is at least <em>possible in principle<\/em>. 
Human brains are made of ordinary matter following the same physical laws as computers.<\/li>\n<li>While past predictions were overly optimistic about how long it&#8217;d take, they weren&#8217;t necessarily wrong about the fundamental possibility of AGI. The field ran into blockers early on, but researchers found ways around them using creative new methods \u2014 and they now have access to vastly more computational power to run experiments and train new AIs than anyone could have imagined a few decades ago. <\/li>\n<li>In recent years, we&#8217;ve seen progress we don&#8217;t think would have been predicted by those who believed powerful, general AI would never be developed. For example, large language models have demonstrated emergent behaviours that weren&#8217;t explicitly programmed, like <a href=\"https:\/\/arxiv.org\/pdf\/2005.14165\">few-shot learning<\/a>, <a href=\"https:\/\/www.nature.com\/articles\/s41562-023-01659-w\">analogical reasoning<\/a>, and <a href=\"https:\/\/arxiv.org\/abs\/2104.08410\">cross-domain transfer<\/a>. <\/li>\n<li>Though some argue current AI methods will never grasp certain forms of intelligent reasoning, these critiques have often been proved wrong. For example, Yann LeCun <a href=\"https:\/\/x.com\/cammakingminds\/status\/1659516423540965378\">claimed in 2022<\/a> that deep learning-based models like ChatGPT would never be able to tell you what would happen if you placed an object on a table and then pushed that table, because such a basic situation was never described explicitly in text \u2014 but deep learning-based models can now walk you through scenarios like this with ease. 
In the words of AI researcher Leopold Aschenbrenner: <a href=\"https:\/\/situational-awareness.ai\/wp-content\/uploads\/2024\/06\/situationalawareness.pdf#page=17\">&#8220;if there&#8217;s one lesson we&#8217;ve learned from the past decade of AI, it&#8217;s that you should never bet against deep learning.&#8221;<\/a> <\/li>\n<\/ul>\n<p>There&#8217;s real uncertainty here, and the sceptics might be right that there are some things advanced AI systems will just never achieve. But for AI to transform the world, the important question isn&#8217;t whether we&#8217;ll replicate every aspect of human cognition exactly. It&#8217;s whether we can create systems that can:<\/p>\n<ul>\n<li>Match or exceed human performance across the tasks that matter most for scientific research, economic productivity, and other domains where intelligence is most valuable<\/li>\n<li>Perform those tasks faster or more cheaply than human workers can <\/li>\n<li>Work autonomously enough that progress in the fields they automate is no longer bottlenecked on the speed of human labour<\/li>\n<\/ul>\n<p>All three of these things seem quite possible.<\/p>\n<p>And even this much may not be necessary for AI to pose serious \u2014 or even existential-scale \u2014 risks. For example, <a href=\"\/problem-profiles\/catastrophic-ai-misuse\/\">our argument that people could catastrophically misuse AI<\/a> mostly depends on AI systems becoming useful tools for designing weapons. 
An AI that&#8217;s great at assisting humans with biotechnology research could make it far easier for people to develop dangerous pathogens \u2014 regardless of how well it performs at other types of research, or how much human oversight it needs.<\/p>\n<p>So even if you think we&#8217;ll never build AIs that are <em>fully<\/em> general or <em>completely<\/em> autonomous, the risks could still be extremely serious.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-5\">Even if AGI is achievable, what if we're really far away from building it?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-5\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Even if AGI is achievable, what if we're really far away from building it?\">\n<div class=\"panel-body\">\n<p>There&#8217;s lively debate over when we&#8217;ll build &#8216;AGI&#8217; (or other advanced AI systems capable of transforming the world in the ways we&#8217;ve described).<\/p>\n<p>We think there&#8217;s a decent chance this will happen soon \u2014 perhaps <a href=\"\/agi\/guide\/when-will-agi-arrive\/\">within the next decade<\/a>. And <a href=\"\/2025\/03\/when-do-experts-expect-agi-to-arrive\/\">we&#8217;re not alone<\/a>.<\/p>\n<p>But it&#8217;s worth considering other possibilities. For example, researcher Ege Erdil has made <a href=\"https:\/\/epoch.ai\/gradient-updates\/the-case-for-multi-decade-ai-timelines\">an influential argument for AGI being multiple decades away<\/a>. And some people think it&#8217;s even further out than that.<\/p>\n<p>Plus, even people who think there&#8217;s a good chance that AGI (or something like it) will arrive soon tend to <em>also<\/em> think there&#8217;s a good chance that it will take a while. 
Their &#8216;probability distributions&#8217; for when AGI will arrive are usually shaped something like this:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/03\/image22.png\" alt=\"Estimate of AGI development timeline\" \/><\/p>\n<p>Even if AGI is many decades away, we think it will still transform the world when it arrives, and create unprecedented challenges. But on this longer timeframe, work to address these challenges would be less <em>urgent<\/em>, because we&#8217;d have more time to prepare.<\/p>\n<p>Despite this, we still think it makes sense for many people to focus on AI risks <em>now<\/em>. That&#8217;s because:<\/p>\n<ul>\n<li>There&#8217;s huge uncertainty around how long it will take for AGI to be developed. We need to prepare for the chance that it happens very soon, so that we&#8217;re covered in the worst-case scenarios. <\/li>\n<li>Some issues with advanced AI might just take a long time to solve. Deep technical challenges could take many years of research to untangle, and some governance issues might require us to redesign how our institutions work \u2014 which won&#8217;t happen overnight. Putting more work in <em>now<\/em> will give us a better chance of navigating the risks competently when they start to emerge. <\/li>\n<li>Many people who could help a lot in a decade&#8217;s time should start now \u2014 especially if they are early in their career. 
It takes time to build up expertise and career capital, so &#8220;we still have years&#8221; isn&#8217;t a reason not to get started.<\/li>\n<\/ul>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"isnt-the-real-danger-from-actual-modern-ai-not-some-sort-of-futuristic-superintelligence\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-6\">Isn't the real danger from actual current AI \u2014 not some sort of futuristic AGI?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-6\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Isn't the real danger from actual current AI \u2014 not some sort of futuristic AGI?\">\n<div class=\"panel-body\">\n<p>There are definitely dangers from current artificial intelligence.<\/p>\n<p>For example:<\/p>\n<ul>\n<li>AI has frequently been linked to child safety concerns \u2014 with reports of AI chatbots <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/22\/grok-ai-generated-millions-sexualised-images-in-month-research-says\">generating sexualised images of children<\/a>, <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/27\/meta-lawsuit-minors-chatbots\">engaging minors in sexual conversations<\/a>, and in some cases, even <a href=\"https:\/\/www.npr.org\/sections\/shots-health-news\/2025\/09\/19\/nx-s1-5545749\/ai-chatbots-safety-openai-meta-characterai-teens-suicide\">encouraging emotionally dependent teenage users to commit suicide<\/a>.   <\/li>\n<li>Data used to train neural networks often contains hidden biases. This means that AI systems can learn these biases \u2014 and this can lead to <a href=\"https:\/\/ischool.uw.edu\/news\/2024\/11\/ai-tools-show-biases-ranking-job-applicants-names-according-perceived-race-and-gender\">racist and sexist behaviour<\/a>. 
<\/li>\n<li>AI models are <a href=\"https:\/\/www.theguardian.com\/technology\/article\/2024\/aug\/20\/anthropic-ai-lawsuit-author\">trained on copyrighted material without permission or compensation<\/a>, raising serious questions about intellectual property rights and threatening the livelihoods of artists, writers, and creators. <\/li>\n<li>AI tools make it easier to run sophisticated scams at scale \u2014 like <a href=\"https:\/\/www.theguardian.com\/technology\/article\/2024\/may\/17\/uk-engineering-arup-deepfake-scam-hong-kong-ai-video\">deepfake videos impersonating senior employees of companies to authorise fraudulent money transfers<\/a>. <\/li>\n<\/ul>\n<p>These dangers are real and serious \u2014 and lots of people should focus on addressing them. But we still think that the amount of work going towards longer-term AI risks needs to significantly increase.<\/p>\n<p>The good news is that there isn&#8217;t always a big tradeoff between addressing shorter-term or longer-term AI risks. Lots of work that&#8217;s geared towards existential threats from AI systems is <em>also<\/em> relevant to solving problems with existing AI systems. For example, some AI safety research focuses on <a href=\"https:\/\/web.archive.org\/web\/20221013015650\/https:\/\/deepmindsafetyresearch.medium.com\/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84\">ensuring that machine learning models do what we want them to, and will still do this as their size and capabilities increase<\/a>; other research tries to work out <a href=\"\/podcast\/episodes\/chris-olah-interpretability-research\/\">how and why existing models are taking the actions that they are<\/a>. Both of these things would help us prevent future AI systems from taking power \u2014 but they&#8217;d probably also help us prevent <em>current<\/em> AI systems from discriminating against marginalised groups or exploiting vulnerable users.<\/p>\n<p>We also think the current dangers are just the tip of the iceberg. 
As AI systems get more capable, the risks could get increasingly serious. As we&#8217;ve argued, future systems seem like they could pose threats not only to individual humans, but also to the <em>very existence<\/em> of humanity \u2014 say, by enabling a catastrophic pandemic that wipes out much of the population, or helping a small group establish a long-lasting authoritarian regime.<\/p>\n<p>Ultimately, not all work on future risks will translate neatly into progress on today&#8217;s issues. But we have limited time in our careers, and <a href=\"\/articles\/your-choice-of-problem-is-crucial\/\">choosing which problem to focus on<\/a> could greatly increase your impact. And it seems important for many people (though not all!) to focus on addressing the worst-case possibilities. <a href=\"\/problem-profiles\/#inappropriate-to-rank\">Read more on why we think it&#8217;s appropriate to prioritise between issues<\/a>.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-7\">Technological progress is a good thing for humanity.<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-7\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Technological progress is a good thing for humanity.\">\n<div class=\"panel-body\">\n<p>Technological optimists point out that past technologies have generally made life better, not worse. Why should AI be different?<\/p>\n<p>While technology has indeed brought many benefits, it has also created new risks and challenges. Developing nuclear weapons gave us both nuclear power and the threat of nuclear war. Advanced biomedical science has cured many diseases, but it also raises the risk of bioweapons and disastrous leaks of dangerous pathogens. 
Industrial factory farming has made for cheaper meat, but it is also a moral catastrophe for the animals themselves and has many negative side effects for humans.<\/p>\n<p>We agree that technology has <em>usually<\/em> benefited humanity overall, but the question is whether it will <em>in this case<\/em>.<\/p>\n<p>There are enough precedents of dangerous technological developments to warrant caution, and there are specific reasons for concern in this case, as we&#8217;ve discussed above. And given the potential scale and speed of AI development, the margin for error may be smaller than with previous technologies.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-8\">This all just sounds too sci-fi.<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-8\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"This all just sounds too sci-fi.\">\n<div class=\"panel-body\">\n<p>That something sounds like science fiction isn&#8217;t a reason in itself to dismiss it outright. Lots of things first mentioned in sci-fi went on to actually happen (this <a href=\"https:\/\/web.archive.org\/web\/20221013020159\/http:\/\/www.technovelgy.com\/ct\/ctnlistalpha.asp\">list of inventions in science fiction<\/a> contains plenty of examples).<\/p>\n<p>There are even a few such cases involving technologies that pose real existential threats today:<\/p>\n<ul>\n<li>In his 1914 novel <em>The World Set Free<\/em>, H. G. Wells predicted atomic energy fuelling powerful explosives \u2014 20 years before we realised there could in theory be nuclear fission chain reactions, and 30 years before nuclear weapons were actually produced. 
In the 1920s and 1930s, Nobel Prize-winning physicists <a href=\"https:\/\/www.youtube.com\/watch?v=HD3k1hgbUXQ\">Millikan, Rutherford, and Einstein all predicted that we would never be able to use nuclear power<\/a>. Nuclear weapons were literal science fiction before they were reality. <\/li>\n<li>In the 1964 film <em>Dr. Strangelove<\/em>, the USSR builds a doomsday machine that would automatically trigger an extinction-level nuclear event in response to a nuclear strike, but keeps it secret. Dr. Strangelove points out that keeping it secret undermines its deterrent effect. But we now know that in the 1980s the USSR built an <a href=\"https:\/\/en.wikipedia.org\/wiki\/Dead_Hand\">extremely similar system<\/a>&#8230; and kept it secret.<\/li>\n<\/ul>\n<p>When you hear something that sounds like science fiction, it&#8217;s reasonable to want to investigate it thoroughly before acting on it. But having investigated it, if the arguments are solid, then simply sounding like science fiction is not a reason to dismiss them.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><span id=\"may-or-may-not-ever-happen\" class=\"toc-anchor\"><\/span><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-9\">Can it make sense to dedicate my career to solving an issue based on a speculative story about something that may or may not ever happen?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-9\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"Can it make sense to dedicate my career to solving an issue based on a speculative story about something that may or may not ever happen?\">\n<div class=\"panel-body\">\n<p>We never know for sure what&#8217;s going to happen in the future. 
So if we&#8217;re trying to have a positive impact on the world, we unfortunately always have to deal with at least some degree of uncertainty.<\/p>\n<p>We also think there&#8217;s an important distinction between <em>guaranteeing that you&#8217;ve achieved some amount of good<\/em> and <em>doing the very best you can<\/em>. To achieve the former, you can&#8217;t take any risks at all \u2014 and that could mean missing out on the <a href=\"\/articles\/be-more-ambitious\/\">best opportunities to do good<\/a>.<\/p>\n<p>When you&#8217;re dealing with uncertainty, it makes sense to think roughly about the <a href=\"\/articles\/expected-value\/\">expected value<\/a> of your actions: the sum of all the good and bad potential consequences of your actions, weighted by their probability. Expected value isn&#8217;t the <em>only<\/em> framework to use \u2014 we also think it&#8217;s important to temper your estimates of expected value using common sense and other heuristics \u2014 but it&#8217;s a really useful indicator of how important a certain course of action is.<\/p>\n<p>Given that the stakes are so high and the probabilities of the risks from AI aren&#8217;t that low, the expected value of helping with this problem is high.<\/p>\n<p>We&#8217;re sympathetic to the concern that if you work on AI, you might end up achieving very little when you might have done a tremendous amount of good working on something that&#8217;s more certain. 
But we think the world will be better off if some of us work on solving these problems, so that together we have the best chance of navigating the transition to a world with advanced AI rather than risking an existential crisis.<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-10\">OK, AI might pose existential risks. But isn't issue X an even bigger problem?<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-10\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"OK, AI might pose existential risks. But isn't issue X an even bigger problem?\">\n<div class=\"panel-body\">\n<p>You might think it doesn&#8217;t make sense to focus on the risks from future AI systems when the world faces so many other challenges.<\/p>\n<p>For example, you might want to do whatever you can to prevent the most death and suffering that&#8217;s happening <em>now<\/em>. This would probably lead you to prioritise addressing <a href=\"\/problem-profiles\/factory-farming\/\">factory farming<\/a> or even <a href=\"\/problem-profiles\/wild-animal-welfare\/\">wild animal suffering<\/a>, since these issues concern present harms and are also incredibly neglected relative to their scale.<\/p>\n<p>Even if you do want to focus on making humanity&#8217;s future go well, you might feel the risks from future AI systems are just too uncertain. In that case, you&#8217;d probably choose to work on threats that feel more concrete at this stage, like <a href=\"\/problem-profiles\/great-power-conflict\/\">catastrophic wars<\/a>.<\/p>\n<p>It&#8217;s certainly reasonable to prioritise working on something else over AI risks. 
It would be arrogant to claim we&#8217;ve figured out all the world&#8217;s problems well enough to know the most pressing ones are <em>definitely<\/em> all downstream of powerful AI.<\/p>\n<p>But we still think focusing on the risks from advanced AI is often a bet worth making, because:<\/p>\n<ul>\n<li>As we&#8217;ve explained, there&#8217;s a material possibility future AI systems could cause humans to go extinct or permanently lose control of the future. <\/li>\n<li>And as time goes on, more and more of the theoretical reasons for concern \u2014 like <a href=\"\/problem-profiles\/risks-from-power-seeking-ai\/\">the potential for deceptive behaviour in AI systems<\/a> \u2014 are being borne out in practice. <\/li>\n<li>If AI does transform the world, this would probably shape all the other challenges society faces, and dictate how they can or should be addressed. For example, what happens with AI might determine the military capabilities of the world&#8217;s greatest powers, as well as the diplomatic tools they use to handle conflicts. So making sure powerful AI is handled responsibly could be a big component of addressing many other world problems. <\/li>\n<\/ul>\n<p>We don&#8217;t think <em>everyone<\/em> reading this should drop what they&#8217;re doing to work on AI risks, and we&#8217;re still excited to see people make progress on other pressing problems. 
But if you can find a role focused on AI risks that really suits you, we think there&#8217;s a very good chance that&#8217;s the highest <a href=\"\/articles\/problem-framework\/\">expected impact<\/a> thing you could do.<\/p>\n<\/div><\/div><\/div>\n<\/div>\n<h2><span id=\"whats-next\" class=\"toc-anchor\"><\/span>What&#8217;s next?<\/h2>\n<p>Inspired to work on addressing the risks from advanced AI?<\/p>\n<p><script>\n    function getLocationString(arr) {\n      if (arr.length <= 3) { \n        return arr.join(\"<br \/>\");\n      }\n      return arr.slice(0, 3).join(\"<br \/>\") + \"...\";\n    }\n  <\/script><script>\n    function getUniqueCompanyJobs(jobs, limit) {\n      const uniqueCompanies = new Set();\n      const uniqueJobs = [];\n      const additionalJobs = [];\n      for (const job of jobs) {\n          const company = job.company_name;\n          if (!uniqueCompanies.has(company)) {\n              uniqueCompanies.add(company);\n              uniqueJobs.push(job);\n          } else {\n              additionalJobs.push(job);\n          }\n      }\n      return uniqueJobs.concat(additionalJobs).slice(0, limit);\n    }\n  <\/script><script>\n    window.addEventListener(\"load\", function() {\n        const container = document.querySelector(\"#vacancies-1\");\n        if (container) {\n          const searchClient = algoliasearch(\"W6KM1UDIB3\", \"d1d7f2c8696e7b36837d5ed337c4a319\");\n          searchClient.initIndex(\"jobs_prod\"); \n          const search = instantsearch({\n            indexName: \"jobs_prod\",\n            searchClient,\n          });\n          search.addWidget(\n            instantsearch.widgets.configure({\n              facetFilters: [[\"tags_area:AI safety & policy\"]],\n              hitsPerPage: 10,\n            })\n          );\n          search.addWidget({\n            render(options) {\n              const results = getUniqueCompanyJobs(options.results.hits, 5);\n              results.forEach(item => {\n                
item.post_pk = DOMPurify.sanitize(item.post_pk);\n                item.company.logo_url = DOMPurify.sanitize(item.company.logo_url);\n                item.title = DOMPurify.sanitize(item.title);\n                item.company.name = DOMPurify.sanitize(item.company.name);\n                item.card_locations = DOMPurify.sanitize(getLocationString(item.card_locations));\n                item.posted_at_relative = DOMPurify.sanitize(item.posted_at_relative);\n              });\n              container.innerHTML = results.map(item => {\n                return `<\/p>\n<li class=\"vacancy border\">\n                    <a href=\"https:\/\/jobs.80000hours.org\/?jobPk=${item.post_pk}\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"vacancy-summary pt-2 pb-2\"><\/p>\n<div class=\"col-12\">\n<div class=\"row\" style=\"position: relative;\">\n<div class=\"col-sm-8\" style=\"overflow: hidden;\">\n<div class=\"vacancy__org-logo\">\n                              <img decoding=\"async\" src=\"${item.company.logo_url}\">\n                            <\/div>\n<div class=\"vacancy__job-title-and-org-name\">\n<h5 class=\"vacancy__job-title tw--line-clamp-2\">${item.title}<\/h5>\n<p class=\"vacancy__org-name tw--line-clamp-2\">${item.company.name}<\/p><\/div><\/div>\n<div class=\"col-sm-4 text-right hidden-xs vacancy__location-and-date-listed\">\n<p class=\"pr-1\">${item.card_locations}<br \/>${item.posted_at_relative}<\/p><\/div><\/div><\/div>\n<p>                    <\/a>\n                  <\/li>\n<p>`;\n              }).join(\"\");\n            }\n          });\n          search.start();\n        }\n      });\n    <\/script><\/p>\n<p>Our job board features opportunities in AI technical safety and governance:<\/p>\n<ul id=\"vacancies-1\" class=\"!tw--p-0 no-visited-styling disable-url-preview-on-hover-for-descendants\"><\/ul>\n<p><a href=https:\/\/jobs.80000hours.org\/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy class=\"btn btn-primary\" 
target=\"_blank\">View all opportunities<\/a><\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h3><span id=\"want-one-on-one-advice-on-pursuing-this-path\" class=\"toc-anchor\"><\/span>Want one-on-one advice on pursuing this path?<\/h3>\n<p>If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.<\/p>\n<p>We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.<\/p>\n<p><a href=\"\/speak-with-us\/\" title=\"\" class=\"btn btn-primary\">APPLY TO SPEAK WITH OUR TEAM<\/a><\/p>\n<\/div>\n<h2><span id=\"learn-more\" class=\"toc-anchor\"><\/span>Learn more<\/h2>\n<ul>\n<li><a href=\"https:\/\/web.archive.org\/web\/20260219143924\/https:\/\/www.forethought.org\/research\/preparing-for-the-intelligence-explosion\">Preparing for the intelligence explosion<\/a> by Will MacAskill and Fin Moorhouse<\/li>\n<li><a href=\"\/articles\/how-ai-driven-feedback-loops-could-make-things-very-crazy-very-fast\/\">How AI-driven feedback loops could make things very crazy, very fast<\/a> by Benjamin Todd<\/li>\n<li><a href=\"https:\/\/www.cold-takes.com\/most-important-century\/\">The Most Important Century<\/a>, a series by Holden Karnofsky and other authors<\/li>\n<li><a href=\"https:\/\/www.openphilanthropy.org\/research\/could-advanced-ai-drive-explosive-economic-growth\/\">Could advanced AI drive explosive economic growth?<\/a> by Tom Davidson<\/li>\n<li><a href=\"https:\/\/ai-2027.com\/\">AI 2027<\/a> by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean (we&#8217;re sceptical that things will play out <em>quite as quickly<\/em> as in the scenario this describes \u2014 and in fact, the authors have published <a href=\"https:\/\/www.aifuturesmodel.com\/\">an updated model<\/a> with more modest predictions of when AI will reach certain capabilities 
milestones) <\/li>\n<li>Our <a href=\"\/ai\/guide\/summary\/\">AI guide summary<\/a> <\/li>\n<li><a href=\"\/ai\/guide\/when-will-agi-arrive\/\">Will we have AGI by 2030?<\/a> by Benjamin Todd<\/li>\n<li><a href=\"\/2025\/03\/when-do-experts-expect-agi-to-arrive\/\">When do experts expect AGI?<\/a> by Benjamin Todd<\/li>\n<\/ul>\n<p><a href=\"\/ai\" title=\"\" class=\"btn btn-primary\">Read more about AGI careers<\/a><\/p>\n<h2><span id=\"acknowledgements\" class=\"toc-anchor\"><\/span>Acknowledgements<\/h2>\n<p><em>Many thanks to Cody Fenwick, who drafted an earlier version of this article (much of which was incorporated here).<\/em><\/p>\n<p><em>Thanks also to Arden Koehler, Adam Bales, Andreas Mogensen, Benjamin Todd, Niel Bowerman, and Aaron Gertler for input.<\/em><\/p>\n","protected":false},"author":471,"featured_media":94655,"parent":0,"menu_order":0,"template":"","meta":{"_acf_changed":false,"footnotes":""},"categories":[1353,291],"class_list":["post-94492","problem_profile","type-problem_profile","status-publish","has-post-thumbnail","hentry","category-ai","category-world-problems"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Why AI risks are the world\u2019s most pressing problems | 80,000 Hours<\/title>\n<meta name=\"description\" content=\"AGI could rapidly transform the world, posing existential risks to humanity. 
Here\u2019s why working on AI risks could be the most important use of your career.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The risks of advanced AI\" \/>\n<meta property=\"og:description\" content=\"Could working on AI risks be the highest-impact career choice today? Explore why AI may trigger rapid, dramatic societal change \u2014 and what you can do about it.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-22T16:56:25+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/Neural_network_-_Midjourney_and_Grok.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"47 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/\"},\"author\":{\"name\":\"Zershaaneh Qureshi\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/46f90c575aa95e68cce975168cf4f0f5\"},\"headline\":\"Advanced AI poses the world&#8217;s most pressing problems. Here&#8217;s&nbsp;why.\",\"datePublished\":\"2026-02-24T06:48:11+00:00\",\"dateModified\":\"2026-04-22T16:56:25+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/\"},\"wordCount\":9605,\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/Neural_network_-_Midjourney_and_Grok.png\",\"articleSection\":[\"AI\",\"World problems\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/\",\"name\":\"Why AI risks are the world\u2019s most pressing problems | 80,000 
Hours\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/Neural_network_-_Midjourney_and_Grok.png\",\"datePublished\":\"2026-02-24T06:48:11+00:00\",\"dateModified\":\"2026-04-22T16:56:25+00:00\",\"description\":\"AGI could rapidly transform the world, posing existential risks to humanity. Here\u2019s why working on AI risks could be the most important use of your career.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/#primaryimage\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/Neural_network_-_Midjourney_and_Grok.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/Neural_network_-_Midjourney_and_Grok.png\",\"width\":1024,\"height\":1024,\"caption\":\"Midjourney; prompt suggested by Grok, Public domain, via Wikimedia Commons\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/artificial-intelligence\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/80000hours.org\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Advanced AI poses the world&#8217;s most pressing problems. 
Here&#8217;s&nbsp;why.\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"name\":\"80,000 Hours\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/80000hours.org\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\",\"name\":\"80,000 Hours\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"width\":1500,\"height\":785,\"caption\":\"80,000 Hours\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/80000Hours\",\"https:\\\/\\\/x.com\\\/80000hours\",\"https:\\\/\\\/www.youtube.com\\\/user\\\/eightythousandhours\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/46f90c575aa95e68cce975168cf4f0f5\",\"name\":\"Zershaaneh 
Qureshi\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g\",\"caption\":\"Zershaaneh Qureshi\"},\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/zershaaneh-qureshi-8744131b4\\\/\"],\"url\":\"https:\\\/\\\/80000hours.org\\\/author\\\/zershaaneh-qureshi\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Why AI risks are the world\u2019s most pressing problems | 80,000 Hours","description":"AGI could rapidly transform the world, posing existential risks to humanity. Here\u2019s why working on AI risks could be the most important use of your career.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/","og_locale":"en_US","og_type":"article","og_title":"The risks of advanced AI","og_description":"Could working on AI risks be the highest-impact career choice today? 
Explore why AI may trigger rapid, dramatic societal change \u2014 and what you can do about it.","og_url":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/","og_site_name":"80,000 Hours","article_publisher":"https:\/\/www.facebook.com\/80000Hours","article_modified_time":"2026-04-22T16:56:25+00:00","og_image":[{"url":"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/Neural_network_-_Midjourney_and_Grok.png","width":1024,"height":1024,"type":"image\/png"}],"twitter_card":"summary_large_image","twitter_site":"@80000hours","twitter_misc":{"Est. reading time":"47 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#article","isPartOf":{"@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/"},"author":{"name":"Zershaaneh Qureshi","@id":"https:\/\/80000hours.org\/#\/schema\/person\/46f90c575aa95e68cce975168cf4f0f5"},"headline":"Advanced AI poses the world&#8217;s most pressing problems. 
Here&#8217;s&nbsp;why.","datePublished":"2026-02-24T06:48:11+00:00","dateModified":"2026-04-22T16:56:25+00:00","mainEntityOfPage":{"@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/"},"wordCount":9605,"publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"image":{"@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/Neural_network_-_Midjourney_and_Grok.png","articleSection":["AI","World problems"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/","url":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/","name":"Why AI risks are the world\u2019s most pressing problems | 80,000 Hours","isPartOf":{"@id":"https:\/\/80000hours.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#primaryimage"},"image":{"@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/Neural_network_-_Midjourney_and_Grok.png","datePublished":"2026-02-24T06:48:11+00:00","dateModified":"2026-04-22T16:56:25+00:00","description":"AGI could rapidly transform the world, posing existential risks to humanity. 
Here\u2019s why working on AI risks could be the most important use of your career.","breadcrumb":{"@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#primaryimage","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/Neural_network_-_Midjourney_and_Grok.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2026\/02\/Neural_network_-_Midjourney_and_Grok.png","width":1024,"height":1024,"caption":"Midjourney; prompt suggested by Grok, Public domain, via Wikimedia Commons"},{"@type":"BreadcrumbList","@id":"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/80000hours.org\/"},{"@type":"ListItem","position":2,"name":"Advanced AI poses the world&#8217;s most pressing problems. 
Here&#8217;s&nbsp;why."}]},{"@type":"WebSite","@id":"https:\/\/80000hours.org\/#website","url":"https:\/\/80000hours.org\/","name":"80,000 Hours","description":"","publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/80000hours.org\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/80000hours.org\/#organization","name":"80,000 Hours","url":"https:\/\/80000hours.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","width":1500,"height":785,"caption":"80,000 Hours"},"image":{"@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/80000Hours","https:\/\/x.com\/80000hours","https:\/\/www.youtube.com\/user\/eightythousandhours"]},{"@type":"Person","@id":"https:\/\/80000hours.org\/#\/schema\/person\/46f90c575aa95e68cce975168cf4f0f5","name":"Zershaaneh Qureshi","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g","caption":"Zershaaneh 
Qureshi"},"sameAs":["https:\/\/www.linkedin.com\/in\/zershaaneh-qureshi-8744131b4\/"],"url":"https:\/\/80000hours.org\/author\/zershaaneh-qureshi\/"}]}},"_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/94492","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/problem_profile"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/471"}],"version-history":[{"count":25,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/94492\/revisions"}],"predecessor-version":[{"id":96137,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/94492\/revisions\/96137"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media\/94655"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=94492"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=94492"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}