{"id":89689,"date":"2025-04-11T19:18:20","date_gmt":"2025-04-11T19:18:20","guid":{"rendered":"https:\/\/80000hours.org\/?p=89689"},"modified":"2025-10-26T11:35:32","modified_gmt":"2025-10-26T11:35:32","slug":"work-on-ai-risks","status":"publish","type":"post","link":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/","title":{"rendered":"The case for prioritising AI&nbsp;risks"},"content":{"rendered":"<p>Within 5 years, there&#8217;s a <a href=\"https:\/\/80000hours.org\/agi\/guide\/when-will-agi-arrive\/\">real chance<\/a> that AI systems will be created that cause <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/tom-davidson-how-quickly-ai-could-transform-the-world\/\">explosive technological and economic change<\/a>. This would increase the risk of disasters like <a href=\"https:\/\/80000hours.org\/problem-profiles\/great-power-conflict\/#AI-war\">war between the US and China<\/a>, <a href=\"https:\/\/80000hours.org\/problem-profiles\/ai-enabled-power-grabs\/\">concentration of power in a small minority<\/a>, or even <a href=\"https:\/\/80000hours.org\/problem-profiles\/gradual-disempowerment\/\">total loss of human control over the future<\/a>.<\/p>\n<p>Many people \u2014 with a diverse range of skills and experience \u2014 are urgently needed to help mitigate these risks.<\/p>\n<p>I think you should consider making this the focus of your career.<\/p>\n<p>This article explains why.<\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h2><span id=\"get-our-guide-to-high-impact-careers-in-the-age-of-agi\" class=\"toc-anchor\"><\/span>Get our guide to high-impact careers in the age of AGI<\/h2>\n<p>We&#8217;re writing a guide on how to use your career to help make AGI go well. 
Join our newsletter to get updates:<\/p>\n<form data-80k-object-id=\"\" data-80k-form-action=\"newsletter__subscribe\" action=\"\/\" method=\"post\" class=\"form-newsletter-signup form-newsletter-signup-step-1 margin-bottom-smaller\">\n<div class=\"mc-field-group input-group compact-input-group \"> <input type=\"email\" value=\"\" name=\"email\" required class=\"form-control email\" placeholder=\"Email address\" id=\"input_email\" pattern=\"[a-zA-Z0-9._+\\-]+@(?!(gmial\\.com|gnail\\.com|gmai\\.com|gmal\\.com|gmali\\.com|gamil\\.com|gmail\\.co|gmail\\.con|gmail\\.om|yahooo\\.com|yaho\\.com|outlok\\.com|outloo\\.com|hotmial\\.com|hotmail\\.con|hmail\\.com|yopmail\\.com|discardmail\\.com)$)[a-zA-Z0-9.\\-]+\\.[a-zA-Z]{2,}\" title=\"Please enter a valid email address (e.g. user@example.com)\" > <span class=\"submit input-group-btn input-group-btn-right\"> <input type=\"submit\" id=\"mc-embedded-subscribe\" value=\"Subscribe\" class=\"btn btn-primary \" \/> <\/span> <\/div>\n<div> <input name=\"_eightyk_action\" value=\"mailchimp_add_subscriber\" type=\"hidden\"> <input name=\"_eightyk_mailchimp_interest_groups\" value=\"3cdd6e7e28\" type=\"hidden\"> <input name=\"redirect_path_after_step_2\" value=\"\/newsletter\/welcome\/\" type=\"hidden\"> <\/div>\n<div style=\"position: absolute; left: -5000px;\"> <input type=\"text\" name=\"b_abc12f58bbe8075560abdc5b7_43bc1ae55c\" tabindex=\"-1\" value=\"\"> <\/div>\n<p> <input name=\"signup_campaign\" value=\"ai_call_to_arms\" type=\"hidden\"> <\/form>\n<p>And, if you&#8217;d like to work on reducing catastrophic risks from AI, apply to <a href=\"https:\/\/80000hours.org\/speak-with-us\/\">speak to us one-on-one<\/a>.<\/p>\n<\/div>\n<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#get-our-guide-to-high-impact-careers-in-the-age-of-agi\"><span class=\"toc_number toc_depth_1\">1<\/span> Get our guide to high-impact careers in the age of 
AGI<\/a><\/li><li><a href=\"#1-world-changing-ai-systems-could-come-much-sooner-than-people-expect\"><span class=\"toc_number toc_depth_1\">2<\/span> 1) World-changing AI systems could come much sooner than people expect<\/a><\/li><li><a href=\"#2-the-impact-on-society-could-be-explosive\"><span class=\"toc_number toc_depth_1\">3<\/span> 2) The impact on society could be explosive<\/a><\/li><li><a href=\"#3-advanced-ai-could-bring-enormous-dangers\"><span class=\"toc_number toc_depth_1\">4<\/span> 3) Advanced AI could bring enormous dangers<\/a><\/li><li><a href=\"#4-under-10000-people-work-full-time-reducing-the-risks\"><span class=\"toc_number toc_depth_1\">5<\/span> 4) Under 10,000 people work full-time reducing the risks<\/a><\/li><li><a href=\"#5-there-are-more-and-more-concrete-jobs-you-could-take\"><span class=\"toc_number toc_depth_1\">6<\/span> 5) There are more and more concrete jobs you could take<\/a><\/li><li><a href=\"#6-the-next-five-years-seem-crucial\"><span class=\"toc_number toc_depth_1\">7<\/span> 6) The next five years seem crucial<\/a><\/li><li><a href=\"#the-bottom-line\"><span class=\"toc_number toc_depth_1\">8<\/span> The bottom line<\/a><\/li><li><a href=\"#whats-next\"><span class=\"toc_number toc_depth_1\">9<\/span> What&#8217;s next?<\/a><\/li><\/ul><\/div>\n<h2><span id=\"1-world-changing-ai-systems-could-come-much-sooner-than-people-expect\" class=\"toc-anchor\"><\/span>1) World-changing AI systems could come much sooner than people expect<\/h2>\n<p>In an earlier article I explained why there&#8217;s a <a href=\"https:\/\/80000hours.org\/agi\/guide\/when-will-agi-arrive\/\">significant chance that AI could<\/a> contribute to scientific research or automate many jobs by 2030. Current systems can already do a lot, and there are clear ways to continue improving them in the coming years. 
<a href=\"https:\/\/80000hours.org\/2025\/03\/when-do-experts-expect-agi-to-arrive\/\">Forecasters and experts widely agree<\/a> that the probability of widespread disruption is much higher than it was even just a couple of years ago.<\/p>\n<figure class=\"wp-caption\" >\n<img decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-28-at-11.18.59.png\" alt=\"Graph of the lengths of tasks AIs can complete, updated in April 2025 for o3\"><figcaption > AI systems are rapidly becoming more autonomous, as measured by the <a href=\"https:\/\/benjamintodd.substack.com\/p\/the-most-important-graph-in-ai-right\">METR time horizon benchmark<\/a>. The most recent models, such as o3, seem to be on an even faster trend that started in 2024.<br \/>\n<\/figcaption><\/figure>\n<h2><span id=\"2-the-impact-on-society-could-be-explosive\" class=\"toc-anchor\"><\/span>2) The impact on society could be explosive<\/h2>\n<p>People say AI will be transformative, but few really get just how wild it could be. Here are three types of explosive impact we might see, which are now all supported by credible theoretical and empirical research:<\/p>\n<ul>\n<li>\n<p><strong><a href=\"https:\/\/www.forethought.org\/research\/preparing-for-the-intelligence-explosion\">The intelligence explosion<\/a><\/strong>: it might only take a few years from developing advanced AI to having billions of AI remote workers, making cognitive labour available for pennies.<\/p>\n<\/li>\n<li>\n<p><strong><a href=\"https:\/\/www.forethought.org\/research\/preparing-for-the-intelligence-explosion#the-technology-explosion\">The technological explosion<\/a><\/strong>: empirically informed estimates suggest that with sufficiently advanced AI, <a href=\"https:\/\/www.forethought.org\/research\/preparing-for-the-intelligence-explosion\">100 years of technological progress in 10<\/a> is plausible. 
That means we could have <a href=\"https:\/\/darioamodei.com\/machines-of-loving-grace#1-biology-and-health\">advanced biotech<\/a>, robotics, novel political philosophies, and more arrive much sooner than commonly imagined.<\/p>\n<\/li>\n<li>\n<p><strong><a href=\"https:\/\/epoch.ai\/blog\/explosive-growth-from-ai-a-review-of-the-arguments\">The industrial explosion<\/a><\/strong>: if AI and robotics automate industrial production, that would create a positive feedback loop, meaning production could plausibly end up doubling each year. Within a decade of reaching that growth rate, humanity would harvest all available solar energy on Earth and start to expand into space.<\/p>\n<\/li>\n<\/ul>\n<p>Along the way, we could see rapid progress on many key technological challenges \u2014 like curing cancer and developing green energy. But\u2026<\/p>\n<figure class=\"wp-caption\" >\n<img decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Human_vs_AI_research_wide.png\" alt=\"intelligence explosion\"><figcaption > The number of AI models is growing extremely fast. If they can start to substitute for scientific researchers, then the effective size of the scientific community would grow at that rate, leading to faster scientific progress. From <a href=\"https:\/\/www.forethought.org\/research\/preparing-for-the-intelligence-explosion\">Preparing for the intelligence explosion<\/a> by Forethought Research<br \/>\n<\/figcaption><\/figure>\n<h2><span id=\"3-advanced-ai-could-bring-enormous-dangers\" class=\"toc-anchor\"><\/span>3) Advanced AI could bring enormous dangers<\/h2>\n<p>We&#8217;ve <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">written before<\/a> about how it might be hard to keep control of billions of AI systems thinking 10x faster than ourselves. But that&#8217;s only the first hurdle. The developments above could:<\/p>\n<ul>\n<li>Destabilise the world order (e.g. 
leading to conflict over <a href=\"https:\/\/www.youtube.com\/watch?v=bf1W-_x6Rvo\">Taiwan<\/a>)<\/li>\n<li>Enable the development of new weapons of mass destruction, like <a href=\"https:\/\/80000hours.org\/problem-profiles\/preventing-catastrophic-pandemics\/\">man-made viruses<\/a><\/li>\n<li>Empower governments (or even individual companies) to <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/tom-davidson-ai-enabled-human-power-grabs\/\">entrench their power<\/a><\/li>\n<li>Force us to face <a href=\"https:\/\/www.youtube.com\/watch?v=SjSl2re_Fm8\">civilisation-defining questions<\/a> about how to <a href=\"https:\/\/80000hours.org\/problem-profiles\/moral-status-digital-minds\/\">treat AI systems<\/a>, how to share the benefits of AI, and <a href=\"https:\/\/80000hours.org\/problem-profiles\/space-governance\/\">how to govern<\/a> an expansion into space.<\/li>\n<\/ul>\n<h2><span id=\"4-under-10000-people-work-full-time-reducing-the-risks\" class=\"toc-anchor\"><\/span>4) Under 10,000 people work full-time reducing the risks<\/h2>\n<p>Although it can feel like all anyone talks about is AI, only a few thousand people worldwide work full-time on navigating some of the most important aspects of the risks.<\/p>\n<p>This is tiny compared to the millions working on more established issues like cancer or <a href=\"https:\/\/80000hours.org\/problem-profiles\/climate-change\/\">climate change<\/a>, or the number of people working to deploy the technology as quickly as possible.<\/p>\n<p>If you switch to this issue now, you could be among the first 10,000 people helping humanity navigate what may be one of the most important transitions in history.<\/p>\n<h2><span id=\"5-there-are-more-and-more-concrete-jobs-you-could-take\" class=\"toc-anchor\"><\/span>5) There are more and more concrete jobs you could take<\/h2>\n<p>A couple of years ago, there weren&#8217;t many clearly defined projects, positions or training routes to work on this issue. 
Today, there are more and more concrete ways to help. For example:<\/p>\n<ul>\n<li>See this <a href=\"https:\/\/www.openphilanthropy.org\/request-for-proposals-technical-ai-safety-research\/\">list of technical safety projects<\/a><\/li>\n<li>Join one of the many growing AI policy think tanks around the world<\/li>\n<li>Work to improve <a href=\"https:\/\/forecastingresearch.org\/\">forecasting<\/a> and <a href=\"https:\/\/epoch.ai\/\">data about AI<\/a><\/li>\n<li>Build defences <a href=\"https:\/\/securebio.org\/resources\/\">against man-made viruses<\/a>, like better PPE and detection tools<\/li>\n<li>And <a href=\"https:\/\/80000hours.org\/agi\/guide\/summary\/\">more<\/a><\/li>\n<\/ul>\n<p>We&#8217;ve compiled a <a href=\"https:\/\/jobs.80000hours.org\/organisations?refinementList[problem_areas][0]=AI+safety+%26+policy&amp;refinementList[problem_areas][1]=Biosecurity+%26+pandemic+preparedness&amp;refinementList[problem_areas][2]=AI+technical+safety&amp;refinementList[problem_areas][3]=China-Western+relations&amp;refinementList[problem_areas][4]=Forecasting\">list of 30+ important organisations<\/a> in the space, <a href=\"https:\/\/jobs.80000hours.org\/jobs?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&amp;refinementList%5Btags_area%5D%5B1%5D=Biosecurity%20%26%20pandemic%20preparedness\">over 300 open jobs<\/a>, and <a href=\"https:\/\/jobs.80000hours.org\/collections\">lists of fellowships, courses, internships<\/a>, etc., to help you enter the field. 
Many of these are well paid too.<\/p>\n<p><strong>You don&#8217;t need to be technical or even focus directly on AI<\/strong> \u2014 we need people building organisations, working in government and communications, and bringing many other skills. And AI is going to affect every aspect of society, so people with knowledge of all those aspects are needed (e.g. China, economics, pandemics, international governance, law).<\/p>\n<p>The field was also small until recently, so there aren&#8217;t many people with deep expertise. That means it&#8217;s often possible to spend about 100 hours reading and speaking to people, and then find a job. (And if you have a quantitative background, it&#8217;s possible to get to the technical forefront in under a year.) Our team can help you <a href=\"https:\/\/80000hours.org\/speak-with-us\/\">figure out how to transition<\/a>.<\/p>\n<h2><span id=\"6-the-next-five-years-seem-crucial\" class=\"toc-anchor\"><\/span>6) The next five years seem crucial<\/h2>\n<p><a href=\"https:\/\/80000hours.org\/agi\/guide\/when-will-agi-arrive\/#iii-why-the-next-5-years-are-crucial\">I&#8217;ve argued<\/a> that the chance of building powerful AI is unusually high between now and around 2030, and declines thereafter. This makes the next five years especially critical.<\/p>\n<p>That creates an additional reason to switch soon: if transformative AI emerges in the next five years, you&#8217;ll be part of one of the most important transitions in human history. 
If it doesn&#8217;t, you&#8217;ll have time to return to your previous path, while having learned about a technology that will still shape our world in significant ways.<\/p>\n<h2><span id=\"the-bottom-line\" class=\"toc-anchor\"><\/span>The bottom line<\/h2>\n<p><strong>If you&#8217;re able to find a role that fits, and that helps mitigate these risks (especially over the next 5\u201310 years), that&#8217;s probably the option with the highest expected impact.<\/strong><\/p>\n<p>But I don&#8217;t think <em>everyone<\/em> reading this should work on AI.<\/p>\n<ol>\n<li>You might not have the flexibility to make a large career change right now. (In that case, you could look to <a href=\"https:\/\/benjamintodd.substack.com\/p\/looks-like-there-are-some-good-funding\">donate<\/a>, <a href=\"https:\/\/www.cold-takes.com\/spreading-messages-to-help-with-the-most-important-century\/\">spread clear thinking<\/a> about the issue, or <a href=\"https:\/\/www.cold-takes.com\/jobs-that-can-help-with-the-most-important-century\/#other-things-you-can-do\">prepare to switch<\/a> when future opportunities arise.)<\/li>\n<li>There are other important problems, and you might have <a href=\"https:\/\/80000hours.org\/career-guide\/personal-fit\">far better fit<\/a> for a job focused on <a href=\"https:\/\/80000hours.org\/problem-profiles\/\">another issue<\/a>.<\/li>\n<li>You might be put off by the (admittedly huge) uncertainties about how best to help, or be less convinced by the arguments that the issue is pressing.<\/li>\n<\/ol>\n<p>However, I&#8217;d encourage almost everyone interested in impactful careers to seriously consider it. 
And if you&#8217;re unsure you&#8217;ll be able to find something, keep in mind there&#8217;s a very wide range of approaches and opportunities, and they&#8217;re expanding all the time.<\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h2><span id=\"whats-next\" class=\"toc-anchor\"><\/span>What&#8217;s next?<\/h2>\n<p>See all our resources on transformative AI, including articles, expert interviews, and our job board:<\/p>\n<p><a href=\"https:\/\/80000hours.org\/agi\/\" title=\"\" class=\"btn btn-primary\">View all resources<\/a><\/p>\n<p>We&#8217;re writing a new guide on how to use your career to help make AGI go well. Join our newsletter to get updates:<\/p>\n<form data-80k-object-id=\"\" data-80k-form-action=\"newsletter__subscribe\" action=\"\/\" method=\"post\" class=\"form-newsletter-signup form-newsletter-signup-step-1 margin-bottom-smaller\">\n<div class=\"mc-field-group input-group compact-input-group \"> <input type=\"email\" value=\"\" name=\"email\" required class=\"form-control email\" placeholder=\"Email address\" id=\"input_email\" pattern=\"[a-zA-Z0-9._+\\-]+@(?!(gmial\\.com|gnail\\.com|gmai\\.com|gmal\\.com|gmali\\.com|gamil\\.com|gmail\\.co|gmail\\.con|gmail\\.om|yahooo\\.com|yaho\\.com|outlok\\.com|outloo\\.com|hotmial\\.com|hotmail\\.con|hmail\\.com|yopmail\\.com|discardmail\\.com)$)[a-zA-Z0-9.\\-]+\\.[a-zA-Z]{2,}\" title=\"Please enter a valid email address (e.g. 
user@example.com)\" > <span class=\"submit input-group-btn input-group-btn-right\"> <input type=\"submit\" id=\"mc-embedded-subscribe\" value=\"Subscribe\" class=\"btn btn-primary \" \/> <\/span> <\/div>\n<div> <input name=\"_eightyk_action\" value=\"mailchimp_add_subscriber\" type=\"hidden\"> <input name=\"_eightyk_mailchimp_interest_groups\" value=\"3cdd6e7e28\" type=\"hidden\"> <input name=\"redirect_path_after_step_2\" value=\"\/newsletter\/welcome\/\" type=\"hidden\"> <\/div>\n<div style=\"position: absolute; left: -5000px;\"> <input type=\"text\" name=\"b_abc12f58bbe8075560abdc5b7_43bc1ae55c\" tabindex=\"-1\" value=\"\"> <\/div>\n<p> <input name=\"signup_campaign\" value=\"ai_call_to_arms\" type=\"hidden\"> <\/form>\n<p>Finally, if you&#8217;d like to work on reducing risks from advanced AI, apply to speak with our <a href=\"https:\/\/80000hours.org\/speak-with-us\/?int_campaign=blog-post\">1-1 advising team<\/a>.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":"[fn 1]\r\n\r\nThank you to Cody Fenwick and Dewi Erwan for help with this article.\r\n\r\n[\/fn]\r\n"},"categories":[1353,1387,315,1315,368,1425,1362],"class_list":["post-89689","post","type-post","status-publish","format-standard","hentry","category-ai","category-ai-policy","category-career-advice-strategy","category-career-planning","category-existential-risk","category-global-priorities-research-career-paths","category-skills-skill-building-and-career-capital-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>The case for prioritising AI risks | 80,000 Hours<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link 
rel=\"canonical\" href=\"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The case for prioritising AI risks | 80,000 Hours\" \/>\n<meta property=\"og:description\" content=\"Within 5 years, there&#039;s a real chance that AI systems will be created that cause explosive technological and economic change. This would increase the risk of disasters like war between US and China, concentration of power in a small minority, or even total loss of human control over the future. Many people \u2014 with a diverse range of skills and experience \u2014 are urgently needed to help mitigate these risks. I think you should consider making this the focus of your career.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:author\" content=\"https:\/\/www.facebook.com\/BenjaminJTodd\" \/>\n<meta property=\"article:published_time\" content=\"2025-04-11T19:18:20+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-26T11:35:32+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-28-at-11.18.59.png\" \/>\n<meta name=\"author\" content=\"Benjamin Todd\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ben_j_todd\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Benjamin Todd\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/\"},\"author\":{\"name\":\"Benjamin Todd\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/a7453684a657f021649e5a325c644c1b\"},\"headline\":\"The case for prioritising AI&nbsp;risks\",\"datePublished\":\"2025-04-11T19:18:20+00:00\",\"dateModified\":\"2025-10-26T11:35:32+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/\"},\"wordCount\":1293,\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/04\\\/Screenshot-2025-04-28-at-11.18.59.png\",\"articleSection\":[\"AI\",\"AI policy\",\"Career advice &amp; strategy\",\"Career planning\",\"Existential risk\",\"Global priorities research\",\"Skills\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/\",\"name\":\"The case for prioritising AI risks | 80,000 
Hours\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/04\\\/Screenshot-2025-04-28-at-11.18.59.png\",\"datePublished\":\"2025-04-11T19:18:20+00:00\",\"dateModified\":\"2025-10-26T11:35:32+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/#primaryimage\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/04\\\/Screenshot-2025-04-28-at-11.18.59.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/04\\\/Screenshot-2025-04-28-at-11.18.59.png\",\"width\":2056,\"height\":1570},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/04\\\/work-on-ai-risks\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/80000hours.org\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The case for prioritising AI&nbsp;risks\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"name\":\"80,000 
Hours\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/80000hours.org\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\",\"name\":\"80,000 Hours\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"width\":1500,\"height\":785,\"caption\":\"80,000 Hours\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/80000Hours\",\"https:\\\/\\\/x.com\\\/80000hours\",\"https:\\\/\\\/www.youtube.com\\\/user\\\/eightythousandhours\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/a7453684a657f021649e5a325c644c1b\",\"name\":\"Benjamin Todd\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e38e0ffb8a6e1935525be41b1e6d3bbce47b3613bb8891f6fad9ca2d5818d744?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e38e0ffb8a6e1935525be41b1e6d3bbce47b3613bb8891f6fad9ca2d5818d744?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e38e0ffb8a6e1935525be41b1e6d3bbce47b3613bb8891f6fad9ca2d5818d744?s=96&d=mm&r=g\",\"caption\":\"Benjamin Todd\"},\"description\":\"Benjamin is the president and cofounder of 80,000 Hours. 
He managed the organisation while it grew from a lecture, to a student society, to the organisation it is today. He also helped to get effective altruism started in Oxford in 2011. This page lists all of his writing at 80,000 Hours. See [some highlights](https:\\\/\\\/benjamintodd.org\\\/#writing). To get a biweekly update that includes Benjamin's new articles, join our newsletter: [form_newsletter] To follow all his other writing and research ideas, [follow him on Twitter](https:\\\/\\\/twitter.com\\\/ben_j_todd). To learn more about him, see [Benjamin's personal website](https:\\\/\\\/benjamintodd.org\\\/), [Instagram](https:\\\/\\\/www.instagram.com\\\/benbentodd\\\/) or [LinkedIn bio](https:\\\/\\\/www.linkedin.com\\\/in\\\/benjamin-j-todd\\\/).\",\"sameAs\":[\"https:\\\/\\\/benjamintodd.org\\\/\",\"https:\\\/\\\/www.facebook.com\\\/BenjaminJTodd\",\"https:\\\/\\\/www.instagram.com\\\/benbentodd\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/in\\\/benjamin-j-todd\\\/\",\"https:\\\/\\\/x.com\\\/ben_j_todd\",\"https:\\\/\\\/en.wikipedia.org\\\/wiki\\\/Benjamin_Todd\"],\"url\":\"https:\\\/\\\/80000hours.org\\\/author\\\/benjamin-todd\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"The case for prioritising AI risks | 80,000 Hours","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/","og_locale":"en_US","og_type":"article","og_title":"The case for prioritising AI risks | 80,000 Hours","og_description":"Within 5 years, there&#039;s a real chance that AI systems will be created that cause explosive technological and economic change. This would increase the risk of disasters like war between US and China, concentration of power in a small minority, or even total loss of human control over the future. 
Many people \u2014 with a diverse range of skills and experience \u2014 are urgently needed to help mitigate these risks. I think you should consider making this the focus of your career.","og_url":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/","og_site_name":"80,000 Hours","article_publisher":"https:\/\/www.facebook.com\/80000Hours","article_author":"https:\/\/www.facebook.com\/BenjaminJTodd","article_published_time":"2025-04-11T19:18:20+00:00","article_modified_time":"2025-10-26T11:35:32+00:00","og_image":[{"url":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-28-at-11.18.59.png","type":"","width":"","height":""}],"author":"Benjamin Todd","twitter_card":"summary_large_image","twitter_creator":"@ben_j_todd","twitter_site":"@80000hours","twitter_misc":{"Written by":"Benjamin Todd","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/#article","isPartOf":{"@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/"},"author":{"name":"Benjamin Todd","@id":"https:\/\/80000hours.org\/#\/schema\/person\/a7453684a657f021649e5a325c644c1b"},"headline":"The case for prioritising AI&nbsp;risks","datePublished":"2025-04-11T19:18:20+00:00","dateModified":"2025-10-26T11:35:32+00:00","mainEntityOfPage":{"@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/"},"wordCount":1293,"publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"image":{"@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-28-at-11.18.59.png","articleSection":["AI","AI policy","Career advice &amp; strategy","Career planning","Existential risk","Global priorities 
research","Skills"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/","url":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/","name":"The case for prioritising AI risks | 80,000 Hours","isPartOf":{"@id":"https:\/\/80000hours.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/#primaryimage"},"image":{"@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-28-at-11.18.59.png","datePublished":"2025-04-11T19:18:20+00:00","dateModified":"2025-10-26T11:35:32+00:00","breadcrumb":{"@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/#primaryimage","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-28-at-11.18.59.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-28-at-11.18.59.png","width":2056,"height":1570},{"@type":"BreadcrumbList","@id":"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/80000hours.org\/"},{"@type":"ListItem","position":2,"name":"The case for prioritising AI&nbsp;risks"}]},{"@type":"WebSite","@id":"https:\/\/80000hours.org\/#website","url":"https:\/\/80000hours.org\/","name":"80,000 
Hours","description":"","publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/80000hours.org\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/80000hours.org\/#organization","name":"80,000 Hours","url":"https:\/\/80000hours.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","width":1500,"height":785,"caption":"80,000 Hours"},"image":{"@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/80000Hours","https:\/\/x.com\/80000hours","https:\/\/www.youtube.com\/user\/eightythousandhours"]},{"@type":"Person","@id":"https:\/\/80000hours.org\/#\/schema\/person\/a7453684a657f021649e5a325c644c1b","name":"Benjamin Todd","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/e38e0ffb8a6e1935525be41b1e6d3bbce47b3613bb8891f6fad9ca2d5818d744?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/e38e0ffb8a6e1935525be41b1e6d3bbce47b3613bb8891f6fad9ca2d5818d744?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e38e0ffb8a6e1935525be41b1e6d3bbce47b3613bb8891f6fad9ca2d5818d744?s=96&d=mm&r=g","caption":"Benjamin Todd"},"description":"Benjamin is the president and cofounder of 80,000 Hours. He managed the organisation while it grew from a lecture, to a student society, to the organisation it is today. He also helped to get effective altruism started in Oxford in 2011. This page lists all of his writing at 80,000 Hours. See [some highlights](https:\/\/benjamintodd.org\/#writing). 
To get a biweekly update that includes Benjamin's new articles, join our newsletter: [form_newsletter] To follow all his other writing and research ideas, [follow him on Twitter](https:\/\/twitter.com\/ben_j_todd). To learn more about him, see [Benjamin's personal website](https:\/\/benjamintodd.org\/), [Instagram](https:\/\/www.instagram.com\/benbentodd\/) or [LinkedIn bio](https:\/\/www.linkedin.com\/in\/benjamin-j-todd\/).","sameAs":["https:\/\/benjamintodd.org\/","https:\/\/www.facebook.com\/BenjaminJTodd","https:\/\/www.instagram.com\/benbentodd\/","https:\/\/www.linkedin.com\/in\/benjamin-j-todd\/","https:\/\/x.com\/ben_j_todd","https:\/\/en.wikipedia.org\/wiki\/Benjamin_Todd"],"url":"https:\/\/80000hours.org\/author\/benjamin-todd\/"}]}},"_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts\/89689","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/comments?post=89689"}],"version-history":[{"count":0,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts\/89689\/revisions"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=89689"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=89689"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}