{"id":87570,"date":"2024-09-16T11:50:23","date_gmt":"2024-09-16T11:50:23","guid":{"rendered":"https:\/\/80000hours.org\/?p=87570"},"modified":"2025-10-03T15:49:19","modified_gmt":"2025-10-03T15:49:19","slug":"why-experts-and-forecasters-disagree-about-ai-risk","status":"publish","type":"post","link":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/","title":{"rendered":"Why experts and forecasters disagree about AI&nbsp;risk"},"content":{"rendered":"<p>This week we&#8217;re highlighting:<\/p>\n<ul>\n<li>Our new interview with <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/ezra-karger-forecasting-existential-risks\/\">Ezra Karger on what superforecasters and experts think about existential risks<\/a><\/li>\n<li>The Forecasting Research Institute&#8217;s report on its <a href=\"https:\/\/forecastingresearch.org\/xpt\">Existential Risk Persuasion Tournament results<\/a><\/li>\n<li>Our problem profile on <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">preventing an AI-related catastrophe<\/a> <\/li>\n<\/ul>\n<p>The idea this week: <strong>even some sceptics of AI risk think there&#8217;s a real chance of a catastrophe in the next 1,000 years<\/strong>.<\/p>\n<p>That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about <a href=\"https:\/\/80000hours.org\/articles\/existential-risks\/\">existential risk<\/a>.<\/p>\n<p>It&#8217;s hard to get to a consensus on the level of risk we face from AI. 
So FRI conducted the <a href=\"https:\/\/forecastingresearch.org\/xpt\">Existential Risk Persuasion Tournament<\/a> to investigate these disagreements and find out whether they could be resolved.<\/p>\n<p>The interview covers a lot of issues, but here are some key details that stood out on the topic of AI risk:<\/p>\n<ul>\n<li>Domain experts in AI estimated a <strong>3% chance of <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/\">AI-caused human extinction<\/a><\/strong> by 2100 on average, while <strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Superforecaster\">superforecasters<\/a> put it at just 0.38%<\/strong>.<\/li>\n<li>Both groups agreed on <strong>a high likelihood of &#8220;powerful AI&#8221; being developed by 2100<\/strong> (around 90%).<\/li>\n<li>Even AI risk sceptics saw a 30% chance of catastrophic AI outcomes <strong>over a 1,000-year timeframe.<\/strong><\/li>\n<li>But the groups <strong>showed little convergence<\/strong> after extensive debate, suggesting some deep-rooted disagreements.<\/li>\n<\/ul>\n<p>Ezra&#8217;s research found some <strong>key differences in how these groups view the world<\/strong>:<\/p>\n<ul>\n<li>Sceptics tend to see change as gradual, while concerned experts anticipate more abrupt shifts.<\/li>\n<li>There were divergent views on humanity&#8217;s ability to <a href=\"https:\/\/80000hours.org\/career-reviews\/ai-policy-and-strategy\/\">coordinate and regulate AI development<\/a>.<\/li>\n<li>Sceptics generally view the world as more resilient to catastrophic risks.<\/li>\n<\/ul>\n<p>You can check out the section of the interview that discusses <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/ezra-karger-forecasting-existential-risks\/#hypotheses-about-stark-differences-in-views-of-ai-risk-015141\">Ezra&#8217;s hypotheses about the reasons behind these stark differences in views of AI risk<\/a>.<\/p>\n<p>There were a lot of important takeaways from the interview for me. 
First, I&#8217;m excited to see this kind of novel work applying interesting techniques to try to resolve important disagreements. I think there&#8217;s probably a lot more to be learned by iterating on these methods and applying them in other domains.<\/p>\n<p>Second, I think it demonstrates an admirable willingness to learn from disagreements, rather than just engaging in unproductive public fights.<\/p>\n<p>And third, I think it highlights that even though we can improve our understanding of these issues, some of the most important questions are going to remain highly uncertain. And we&#8217;re going to need to figure out how to proceed \u2014 in our careers, in our lives, and as a civilisation \u2014 in the face of that uncertainty.<\/p>\n<p>You can find the full interview on our <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/ezra-karger-forecasting-existential-risks\/#hypotheses-about-stark-differences-in-views-of-ai-risk-015141\">website<\/a>, <a href=\"https:\/\/podcasts.apple.com\/us\/podcast\/80-000-hours-podcast\/id1245002988\">Apple Podcasts<\/a>, <a href=\"https:\/\/www.youtube.com\/watch?v=1WQzQ5hhz7k&amp;list=PL-BRtcBm4Yj4aKn72p4PjyqHh0ZQFdI1A&amp;index=1\">YouTube<\/a>, <a href=\"https:\/\/open.spotify.com\/episode\/3AEqj0Nx5IpSt5TmadO9t1\">Spotify<\/a>, and elsewhere. 
Feel free to share it with anyone who might find it useful.<\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<p>This blog post was first released to our newsletter subscribers.<\/p>\n<p><strong>Join over 500,000 newsletter subscribers<\/strong> who get content like this in their inboxes <span class=\"ab-90-var hidden\">regularly<\/span><span class=\"ab-90-og\">weekly<\/span> \u2014 and we&#8217;ll also mail you a free book!<\/p>\n<form data-80k-object-id=\"\" data-80k-form-action=\"newsletter__subscribe\" action=\"\/\" method=\"post\" class=\"form-newsletter-signup form-newsletter-signup-step-1 margin-bottom-smaller\">\n<div class=\"mc-field-group input-group compact-input-group \"> <input type=\"email\" value=\"\" name=\"email\" required class=\"form-control email\" placeholder=\"Email address\" id=\"input_email\" pattern=\"[a-zA-Z0-9._+\\-]+@(?!(gmial\\.com|gnail\\.com|gmai\\.com|gmal\\.com|gmali\\.com|gamil\\.com|gmail\\.co|gmail\\.con|gmail\\.om|yahooo\\.com|yaho\\.com|outlok\\.com|outloo\\.com|hotmial\\.com|hotmail\\.con|hmail\\.com|yopmail\\.com|discardmail\\.com)$)[a-zA-Z0-9.\\-]+\\.[a-zA-Z]{2,}\" title=\"Please enter a valid email address (e.g. 
user@example.com)\" > <span class=\"submit input-group-btn input-group-btn-right\"> <input type=\"submit\" id=\"mc-embedded-subscribe\" value=\"GET RESEARCH UPDATES\" class=\"btn btn-primary \" \/> <\/span> <\/div>\n<div> <input name=\"_eightyk_action\" value=\"mailchimp_add_subscriber\" type=\"hidden\"> <input name=\"redirect_path_after_step_2\" value=\"\/newsletter\/welcome\/\" type=\"hidden\"> <\/div>\n<div style=\"position: absolute; left: -5000px;\"> <input type=\"text\" name=\"b_abc12f58bbe8075560abdc5b7_43bc1ae55c\" tabindex=\"-1\" value=\"\"> <\/div>\n<\/form>\n<\/div>\n<p><strong>Learn more<\/strong>:<\/p>\n<ul>\n<li>Rohin Shah on <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/rohin-shah-deepmind-doomers-and-doubters\/\">DeepMind and trying to fairly hear out both AI doomers and doubters<\/a><\/li>\n<li>Phil Tetlock on <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/prof-tetlock-predicting-the-future\/\">predicting catastrophes, why keep your politics secret, and when experts know more than you<\/a><\/li>\n<li>Spencer Greenberg <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/spencer-greenberg-money-happiness-hype-value\/\">on causation without correlation, money and happiness, lightgassing, hype vs value, and more<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":418,"featured_media":87572,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1315,1388,368,1390,1371,1391,1362],"class_list":["post-87570","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-career-planning","category-communication-career-paths","category-existential-risk","category-government-policy-career-paths","category-philosophy","category-research-in-relevant-areas","category-skills-skill-building-and-career-capital-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the 
Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Why experts and forecasters disagree about AI risk | 80,000 Hours<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why experts and forecasters disagree about AI risk | 80,000 Hours\" \/>\n<meta property=\"og:description\" content=\"This week we&#039;re highlighting: Our new interview with Ezra Karger on what superforecasters and experts think about existential risks\nThe Forecasting Research Institute&#039;s report on its Existential Risk Persuasion Tournament results\nOur problem profile on preventing an AI-related catastrophe The idea this week: even some sceptics of AI risk think there&#039;s a real chance of a catastrophe in the next 1,000 years. That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about existential risk. It&#039;s hard to get to a consensus on the level of risk we face from AI. 
So FRI conducted the Existential Risk Persuasion Tournament to investigate these disagreements and find out whether they could be resolved.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:published_time\" content=\"2024-09-16T11:50:23+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-03T15:49:19+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2024\/09\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1920\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Luisa Rodriguez\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@80000hours\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Luisa Rodriguez\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/\"},\"author\":{\"name\":\"Luisa Rodriguez\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/4d67021faf88967a8832baeabcef572c\"},\"headline\":\"Why experts and forecasters disagree about AI&nbsp;risk\",\"datePublished\":\"2024-09-16T11:50:23+00:00\",\"dateModified\":\"2025-10-03T15:49:19+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/\"},\"wordCount\":531,\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg\",\"articleSection\":[\"Career planning\",\"Communication\",\"Existential risk\",\"Government &amp; policy\",\"Philosophy\",\"Research in relevant areas\",\"Skills\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/\",\"name\":\"Why experts and forecasters disagree about AI risk | 80,000 
Hours\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg\",\"datePublished\":\"2024-09-16T11:50:23+00:00\",\"dateModified\":\"2025-10-03T15:49:19+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/#primaryimage\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2024\\\/09\\\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg\",\"width\":2560,\"height\":1920,\"caption\":\"By Dietmar Rabich, CC BY-SA 4.0\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2024\\\/09\\\/why-experts-and-forecasters-disagree-about-ai-risk\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/80000hours.org\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Why experts and forecasters disagree about 
AI&nbsp;risk\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"name\":\"80,000 Hours\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/80000hours.org\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\",\"name\":\"80,000 Hours\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"width\":1500,\"height\":785,\"caption\":\"80,000 Hours\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/80000Hours\",\"https:\\\/\\\/x.com\\\/80000hours\",\"https:\\\/\\\/www.youtube.com\\\/user\\\/eightythousandhours\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/4d67021faf88967a8832baeabcef572c\",\"name\":\"Luisa 
Rodriguez\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/338d01690d24c87e4e9e63179f3ea168f6d222e47a9f4804a7957353b754b0b4?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/338d01690d24c87e4e9e63179f3ea168f6d222e47a9f4804a7957353b754b0b4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/338d01690d24c87e4e9e63179f3ea168f6d222e47a9f4804a7957353b754b0b4?s=96&d=mm&r=g\",\"caption\":\"Luisa Rodriguez\"},\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/luisa-rodriguez-837835bb\\\/\"],\"url\":\"https:\\\/\\\/80000hours.org\\\/author\\\/luisa-rodriguez\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Why experts and forecasters disagree about AI risk | 80,000 Hours","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/","og_locale":"en_US","og_type":"article","og_title":"Why experts and forecasters disagree about AI risk | 80,000 Hours","og_description":"This week we&#039;re highlighting: Our new interview with Ezra Karger on what superforecasters and experts think about existential risks\nThe Forecasting Research Institute&#039;s report on its Existential Risk Persuasion Tournament results\nOur problem profile on preventing an AI-related catastrophe The idea this week: even some sceptics of AI risk think there&#039;s a real chance of a catastrophe in the next 1,000 years. That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about existential risk. It&#039;s hard to get to a consensus on the level of risk we face from AI. 
So FRI conducted the Existential Risk Persuasion Tournament to investigate these disagreements and find out whether they could be resolved.","og_url":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/","og_site_name":"80,000 Hours","article_publisher":"https:\/\/www.facebook.com\/80000Hours","article_published_time":"2024-09-16T11:50:23+00:00","article_modified_time":"2025-10-03T15:49:19+00:00","og_image":[{"width":2560,"height":1920,"url":"https:\/\/80000hours.org\/wp-content\/uploads\/2024\/09\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg","type":"image\/jpeg"}],"author":"Luisa Rodriguez","twitter_card":"summary_large_image","twitter_creator":"@80000hours","twitter_site":"@80000hours","twitter_misc":{"Written by":"Luisa Rodriguez","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/#article","isPartOf":{"@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/"},"author":{"name":"Luisa Rodriguez","@id":"https:\/\/80000hours.org\/#\/schema\/person\/4d67021faf88967a8832baeabcef572c"},"headline":"Why experts and forecasters disagree about AI&nbsp;risk","datePublished":"2024-09-16T11:50:23+00:00","dateModified":"2025-10-03T15:49:19+00:00","mainEntityOfPage":{"@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/"},"wordCount":531,"publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"image":{"@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2024\/09\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg","articleSection":["Career planning","Communication","Existential 
risk","Government &amp; policy","Philosophy","Research in relevant areas","Skills"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/","url":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/","name":"Why experts and forecasters disagree about AI risk | 80,000 Hours","isPartOf":{"@id":"https:\/\/80000hours.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/#primaryimage"},"image":{"@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2024\/09\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg","datePublished":"2024-09-16T11:50:23+00:00","dateModified":"2025-10-03T15:49:19+00:00","breadcrumb":{"@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/#primaryimage","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2024\/09\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2024\/09\/2880px-Duisburg_Landschaftspark_Duisburg-Nord_Hochofen_5_Halle_-_2024_-_4164_kreativ_4-scaled.jpg","width":2560,"height":1920,"caption":"By Dietmar Rabich, CC BY-SA 
4.0"},{"@type":"BreadcrumbList","@id":"https:\/\/80000hours.org\/2024\/09\/why-experts-and-forecasters-disagree-about-ai-risk\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/80000hours.org\/"},{"@type":"ListItem","position":2,"name":"Why experts and forecasters disagree about AI&nbsp;risk"}]},{"@type":"WebSite","@id":"https:\/\/80000hours.org\/#website","url":"https:\/\/80000hours.org\/","name":"80,000 Hours","description":"","publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/80000hours.org\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/80000hours.org\/#organization","name":"80,000 Hours","url":"https:\/\/80000hours.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","width":1500,"height":785,"caption":"80,000 Hours"},"image":{"@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/80000Hours","https:\/\/x.com\/80000hours","https:\/\/www.youtube.com\/user\/eightythousandhours"]},{"@type":"Person","@id":"https:\/\/80000hours.org\/#\/schema\/person\/4d67021faf88967a8832baeabcef572c","name":"Luisa 
Rodriguez","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/338d01690d24c87e4e9e63179f3ea168f6d222e47a9f4804a7957353b754b0b4?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/338d01690d24c87e4e9e63179f3ea168f6d222e47a9f4804a7957353b754b0b4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/338d01690d24c87e4e9e63179f3ea168f6d222e47a9f4804a7957353b754b0b4?s=96&d=mm&r=g","caption":"Luisa Rodriguez"},"sameAs":["https:\/\/www.linkedin.com\/in\/luisa-rodriguez-837835bb\/"],"url":"https:\/\/80000hours.org\/author\/luisa-rodriguez\/"}]}},"_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts\/87570","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/418"}],"replies":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/comments?post=87570"}],"version-history":[{"count":0,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts\/87570\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media\/87572"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=87570"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=87570"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}