{"id":88504,"date":"2025-01-03T10:32:49","date_gmt":"2025-01-03T10:32:49","guid":{"rendered":"https:\/\/80000hours.org\/?p=88504"},"modified":"2025-10-26T11:39:58","modified_gmt":"2025-10-26T11:39:58","slug":"what-happened-with-ai-2024","status":"publish","type":"post","link":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/","title":{"rendered":"What happened with AI in&nbsp;2024?"},"content":{"rendered":"<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#new-ai-models-and-capabilities\"><span class=\"toc_number toc_depth_1\">1<\/span> New AI models and capabilities<\/a><\/li><li><a href=\"#ai-helping-with-science\"><span class=\"toc_number toc_depth_1\">2<\/span> AI helping with science<\/a><\/li><li><a href=\"#some-key-developments-in-ai-risk-and-safety-research\"><span class=\"toc_number toc_depth_1\">3<\/span> Some key developments in AI risk and safety research<\/a><\/li><li><a href=\"#what-you-can-do\"><span class=\"toc_number toc_depth_1\">4<\/span> What you can do<\/a><\/li><\/ul><\/div>\n<p>The idea this week: <strong>despite claims of stagnation, AI research still advanced rapidly in 2024&#46;<\/strong><\/p>\n<p>Some people say <a href=\"https:\/\/edition.cnn.com\/2024\/11\/19\/business\/ai-chatgpt-nvidia-nightcap\/index.html\">AI research has plateaued<\/a>. But a lot of evidence from the last year points in the opposite direction:<\/p>\n<ul>\n<li>New AI models and capabilities emerged  <\/li>\n<li>Research indicates existing AI can accelerate science<\/li>\n<\/ul>\n<p>And at the same time, important findings about AI safety and risk came out (<a href=\"#some-key-developments-in-ai-risk-and-safety-research\">see below<\/a>).<\/p>\n<p>AI advances might still stall. 
Some <a href=\"https:\/\/www.nytimes.com\/2024\/12\/19\/technology\/artificial-intelligence-data-openai-google.html\">leaders in the field<\/a> have warned that a lack of good data, for example, may impede further capability growth, though <a href=\"https:\/\/epoch.ai\/blog\/can-ai-scaling-continue-through-2030\">others disagree<\/a>. Regardless, growth clearly hasn&#8217;t stopped <em>yet<\/em>.<\/p>\n<p>Meanwhile, the aggregate forecast on <a href=\"https:\/\/www.metaculus.com\/questions\/5121\/date-of-general-ai\/\">Metaculus<\/a> of when we&#8217;ll see the first &#8220;general&#8221; AI system \u2014 which would be highly capable across a wide range of tasks \u2014 is 2031&#46;<\/p>\n<p>All of this matters a lot, because AI poses potentially <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/?source=email&amp;uni_id=*|UNI_ID|*&amp;email=*|EMAIL|*\">existential risks<\/a>. We think making sure AI goes well is a <a href=\"https:\/\/80000hours.org\/problem-profiles\/?source=email&amp;uni_id=*|UNI_ID|*&amp;email=*|EMAIL|*\">top pressing world problem<\/a>.<\/p>\n<p>If AI advances fast, this work is not only important but urgent.<\/p>\n<p>Here are some of the key developments in AI from the last year:<\/p>\n<h2><span id=\"new-ai-models-and-capabilities\" class=\"toc-anchor\"><\/span>New AI models and capabilities<\/h2>\n<p>OpenAI announced in late December that its <a href=\"https:\/\/www.youtube.com\/watch?v=SKBG1sqdyIU\">new model o3<\/a> achieved a large leap forward in capabilities. It builds on the o1 language model (also released in 2024), which has the ability to deliberate about its answers before responding. 
With this more advanced capability, o3 reportedly:<\/p>\n<ul>\n<li>Scored a <a href=\"https:\/\/arcprize.org\/blog\/oai-o3-pub-breakthrough\">breakthrough 87.5% on ARC-AGI<\/a>, a test designed to be particularly hard for leading AI systems  <\/li>\n<li>Pushed the frontier of AI software engineering, scoring 71.7% on SWE-bench Verified, a key benchmark of real-world software tasks, compared to 48.9% for o1  <\/li>\n<li>Achieved a 25% score on the new (and extremely challenging) <a href=\"https:\/\/epoch.ai\/frontiermath\">FrontierMath benchmark<\/a> \u2014 while previous leading AI models couldn&#8217;t get above 2%<\/li>\n<\/ul>\n<figure class=\"wp-caption\" >\n<img decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/image2.jpg\" alt=\"Graph showing the exponential increase in scores of OpenAI's models on the ARC-AGI benchmark.\"><figcaption >Scores on the ARC-AGI benchmark from OpenAI&#8217;s models since 2019. Chart from <a href=\"https:\/\/x.com\/goodside\/status\/1870243391814152544\">Riley Goodside<\/a> of ScaleAI.<\/figcaption><\/figure>\n<p>While o3 has not yet been released publicly, it seems clear that it is the most capable language model we&#8217;ve seen. It still has many limitations and weaknesses, but it undermines claims that AI progress stalled in 2024&#46;<\/p>\n<p>o3 may be the most impressive advance of 2024, but the year saw many other major developments:<\/p>\n<ul>\n<li>AI video generation gained steam, as OpenAI released <a href=\"https:\/\/www.theguardian.com\/technology\/2024\/dec\/09\/openai-ai-video-generator-sora-publicly-available\">Sora<\/a> for public use and Google DeepMind launched <a href=\"https:\/\/techcrunch.com\/2024\/12\/16\/google-deepmind-unveils-a-new-video-model-to-rival-sora\/\">Veo<\/a>.  
<\/li>\n<li>Google DeepMind released <a href=\"https:\/\/blog.google\/technology\/ai\/google-deepmind-isomorphic-alphafold-3-ai-model\/\">AlphaFold 3<\/a> \u2014 a successor to a <a href=\"https:\/\/www.nature.com\/articles\/d41586-024-03214-7\">Nobel Prize-winning AI system<\/a> \u2014 which can predict how proteins interact with DNA, RNA, and other structures at the molecular level.  <\/li>\n<li>Anthropic <a href=\"https:\/\/www.anthropic.com\/news\/3-5-models-and-computer-use\">introduced the capability<\/a> for its chatbot Claude to use your computer at your direction.  <\/li>\n<li>AI systems are increasingly able to take <a href=\"https:\/\/simonwillison.net\/2024\/Dec\/31\/llms-in-2024\/#multimodal-vision-is-common-audio-and-video-are-starting-to-emerge\">audio and visual inputs<\/a>, and larger amounts of text, while also engaging with users in voice mode.  <\/li>\n<li>By combining the models <a href=\"https:\/\/www.technologyreview.com\/2024\/07\/25\/1095315\/google-deepminds-ai-systems-can-now-solve-complex-math-problems\/\">AlphaProof and AlphaGeometry 2<\/a>, Google DeepMind was able to use AI to achieve silver medal performance in the International Mathematical Olympiad.  
<\/li>\n<li>The Chinese company <a href=\"https:\/\/techcrunch.com\/2024\/12\/26\/deepseeks-new-ai-model-appears-to-be-one-of-the-best-open-challengers-yet\/\">DeepSeek<\/a> said that its newest model cost only $5.5 million to train \u2014 a dramatic decrease from the reported <a href=\"https:\/\/web.archive.org\/web\/20230418190335\/https:\/\/www.wired.com\/story\/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over\/\">$100 million<\/a> OpenAI spent training the comparably capable GPT-4.<\/li>\n<\/ul>\n<p>And there&#8217;s a lot more that could be included here&#33; We won&#8217;t be surprised if 2025 and 2026 see many more leaps forward in AI capabilities.<\/p>\n<h2><span id=\"ai-helping-with-science\" class=\"toc-anchor\"><\/span>AI helping with science<\/h2>\n<p>Recent research indicates that AI can help speed up scientific progress, including AI research itself:<\/p>\n<ul>\n<li>AI appears to have <a href=\"https:\/\/web.archive.org\/web\/20250129020321\/https:\/\/aidantr.github.io\/files\/AI_innovation.pdf\">improved top material science researchers&#8217; output<\/a> by around 80%, speeding up their ability to generate new ideas. (Update May 2025: This paper has been <a href=\"https:\/\/economics.mit.edu\/news\/assuring-accurate-research-record\">withdrawn from public discourse<\/a> due to concerns about the validity of the data. We no longer believe this paper is evidence of AI&#8217;s ability to accelerate science and innovation. See more details in the footnote.) <\/li>\n<li>AI agents could <a href=\"https:\/\/metr.org\/AI_R_D_Evaluation_Report.pdf\">outperform human experts on ML research engineering tasks<\/a> when given two hours, though humans take the lead over longer periods of time.  
<\/li>\n<li>Researchers found that <a href=\"https:\/\/www.nature.com\/articles\/s41562-024-02046-9\">language models seem to be able to discover fundamental patterns in the neuroscientific literature<\/a>, making them better than human experts at predicting the outcomes of new studies in the field.  <\/li>\n<li>AI-discovered molecules appear <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S135964462400134X?via%3Dihub\">more likely to pass through phase I clinical trials<\/a>.<\/li>\n<\/ul>\n<figure class=\"wp-caption\" >\n<img decoding=\"async\" src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/image1.png\" alt=\"Graph showing AI models outperform humans on ML research engineering tasks at 2 hours but not over longer periods of time.\"><figcaption >METR found that in a limited time window, LLMs can outperform humans on a sample of ML research engineering tasks.<\/figcaption><\/figure>\n<h2><span id=\"some-key-developments-in-ai-risk-and-safety-research\" class=\"toc-anchor\"><\/span>Some key developments in AI risk and safety research<\/h2>\n<p>Meanwhile, we&#8217;ve seen a mix of encouraging and worrying results in research on AI safety. Here are a few of the important publications this year:<\/p>\n<ul>\n<li>A recent paper from Anthropic found that <a href=\"https:\/\/www.anthropic.com\/research\/alignment-faking\">AI models can &#8220;fake&#8221; being aligned<\/a> with their developers&#8217; intentions when they&#8217;re told they&#8217;re being trained, only to abandon these behaviours when deployed.  <\/li>\n<li>Apollo Research found that under certain conditions, <a href=\"https:\/\/www.apolloresearch.ai\/research\/scheming-reasoning-evaluations\">OpenAI&#8217;s o1 model has the capability to scheme to deceive its developers<\/a> and even pretend to be <a href=\"https:\/\/x.com\/jkcarlsmith\/status\/1866232909558112508\">less capable than it really is<\/a>.  
<\/li>\n<li>In a paper on &#8220;Sleeper Agents,&#8221; researchers discovered that <a href=\"https:\/\/www.anthropic.com\/research\/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training\">AI models can have deceptive and unwanted behaviours that standard safety training won&#8217;t eliminate<\/a>.  <\/li>\n<li>A new approach to preventing AI misuse and improving alignment involves using &#8220;circuit breakers&#8221; to <a href=\"https:\/\/arxiv.org\/pdf\/2406.04313\">directly target internal representations of harmful outputs<\/a>, rather than simply trying to train models to refuse to produce harmful outputs.  <\/li>\n<li>Anthropic reported that it successfully identified millions of &#8220;features&#8221; \u2014 patterns of neurons that can be linked to human-understandable concepts \u2014 within one of its frontier AI models. This points the way to <a href=\"https:\/\/www.anthropic.com\/research\/mapping-mind-language-model\">a better understanding of how these systems actually work<\/a>.<\/li>\n<\/ul>\n<h2><span id=\"what-you-can-do\" class=\"toc-anchor\"><\/span>What you can do<\/h2>\n<p>These developments show the fast pace and potential risks of advancing AI. 
To help, you can:<\/p>\n<ul>\n<li>Learn about <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/?source=email&amp;uni_id=*|UNI_ID|*&amp;email=*|EMAIL|*\">the biggest risks AI poses<\/a>  <\/li>\n<li>Learn how to <a href=\"https:\/\/www.cold-takes.com\/spreading-messages-to-help-with-the-most-important-century\/\">carefully communicate about this topic<\/a>  <\/li>\n<li>Learn how to <a href=\"https:\/\/www.oneusefulthing.org\/p\/15-times-to-use-ai-and-5-not-to\">use AI in your own work<\/a>  <\/li>\n<li>Consider switching your career to work in <a href=\"https:\/\/80000hours.org\/career-reviews\/ai-policy-and-strategy\/?source=email&amp;uni_id=*|UNI_ID|*&amp;email=*|EMAIL|*\">AI governance<\/a> or <a href=\"https:\/\/80000hours.org\/career-reviews\/ai-safety-researcher\/?source=email&amp;uni_id=*|UNI_ID|*&amp;email=*|EMAIL|*\">AI safety<\/a>  <\/li>\n<li>Consider other potential <a href=\"https:\/\/www.cold-takes.com\/jobs-that-can-help-with-the-most-important-century\/\">jobs that can help<\/a><\/li>\n<\/ul>\n<p>We also recommend checking out recent posts from our founder Benjamin Todd on:<\/p>\n<ul>\n<li><a href=\"https:\/\/benjamintodd.substack.com\/p\/looks-like-there-are-some-good-funding\">Funding opportunities<\/a> in AI safety  <\/li>\n<li>How <a href=\"https:\/\/benjamintodd.substack.com\/p\/how-can-an-ordinary-person-prepare\">ordinary people can prepare<\/a> for the possibility of transformative AI in the near future<\/li>\n<\/ul>\n<p>We plan to continue covering this topic in the coming year, and we wouldn&#8217;t be surprised to see many additional changes and major AI developments. 
Continue following along with us, and consider sharing this information with your friends by forwarding this email if you find it helpful.<\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<p>This blog post was first released to our newsletter subscribers.<\/p>\n<p><strong>Join over 500,000 newsletter subscribers<\/strong> who get content like this in their inboxes <span class=\"ab-90-var hidden\">regularly<\/span><span class=\"ab-90-og\">weekly<\/span> \u2014 and we&#8217;ll also mail you a free book!<\/p>\n<\/div>\n<p><strong>Learn more<\/strong>:<\/p>\n<ul>\n<li><a href=\"https:\/\/epoch.ai\/blog\/can-ai-scaling-continue-through-2030\">Can AI Scaling Continue Through 2030?<\/a> from Epoch AI  <\/li>\n<li><a href=\"https:\/\/www.oneusefulthing.org\/p\/what-just-happened\">What just happened: A transformative month rewrites the capabilities of AI<\/a> by Ethan Mollick  <\/li>\n<li>An update from <a href=\"https:\/\/importai.substack.com\/p\/import-ai-395-ai-and-energy-demand\">Jack Clark&#8217;s &#8220;Import AI&#8221; newsletter<\/a>: &#8220;OpenAI&#8217;s O3 means AI progress in 2025 will be faster than in 2024.&#8221;  <\/li>\n<li><a href=\"https:\/\/www.transformernews.ai\/p\/synthetic-data-model-collapse-fears\">Synthetic data is more useful than you think<\/a> by Lynnette Bye  <\/li>\n<li><a href=\"https:\/\/www.aisnakeoil.com\/p\/is-ai-progress-slowing-down\">Is AI progress slowing down? Making sense of recent technology trends and claims<\/a> by Arvind Narayanan and Sayash Kapoor  <\/li>\n<li><a href=\"https:\/\/simonwillison.net\/2024\/Dec\/31\/llms-in-2024\/\">Things we learned about LLMs in 2024<\/a> by Simon Willison  <\/li>\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=vnl9Xf3wwU0\">How A.I. 
Could Change Science Forever<\/a> \u2014 a video by Cool Worlds<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":435,"featured_media":88509,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":"[fn frontier] After this was published, reporting indicated that [OpenAI funded the development of this benchmark](https:\/\/techcrunch.com\/2025\/01\/19\/ai-benchmarking-organization-criticized-for-waiting-to-disclose-funding-from-openai\/) without initially acknowledging it, raising some doubts about its credibility. So there's some room for skepticism about these results. \r\n\r\nHowever, since o3 has reportedly showed significant improvements on a range of these impressive benchmarks, we're still confident that it remains a significant advance over the previous o1 model, which has itself been shown to be a major advance over the models that came before.[\/fn]\r\n"},"categories":[1353,1315,1425,1369,1385,1362],"class_list":["post-88504","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-career-planning","category-global-priorities-research-career-paths","category-machine-learning-skill-building-and-career-capital","category-research","category-skills-skill-building-and-career-capital-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>What happened with AI in 2024? | 80,000 Hours<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What happened with AI in 2024? 
| 80,000 Hours\" \/>\n<meta property=\"og:description\" content=\"The idea this week: despite claims of stagnation, AI research still advanced rapidly in 2024&amp;#46; Some people say AI research has plateaued. But a lot of evidence from the last year points in the opposite direction: New capabilities were developed and emerged Research indicates existing AI can accelerate science And at the same time, important findings about AI safety and risk came out (see below). AI advances might still stall. Some leaders in the field have warned that a lack of good data, for example, may impede further capability growth, though others disagree.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:author\" content=\"https:\/\/www.facebook.com\/cody.fenwick\" \/>\n<meta property=\"article:published_time\" content=\"2025-01-03T10:32:49+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-26T11:39:58+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/robot-field.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1456\" \/>\n\t<meta property=\"og:image:height\" content=\"832\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Cody Fenwick\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@codytfenwick\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Cody Fenwick\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/\"},\"author\":{\"name\":\"Cody Fenwick\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/75ac20bab88e70f659caa92bc64fd2cc\"},\"headline\":\"What happened with AI in&nbsp;2024?\",\"datePublished\":\"2025-01-03T10:32:49+00:00\",\"dateModified\":\"2025-10-26T11:39:58+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/\"},\"wordCount\":1153,\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/01\\\/robot-field.png\",\"articleSection\":[\"AI\",\"Career planning\",\"Global priorities research\",\"Machine learning\",\"Research\",\"Skills\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/\",\"name\":\"What happened with AI in 2024? 
| 80,000 Hours\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/01\\\/robot-field.png\",\"datePublished\":\"2025-01-03T10:32:49+00:00\",\"dateModified\":\"2025-10-26T11:39:58+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/#primaryimage\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/01\\\/robot-field.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/01\\\/robot-field.png\",\"width\":1456,\"height\":832},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/2025\\\/01\\\/what-happened-with-ai-2024\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/80000hours.org\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"What happened with AI in&nbsp;2024?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"name\":\"80,000 
Hours\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/80000hours.org\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\",\"name\":\"80,000 Hours\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"width\":1500,\"height\":785,\"caption\":\"80,000 Hours\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/80000Hours\",\"https:\\\/\\\/x.com\\\/80000hours\",\"https:\\\/\\\/www.youtube.com\\\/user\\\/eightythousandhours\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/75ac20bab88e70f659caa92bc64fd2cc\",\"name\":\"Cody Fenwick\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/b15399901921a0324ca7f860d98f72339b991688812084f26d8dd50d5ec79aa5?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/b15399901921a0324ca7f860d98f72339b991688812084f26d8dd50d5ec79aa5?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/b15399901921a0324ca7f860d98f72339b991688812084f26d8dd50d5ec79aa5?s=96&d=mm&r=g\",\"caption\":\"Cody 
Fenwick\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/cody.fenwick\",\"https:\\\/\\\/www.linkedin.com\\\/in\\\/cody-fenwick-8073089b\\\/\",\"https:\\\/\\\/x.com\\\/codytfenwick\"],\"url\":\"https:\\\/\\\/80000hours.org\\\/author\\\/cody-fenwick\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"What happened with AI in 2024? | 80,000 Hours","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/","og_locale":"en_US","og_type":"article","og_title":"What happened with AI in 2024? | 80,000 Hours","og_description":"The idea this week: despite claims of stagnation, AI research still advanced rapidly in 2024&amp;#46; Some people say AI research has plateaued. But a lot of evidence from the last year points in the opposite direction: New capabilities were developed and emerged Research indicates existing AI can accelerate science And at the same time, important findings about AI safety and risk came out (see below). AI advances might still stall. Some leaders in the field have warned that a lack of good data, for example, may impede further capability growth, though others disagree.","og_url":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/","og_site_name":"80,000 Hours","article_publisher":"https:\/\/www.facebook.com\/80000Hours","article_author":"https:\/\/www.facebook.com\/cody.fenwick","article_published_time":"2025-01-03T10:32:49+00:00","article_modified_time":"2025-10-26T11:39:58+00:00","og_image":[{"width":1456,"height":832,"url":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/robot-field.png","type":"image\/png"}],"author":"Cody Fenwick","twitter_card":"summary_large_image","twitter_creator":"@codytfenwick","twitter_site":"@80000hours","twitter_misc":{"Written by":"Cody Fenwick","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/#article","isPartOf":{"@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/"},"author":{"name":"Cody Fenwick","@id":"https:\/\/80000hours.org\/#\/schema\/person\/75ac20bab88e70f659caa92bc64fd2cc"},"headline":"What happened with AI in&nbsp;2024?","datePublished":"2025-01-03T10:32:49+00:00","dateModified":"2025-10-26T11:39:58+00:00","mainEntityOfPage":{"@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/"},"wordCount":1153,"publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"image":{"@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/robot-field.png","articleSection":["AI","Career planning","Global priorities research","Machine learning","Research","Skills"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/","url":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/","name":"What happened with AI in 2024? 
| 80,000 Hours","isPartOf":{"@id":"https:\/\/80000hours.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/#primaryimage"},"image":{"@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/robot-field.png","datePublished":"2025-01-03T10:32:49+00:00","dateModified":"2025-10-26T11:39:58+00:00","breadcrumb":{"@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/#primaryimage","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/robot-field.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/01\/robot-field.png","width":1456,"height":832},{"@type":"BreadcrumbList","@id":"https:\/\/80000hours.org\/2025\/01\/what-happened-with-ai-2024\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/80000hours.org\/"},{"@type":"ListItem","position":2,"name":"What happened with AI in&nbsp;2024?"}]},{"@type":"WebSite","@id":"https:\/\/80000hours.org\/#website","url":"https:\/\/80000hours.org\/","name":"80,000 Hours","description":"","publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/80000hours.org\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/80000hours.org\/#organization","name":"80,000 
Hours","url":"https:\/\/80000hours.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","width":1500,"height":785,"caption":"80,000 Hours"},"image":{"@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/80000Hours","https:\/\/x.com\/80000hours","https:\/\/www.youtube.com\/user\/eightythousandhours"]},{"@type":"Person","@id":"https:\/\/80000hours.org\/#\/schema\/person\/75ac20bab88e70f659caa92bc64fd2cc","name":"Cody Fenwick","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/b15399901921a0324ca7f860d98f72339b991688812084f26d8dd50d5ec79aa5?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/b15399901921a0324ca7f860d98f72339b991688812084f26d8dd50d5ec79aa5?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/b15399901921a0324ca7f860d98f72339b991688812084f26d8dd50d5ec79aa5?s=96&d=mm&r=g","caption":"Cody 
Fenwick"},"sameAs":["https:\/\/www.facebook.com\/cody.fenwick","https:\/\/www.linkedin.com\/in\/cody-fenwick-8073089b\/","https:\/\/x.com\/codytfenwick"],"url":"https:\/\/80000hours.org\/author\/cody-fenwick\/"}]}},"_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts\/88504","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/435"}],"replies":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/comments?post=88504"}],"version-history":[{"count":0,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/posts\/88504\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media\/88509"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=88504"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=88504"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}