{"id":92247,"date":"2025-09-17T17:17:41","date_gmt":"2025-09-17T17:17:41","guid":{"rendered":"https:\/\/80000hours.org\/?post_type=problem_profile&#038;p=92247"},"modified":"2026-04-21T13:03:08","modified_gmt":"2026-04-21T13:03:08","slug":"ai-enhanced-decision-making","status":"publish","type":"problem_profile","link":"https:\/\/80000hours.org\/problem-profiles\/ai-enhanced-decision-making\/","title":{"rendered":"Using AI to enhance societal decision&nbsp;making"},"content":{"rendered":"<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#why-advancing-ai-decision-making-tools-might-matter-a-lot\"><span class=\"toc_number toc_depth_1\">1<\/span> Why advancing AI decision-making tools might matter a lot<\/a><ul><li><a href=\"#ai-tools-could-help-us-make-much-better-decisions\"><span class=\"toc_number toc_depth_2\">1.1<\/span> AI tools could help us make much better decisions<\/a><\/li><li><a href=\"#we-might-be-able-to-differentially-speed-up-the-rollout-of-ai-decision-making-tools\"><span class=\"toc_number toc_depth_2\">1.2<\/span> We might be able to differentially speed up the rollout of AI decision-making tools<\/a><\/li><\/ul><\/li><li><a href=\"#objections-and-responses\"><span class=\"toc_number toc_depth_1\">2<\/span> What are the arguments against working to advance AI decision-making tools?<\/a><ul><li><a href=\"#so-should-you-work-on-this\"><span class=\"toc_number toc_depth_2\">2.1<\/span> So should you work on this?<\/a><\/li><\/ul><\/li><li><a href=\"#how-to-work-in-this-area\"><span class=\"toc_number toc_depth_1\">3<\/span> How to work in this area<\/a><ul><li><a href=\"#help-build-ai-decision-making-tools\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Help build AI decision-making tools<\/a><\/li><li><a href=\"#complementary-work\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Complementary work<\/a><\/li><li><a 
href=\"#position-yourself-to-help-in-future\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Position yourself to help in future<\/a><\/li><li><a href=\"#what-opportunities-are-there\"><span class=\"toc_number toc_depth_2\">3.4<\/span> What opportunities are there?<\/a><\/li><\/ul><\/li><li><a href=\"#learn-more\"><span class=\"toc_number toc_depth_1\">4<\/span> Learn more<\/a><\/li><li><a href=\"#acknowledgements\"><span class=\"toc_number toc_depth_1\">5<\/span> Acknowledgements<\/a><\/li><\/ul><\/div>\n<h2><span id=\"why-advancing-ai-decision-making-tools-might-matter-a-lot\" class=\"toc-anchor\"><\/span>Why advancing AI decision-making tools might matter a lot<\/h2>\n<p>Humans often make big mistakes.<\/p>\n<p>Our institutions ignored climate scientists for decades, responded ineffectively to early COVID-19 warnings, and have rushed into countless wars that all parties later regretted. It&#8217;s striking how far our actual decisions sometimes fall short of what, in hindsight, looks obviously necessary.<\/p>\n<p>Why does this keep happening? Sometimes we misunderstand the facts or fail to predict challenges ahead of us. Other times, we know there&#8217;s a problem, but we don&#8217;t take sufficient action or coordinate on a response.<\/p>\n<p>We&#8217;re now rapidly developing advanced AI systems that could transform every aspect of society, making good decision making even more critical. 
Soon, we could be dealing with:<\/p>\n<ul>\n<li>A whole new population of extremely capable agents, potentially with different goals and interests to humans <\/li>\n<li>A totally reshaped labour market where AI systems, rather than humans, drive much or all economic progress<\/li>\n<li>AIs developing new advanced technologies \u2014 <a href=\"https:\/\/80000hours.org\/problem-profiles\/catastrophic-ai-misuse\/\">including weapons<\/a> \u2014 faster than we can study their risks <\/li>\n<li>Societal and geopolitical tensions over who controls or receives the benefits of advanced AI, possibly escalating into conflict<\/li>\n<\/ul>\n<p>Advanced AI systems could also produce ideas and economic outputs much faster than humans, potentially <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/will-macaskill-century-in-a-decade-navigating-intelligence-explosion\/\">compressing a century&#8217;s worth of progress into a decade<\/a> \u2014 which means decisions that once played out over years might need to be made in a matter of months.<\/p>\n<p>So the chance of missteps is high. And as we&#8217;ve argued elsewhere, <a href=\"https:\/\/80000hours.org\/2025\/04\/work-on-ai-risks\/\">the stakes could be existential<\/a>.<\/p>\n<p>If we want to navigate this period well, we&#8217;ll need to think more clearly, act more wisely, and coordinate more effectively than before. And that&#8217;s a tall order.<\/p>\n<h3><span id=\"ai-tools-could-help-us-make-much-better-decisions\" class=\"toc-anchor\"><\/span>AI tools could help us make much better decisions<\/h3>\n<p>The development of advanced AI could both make decision making more challenging and raise the stakes of humanity&#8217;s future decisions. 
But AI is not a monolith, and \u2014 perhaps counterintuitively \u2014 we think certain AI tools could actually be part of the solution.<\/p>\n<p>AI systems are capable of things humans simply aren&#8217;t \u2014 they can absorb far more information, process it at vastly higher speeds, and improve their performance by practising the same task millions of times.<\/p>\n<p>They&#8217;ve already beaten the best humans at strategy games like Go, and they&#8217;re now also performing impressively on complex reasoning and problem-solving tasks. And if you&#8217;ve ever used &#8220;deep research&#8221; tools from AI companies <a href=\"https:\/\/openai.com\/index\/introducing-deep-research\/\">like OpenAI<\/a> and <a href=\"https:\/\/gemini.google\/overview\/deep-research\/\">Google DeepMind<\/a>, you know current models can distil huge amounts of information into coherent conclusions much faster than even the greatest human minds.<\/p>\n<p>Given this, we think we&#8217;re within reach of having AI tools that can seriously improve human decision making \u2014 some may even be buildable with today&#8217;s technology.<\/p>\n<p>Two kinds seem especially promising:<\/p>\n<ul>\n<li><strong>Epistemic tools<\/strong>, which help us understand what&#8217;s true and what&#8217;s likely to happen. For example:\n<ul>\n<li>AI fact checkers may be more reliable and impartial evaluators of information than humans are. Society currently struggles to converge on matters of fact \u2014 consider how often political disagreements come down to a dispute over facts, or how easily misinformation spreads online. We&#8217;ll need to get a lot better at this if we have to navigate <a href=\"https:\/\/www.forethought.org\/research\/preparing-for-the-intelligence-explosion#epistemic-disruption\">epistemic disruption<\/a> from advanced AI. 
<\/li>\n<li>AI forecasting systems could help institutions make better predictions about world events and model the effects of different policies.<\/li>\n<li>More speculatively, AI tools for moral progress could help us reason through complex ethical questions and potentially come to more agreement as a society.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Coordination tools<\/strong>, which help groups work together and make better collective decisions, even if they have competing interests. For example:\n<ul>\n<li>AI negotiation tools could find mutually beneficial agreements that might otherwise be missed \u2014 perhaps by rapidly simulating thousands of hours of negotiation and testing out a vast number of agreements before making a proposal. <\/li>\n<li>AI-enabled verification systems could reliably and impartially monitor compliance with agreements, overcoming the trust barriers that often prevent groups from cooperating.<\/li>\n<li><strong><a href=\"https:\/\/aiprospects.substack.com\/p\/security-without-dystopia-new-options\">Structured transparency tools<\/a><\/strong> could enable tightly controlled information sharing, allowing parties to detect specific threats from each other \u2014 like whether someone is building dangerous weapons \u2014 without the broader privacy costs of ordinary surveillance. <\/li>\n<li>There&#8217;s ongoing research in <a href=\"https:\/\/arxiv.org\/pdf\/2012.08630\">the field of &#8220;Cooperative AI&#8221;<\/a> exploring more ways to use AI for improved coordination. <\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>We think these applications target some of the most common failures of human decision making. We often get led astray by false information, incorrectly predict how things will unfold, or fail to prevent outcomes no one wanted because we can&#8217;t cooperate.<\/p>\n<p>Another virtue of these applications is that they seem to be more useful for enabling good outcomes than bad ones, overall. 
As a general rule of thumb, it seems empowering people to better understand the world and coordinate with each other is <em>usually<\/em> good for humanity \u2014 at least under the assumption that people are usually well intentioned.<\/p>\n<p>Of course, this assumption doesn&#8217;t always hold. We do think there&#8217;s <em>some<\/em> risk of people deliberately using even these AI tools to cause harm \u2014 a possibility we address <a href=\"#objections-and-responses\">later on<\/a>.<\/p>\n<h3><span id=\"we-might-be-able-to-differentially-speed-up-the-rollout-of-ai-decision-making-tools\" class=\"toc-anchor\"><\/span>We might be able to differentially speed up the rollout of AI decision-making tools<\/h3>\n<p>Right now, only a handful of projects are building the kinds of AI tools we described above \u2014 a drop in the ocean compared to the billions invested in developing more broadly capable AI agents.<\/p>\n<p>Plus, there&#8217;s often a lag between society having the <em>ability<\/em> to build a product and it <em>actually<\/em> being built and successfully rolled out. Consider COVID-19 vaccines: although the underlying mRNA technology for these vaccines <a href=\"https:\/\/www.mayoclinic.org\/diseases-conditions\/history-disease-outbreaks-vaccine-timeline\/covid-19\">was proven in the mid-2000s<\/a>, they didn&#8217;t actually arrive until late 2020 \u2014 almost a year into the pandemic.<\/p>\n<p>This points to an opportunity: we might be able to accelerate the development and adoption of AI decision-making tools, which would mean <em>getting their benefits faster<\/em>.  
And even a small speedup could be consequential \u2014 for example, getting sophisticated verification tools just a few months earlier could mean critical safety commitments get nailed down <em>before<\/em> we develop dangerous AI systems, instead of arriving too late to make a difference.<\/p>\n<p>What we&#8217;re pointing to here is one form of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Differential_technological_development\">&#8220;differential technology development&#8221;<\/a>: influencing the order in which different technologies emerge in order to make the world safer. In this case, the idea is to speed up the development of certain safety-promoting AI capabilities so they&#8217;re available <em>before<\/em> we have to contend with other, riskier AI capabilities.<\/p>\n<p>Because we&#8217;ve seen so few projects in this direction so far, there&#8217;s still lots of low-hanging fruit to pick. <a href=\"#how-to-work-in-this-area\">Later on<\/a>, we describe some work we think could be useful.<\/p>\n<h2><span id=\"objections-and-responses\" class=\"toc-anchor\"><\/span>What are the arguments against working to advance AI decision-making tools?<\/h2>\n<p>Having said all this, there are some objections we think people should really consider when deciding whether to work in this area.<\/p>\n<div class=\"panel-group\" id=\"custom-collapse-0\">\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-0\">&quot;These technologies might be developed by default anyway.&quot;<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-0\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"&quot;These technologies might be developed by default anyway.&quot;\">\n<div class=\"panel-body\">\n<p>Huge AI companies are racing to develop models that excel at all kinds of complex reasoning \u2014 and they&#8217;re <a 
href=\"https:\/\/80000hours.org\/agi\/guide\/when-will-agi-arrive\/\">making rapid progress<\/a>. Meanwhile, there are growing market incentives to build AI products for specific, commercially valuable tasks, which might include some of the applications above.<\/p>\n<p>So AI decision-making tools might get developed anyway by people trying to make money \u2014 meaning it might not be a good use of time for people who want to do good with their careers. Why not just wait for this to happen, and do something else with your time?<\/p>\n<p>This does seem right to some extent. As a general rule, focusing on something that&#8217;s already commercially incentivised will probably reduce the <a href=\"https:\/\/80000hours.org\/articles\/counterfactuals\/\">counterfactual impact<\/a> of your work.<\/p>\n<p>But we think there are ways you could still make a meaningful difference here \u2014 especially if you focus on gaps in the market when deciding what project to pursue.<\/p>\n<p>First, <strong>your work could still help society achieve the benefits of these tools sooner than they would otherwise have arrived<\/strong>.<\/p>\n<p>You might speed things up directly (for example, by successfully building a specific tool before anyone else gets there). And if your work does get overtaken by another project, it could still have compounding effects that speed up the arrival of future tools. For instance, if it attracts more investment or builds relevant knowledge, your project could enable others to achieve a certain milestone faster \u2014 which could in turn bring forward the <em>next<\/em> milestone, and so on.<\/p>\n<p>And as we&#8217;ve said, even a small speedup could make a big difference here.<\/p>\n<p>Importantly, although we think frontier models will eventually excel at tasks in epistemics and coordination, simply waiting for good decision-making tools to get rolled out could mean getting them once AGI has already arrived. 
By then, it might be too late to use them to avoid a catastrophe.<\/p>\n<p>Second, <strong>you might be able to focus on products that are less incentivised by the market<\/strong>.<\/p>\n<p>For example, while advanced AI forecasting tools might get built by default for profitable uses like financial trading, there&#8217;s much less commercial pressure to develop AI systems that are good at predicting other things, or to create sophisticated tools for reasoning about ethics. <\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-1\">&quot;Wouldn't this make dangerous AI capabilities arrive faster, when we should be slowing things down?&quot;<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-1\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"&quot;Wouldn't this make dangerous AI capabilities arrive faster, when we should be slowing things down?&quot;\">\n<div class=\"panel-body\">\n<p>By accelerating progress on these tools, you might also increase knowledge, hype, and investment into AI R&#038;D more broadly. This could bring about AGI sooner, giving us less time to prepare.<\/p>\n<p>Your work could also enhance certain dangerous capabilities. For example, we think AI systems that excel at planning <a href=\"https:\/\/80000hours.org\/problem-profiles\/risks-from-power-seeking-ai\/\">pose risks of disempowering humans<\/a> \u2014 and developing systems that are great at forecasting might dangerously boost AI planning capabilities.<\/p>\n<p>We&#8217;ve <a href=\"https:\/\/80000hours.org\/articles\/ai-capabilities\/\">explored these concerns elsewhere<\/a>, and there&#8217;s a lot to say on the subject. 
But in this context, it&#8217;s worth bearing in mind:<\/p>\n<ul>\n<li>Although projects in this area might contribute to AI hype to some degree, these effects will probably be very insignificant compared to the billions of dollars <em>already<\/em> being invested into building AGI. By contrast, you could have an outsized impact on humanity&#8217;s ability to make wise decisions. <\/li>\n<li>You might be able to (and should probably try to) target lower-risk applications that don&#8217;t directly feed the development of dangerous capabilities. For example, AI fact-checking tools seem much safer to build than tools leveraging strategic planning or persuasion.<\/li>\n<li>If these tools seriously improve our ability to navigate the world&#8217;s biggest challenges, some speedup in the arrival of dangerous AI capabilities could still be worth it overall. <\/li>\n<\/ul>\n<p>There is also a role to play for interventions that <em>slow down<\/em> progress on dangerous technologies \u2014 whether that&#8217;s through regulations that allow companies to take their time on safety without bearing the costs of unilateral slowdowns, or perhaps even campaigning to pause frontier AI development altogether. But <em>speeding up<\/em> progress on safety-promoting technologies can happen at the same time. It might also be easier: while slowing down requires agreement from officials or companies, you can just decide to develop a new tool without consensus. 
And you&#8217;ll likely face less pushback, since your strategy won&#8217;t mean forgoing or delaying the benefits of future AI (or threatening powerful companies&#8217; bottom lines).<\/p>\n<\/div><\/div><\/div>\n<div class=\"panel panel-default panel-collapse\">\n<div class=\"panel-heading\">\n<h4 class=\"panel-title\"><a class=\"no-visited-styling collapsed\" data-toggle=\"collapse\" data-target=\"#-2\">&quot;People might use these tools in dangerous ways.&quot;<\/a><\/h4>\n<\/p><\/div>\n<div id=\"-2\" class=\"panel-body-collapse collapse\" data-80k-event-label=\"&quot;People might use these tools in dangerous ways.&quot;\">\n<div class=\"panel-body\">\n<p>Like many technologies, AI tools for epistemics and coordination could be used to cause harm.<\/p>\n<p>After all, getting better at understanding the world and coordinating with others typically makes you <em>better at achieving your goals<\/em>. And since people sometimes have goals that are harmful to others, these tools will sometimes help people <em>do bad things more effectively<\/em>.<\/p>\n<p>For example, groups with access to tools that enhance their negotiation or forecasting abilities could use them to illegitimately gain strategic advantages over those who don&#8217;t have such tools. In extreme cases, this could potentially even enable a dangerous <a href=\"https:\/\/80000hours.org\/problem-profiles\/ai-enabled-power-grabs\/\">power grab<\/a>.<\/p>\n<p>We&#8217;d guess that actors with genuinely malicious intentions are just not that common. 
Broadly speaking, it seems most harmful decisions don&#8217;t happen because people really <em>want<\/em> to cause harm, but because we misunderstand a situation, don&#8217;t realise the consequences our actions could have, or fail to find a solution that&#8217;s less costly for everyone involved \u2014 failures that AI decision-making tools could help us overcome.<\/p>\n<p>And as we said earlier: a general, commonsense rule of thumb here is that empowering humans to understand the world and coordinate better seems to <em>usually<\/em> be a good thing for humanity.<\/p>\n<p>So our guess is that overall, AI decision-making tools will help us <em>prevent<\/em> bad outcomes more often than they&#8217;ll enable them. This is one of the key reasons we&#8217;re broadly enthusiastic about these tools.<\/p>\n<p>But this is a generalisation, and won&#8217;t hold true for <em>every<\/em> AI decision-making tool you could create. So if you&#8217;re deciding whether to build or promote a new tool, you <strong>should<\/strong> factor in its specific misuse risks \u2014 and whether it might actually favour harmful uses over beneficial ones. And these are difficult questions, so <a href=\"https:\/\/80000hours.org\/articles\/accidental-harm\/#3-have-a-degree-of-humility\">you should get help<\/a> when trying to answer them.<\/p>\n<p>The most extreme risks here \u2014 like the chance of enabling a power grab \u2014 also highlight the importance of getting AI decision-making tools into enough hands. By default, the most powerful actors will have access to better technologies than everyone else. 
But if we can make decision-making tools widely accessible and equip key institutions to use them, we could prevent any single group from gaining dangerous advantages over others.<\/p>\n<p>Making this happen may take dedicated effort \u2014 so if you decide to work on this, we encourage you to make broad access a priority.<\/p>\n<\/div><\/div><\/div>\n<\/div>\n<h3><span id=\"so-should-you-work-on-this\" class=\"toc-anchor\"><\/span>So should you work on this?<\/h3>\n<p>Bottom line: it&#8217;s complicated, but if you&#8217;re a good fit, working on this could have a lot of upside. We&#8217;d encourage anyone who&#8217;s interested to investigate whether it might be right for them \u2014 for example, apply to <a href=\"\/speak-with-us\/\">speak with one of our advisors<\/a>.<\/p>\n<p>For the reasons above, it does seem that some work in this space will end up having very little impact \u2014 and some could even have <em>negative<\/em> effects.<\/p>\n<p>You&#8217;re more likely to avoid the pitfalls if you can prioritise AI decision-making projects that are:<\/p>\n<ul>\n<li>Underincentivised by the market<\/li>\n<li>Less likely to drive the development of other, dangerous AI capabilities<\/li>\n<li>More useful for beneficial purposes than for harmful ones, or more robust to misuse<\/li>\n<\/ul>\n<p>But deciding <em>what projects<\/em> to pursue on this basis is much easier said than done. And because there aren&#8217;t many concrete job opportunities here, working in this area may also require a more entrepreneurial approach than you&#8217;d need for tackling many other pressing problems.<\/p>\n<p>So overall, we don&#8217;t think we can recommend this work as <em>widely<\/em> as we recommend working in more mature areas where the paths to impact are better tested and more clearly mapped out.<\/p>\n<p>Still, we think efforts to advance AI decision-making tools could be very impactful <em>for the right person<\/em>. 
If you&#8217;re especially good at navigating ambiguity, have an entrepreneurial mindset, and have strong judgement about what projects to prioritise, this could be a great fit. At this stage, we&#8217;d be excited to see perhaps <strong>a few hundred more people<\/strong> working in this area.<\/p>\n<p>If you&#8217;re interested in being one of those people, we recommend <a href=\"https:\/\/80000hours.org\/community\/\">building a network in AI safety<\/a> and finding people who can <a href=\"https:\/\/80000hours.org\/articles\/accidental-harm\/#3-have-a-degree-of-humility\">help you think through specific project ideas<\/a> first.<\/p>\n<p>It&#8217;s also worth noting that some researchers \u2014 like the authors of <a href=\"https:\/\/www.forethought.org\/research\/ai-tools-for-existential-security\">this article from Forethought<\/a> \u2014 feel more optimistic than we do about having many more people working in this area. So it&#8217;s possible we&#8217;re underrating it!<\/p>\n<p>In any case, we also recommend <a href=\"https:\/\/80000hours.org\/agi\/#stay-up-to-date-section\">keeping up to date<\/a> with the evolving landscape of AI challenges, and being ready to pivot if other needs become more pressing.<\/p>\n<h2><span id=\"how-to-work-in-this-area\" class=\"toc-anchor\"><\/span>How to work in this area<\/h2>\n<p>Here are the top recommendations we&#8217;ve seen for people who want to speed up the development and adoption of AI decision-making tools.<\/p>\n<h3><span id=\"help-build-ai-decision-making-tools\" class=\"toc-anchor\"><\/span>Help build AI decision-making tools<\/h3>\n<p>The most direct thing you can do is work somewhere that&#8217;s building the tools themselves. You can find some relevant organisations and research projects that are hiring on our job board below. 
But since the field is currently small, you might consider <a href=\"https:\/\/80000hours.org\/career-reviews\/founder-impactful-organisations\/\">founding your own project<\/a> instead.<\/p>\n<p>Either way, there&#8217;s lots to do here \u2014 not just the core engineering work, but also making demos, getting stakeholders on board, designing user interfaces that are appealing to decision makers, doing market research to tailor products to user needs, and ensuring projects operate efficiently.<\/p>\n<p>This means <strong>you don&#8217;t need to be a technical expert<\/strong> to join or found projects of this kind: they also need great operations staff, product managers, and more.<\/p>\n<aside class=\"well well-person pull-right clearfix align-center  padding-top-small padding-bottom-small\">\n<p class=\"no-margin-bottom\"><img decoding=\"async\" class=\"img-circle well-person__portrait\" height=300 width=300 src=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/11\/Image-26-11-2025-at-11.49.jpeg\" alt=\"Jungwon Byun portrait\"><\/p>\n<h4 class=\"no-margin-top\">Jungwon Byun<\/h4>\n<p>Jungwon spent years working in financial services before she became interested in the transformative potential \u2014 and risks \u2014 of advanced AI. She realised her highest impact path might lie in using AI to enhance human knowledge and decision making, and in 2019, co-founded <a href=\"https:\/\/ought.org\/\">Ought<\/a>: a research lab dedicated to scaling up good reasoning. In the same year, she came across the 80,000 Hours website and <a href=\"https:\/\/80000hours.org\/podcast\/\">podcast<\/a>, which helped her quickly ramp up her understanding of AI safety and existential risk.<\/p>\n<p>Jungwon is now the COO of <a href=\"https:\/\/elicit.com\/\">Elicit<\/a>, an AI research tool launched by Ought that later spun out as a public benefit corporation. 
She has helped millions of researchers understand the world and reason more effectively.<\/p>\n<\/aside>\n<h3><span id=\"complementary-work\" class=\"toc-anchor\"><\/span>Complementary work<\/h3>\n<p>There are other ways you can support these efforts without getting directly involved in building the tools. For example, you could:<\/p>\n<ul>\n<li><strong>Measure and steer these beneficial capabilities<\/strong>:\n<ul>\n<li>Design benchmarks or evaluations for the AI capabilities that would most help decision making. <\/li>\n<\/ul>\n<\/li>\n<li><strong>Work on supporting tech and infrastructure<\/strong>:\n<ul>\n<li>Develop complementary technologies that help remove barriers to adoption \u2014 for example, by addressing users&#8217; privacy or security concerns. <\/li>\n<li>Curate and manage data sets that can be used to train specialised AI decision-making tools \u2014 for example, data about past mistakes in forecasting or negotiation, or high-quality research notes from fields where specialised decision-making tools could be very helpful. 
<\/li>\n<li>Create infrastructure \u2014 like online databases or directories \u2014 to help people share resources and collaborate on projects.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Help with implementation<\/strong>:\n<ul>\n<li>Help integrate these tools into existing decision-making processes at key institutions, including educating stakeholders on how to use them.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3><span id=\"position-yourself-to-help-in-future\" class=\"toc-anchor\"><\/span>Position yourself to help in future<\/h3>\n<p>If you&#8217;re not currently able to work on any of this \u2014 or just don&#8217;t feel it&#8217;s the best option right now \u2014 you can still position yourself to help in future by:<\/p>\n<ul>\n<li>Working at or founding any (non-harmful) company, especially a technology company, so that you can learn and practice the skills of founding projects \u2014 or see the <a href=\"https:\/\/80000hours.org\/career-reviews\/founder-impactful-organisations\/#next-steps-with-idea\">career capital steps<\/a> listed in our founder profile.<\/li>\n<li>Developing expertise in fields where these AI tools could be especially impactful, like <a href=\"https:\/\/80000hours.org\/career-reviews\/forecasting\/\">forecasting<\/a> or <a href=\"https:\/\/80000hours.org\/career-reviews\/diplomacy\/\">diplomacy<\/a>. 
<\/li>\n<li>Joining key institutions (like government agencies or international bodies) that might benefit a lot from AI decision-making tools \u2014 and staying current with the technologies while you&#8217;re there, so you can help integrate them later on.<\/li>\n<\/ul>\n<h3><span id=\"what-opportunities-are-there\" class=\"toc-anchor\"><\/span>What opportunities are there?<\/h3>\n<p>The field is currently small, but our job board features some relevant opportunities \u2014 including open positions, funding, and fellowships.<\/p>\n<p><script>\n    function getLocationString(arr) {\n      if (arr.length <= 3) { \n        return arr.join(\"<br \/>\");\n      }\n      return arr.slice(0, 3).join(\"<br \/>\") + \"...\";\n    }\n  <\/script><script>\n    function getUniqueCompanyJobs(jobs, limit) {\n      const uniqueCompanies = new Set();\n      const uniqueJobs = [];\n      const additionalJobs = [];\n      for (const job of jobs) {\n          const company = job.company_name;\n          if (!uniqueCompanies.has(company)) {\n              uniqueCompanies.add(company);\n              uniqueJobs.push(job);\n          } else {\n              additionalJobs.push(job);\n          }\n      }\n      return uniqueJobs.concat(additionalJobs).slice(0, limit);\n    }\n  <\/script><script>\n    window.addEventListener(\"load\", function() {\n        const container = document.querySelector(\"#vacancies-1\");\n        if (container) {\n          const searchClient = algoliasearch(\"W6KM1UDIB3\", \"d1d7f2c8696e7b36837d5ed337c4a319\");\n          searchClient.initIndex(\"jobs_prod\"); \n          const search = instantsearch({\n            indexName: \"jobs_prod\",\n            searchClient,\n          });\n          search.addWidget(\n            instantsearch.widgets.configure({\n              facetFilters: [[\"company_data:FutureSearch\",\"company_data:Elicit\",\"company_data:Forecasting Research Institute\",\"company_data:Pactum 
AI\",\"company_data:CaseMark\",\"company_data:TheMediator.AI\",\"company_data:AI & Democracy Foundation\",\"company_data:Cooperative AI Foundation\",\"company_data:Safe AI Fund\",\"company_data:Future of Life Foundation\",\"company_data:Fifty Years\"]],\n              hitsPerPage: 10,\n            })\n          );\n          search.addWidget({\n            render(options) {\n              const results = getUniqueCompanyJobs(options.results.hits, 5);\n              results.forEach(item => {\n                item.post_pk = DOMPurify.sanitize(item.post_pk);\n                item.company.logo_url = DOMPurify.sanitize(item.company.logo_url);\n                item.title = DOMPurify.sanitize(item.title);\n                item.company.name = DOMPurify.sanitize(item.company.name);\n                item.card_locations = DOMPurify.sanitize(getLocationString(item.card_locations));\n                item.posted_at_relative = DOMPurify.sanitize(item.posted_at_relative);\n              });\n              container.innerHTML = results.map(item => {\n                return `<\/p>\n<li class=\"vacancy border\">\n                    <a href=\"https:\/\/jobs.80000hours.org\/?jobPk=${item.post_pk}\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"vacancy-summary pt-2 pb-2\"><\/p>\n<div class=\"col-12\">\n<div class=\"row\" style=\"position: relative;\">\n<div class=\"col-sm-8\" style=\"overflow: hidden;\">\n<div class=\"vacancy__org-logo\">\n                              <img decoding=\"async\" src=\"${item.company.logo_url}\">\n                            <\/div>\n<div class=\"vacancy__job-title-and-org-name\">\n<h5 class=\"vacancy__job-title tw--line-clamp-2\">${item.title}<\/h5>\n<p class=\"vacancy__org-name tw--line-clamp-2\">${item.company.name}<\/p><\/div><\/div>\n<div class=\"col-sm-4 text-right hidden-xs vacancy__location-and-date-listed\">\n<p class=\"pr-1\">${item.card_locations}<br \/>${item.posted_at_relative}<\/p><\/div><\/div><\/div>\n<p>                    <\/a>\n  
                <\/li>`;\n              }).join(\"\");\n            }\n          });\n          search.start();\n        }\n      });\n    <\/script><\/p>\n<ul id=\"vacancies-1\" class=\"!tw--p-0 no-visited-styling disable-url-preview-on-hover-for-descendants\"><\/ul>\n<p><a href=\"https:\/\/jobs.80000hours.org\/?refinementList%5Bcompany_data%5D%5B0%5D=FutureSearch&refinementList%5Bcompany_data%5D%5B1%5D=Elicit&refinementList%5Bcompany_data%5D%5B2%5D=Forecasting%20Research%20Institute&refinementList%5Bcompany_data%5D%5B3%5D=Pactum%20AI&refinementList%5Bcompany_data%5D%5B4%5D=CaseMark&refinementList%5Bcompany_data%5D%5B5%5D=TheMediator.AI&refinementList%5Bcompany_data%5D%5B6%5D=AI%20%26%20Democracy%20Foundation&refinementList%5Bcompany_data%5D%5B7%5D=Cooperative%20AI%20Foundation&refinementList%5Bcompany_data%5D%5B8%5D=Safe%20AI%20Fund&refinementList%5Bcompany_data%5D%5B9%5D=Future%20of%20Life%20Foundation&refinementList%5Bcompany_data%5D%5B10%5D=Fifty%20Years\" class=\"btn btn-primary\" target=\"_blank\" rel=\"noopener noreferrer\">View all opportunities<\/a><\/p>\n<div class=\"well bg-gray-lighter margin-bottom margin-top padding-top-small padding-bottom-small\">\n<h4>Want one-on-one advice on pursuing this path?<\/h4>\n<p>If you think this path might be a great option for you, our team might be able to advise you on your next steps.<\/p>\n<p>We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.<\/p>\n<p><a href=\"https:\/\/80000hours.org\/advising\/\" class=\"btn btn-primary\">Apply for advising<\/a><\/p>\n<\/div>\n<h2><span id=\"learn-more\" class=\"toc-anchor\"><\/span>Learn more<\/h2>\n<ul>\n<li><a href=\"https:\/\/www.forethought.org\/research\/ai-tools-for-existential-security\">AI tools for existential security<\/a> by Forethought, and its <a 
href=\"https:\/\/www.forethought.org\/research\/appendices-to-ai-tools-for-existential-security#appendix-1-on-whether-accelerating-applications-could-be-bad-via-speeding-up-ai-progress-in-general\">appendices<\/a> <\/li>\n<li><a href=\"https:\/\/www.forethought.org\/research\/design-sketches-for-a-more-sensible-world\"><em>Design sketches for a more sensible world<\/em><\/a>, an article series by Forethought<\/li>\n<li><a href=\"https:\/\/lukasfinnveden.substack.com\/p\/whats-important-in-ai-for-epistemics\">What&#8217;s important in &#8220;AI for epistemics&#8221;?<\/a> by Lukas Finnveden <\/li>\n<li><a href=\"https:\/\/benjamintodd.substack.com\/p\/the-most-interesting-startup-idea\">The most interesting startup idea I&#8217;ve seen recently<\/a> by Ben Todd<\/li>\n<li><a href=\"https:\/\/aiprospects.substack.com\/p\/security-without-dystopia-new-options\">Security without dystopia: structured transparency<\/a> by Eric Drexler<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/2012.08630\">Open problems in cooperative AI<\/a> by Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R. McKee, Joel Z. Leibo, Kate Larson, and Thore Graepel<\/li>\n<\/ul>\n<p>For specific project ideas, we recommend Lukas Finnveden&#8217;s <a href=\"https:\/\/lukasfinnveden.substack.com\/p\/project-ideas-epistemics\">list of ideas for AI &#8220;epistemics&#8221; projects<\/a> and the options highlighted in Forethought&#8217;s series of &#8220;design sketches&#8221; (linked above). We also think the examples listed on the <a href=\"https:\/\/www.flf.org\/fellowship#:~:text=But%20seriously%20%E2%80%94%20what%20will%20people%20be%20working%20on%3F\">Future of Life Foundation fellowship page<\/a> are great, although applications to this fellowship are now closed.<\/p>\n<p>You can also read our related article on <a href=\"https:\/\/80000hours.org\/problem-profiles\/improving-institutional-decision-making\/\">improving decision making in key institutions<\/a>. 
It was first published in 2017, before the rise of generative AI \u2014 so it doesn&#8217;t explore AI tools in particular, but does make the broader case for improving societal decision making.<\/p>\n<p>Finally, if you want to understand why we think wisely navigating the transition to a world with advanced AI could be so critical for humanity, check out our full argument for this <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/?v=1\">here<\/a>.<\/p>\n<h2><span id=\"acknowledgements\" class=\"toc-anchor\"><\/span>Acknowledgements<\/h2>\n<p><em>This profile draws extensively from Forethought&#8217;s article <a href=\"https:\/\/www.forethought.org\/research\/ai-tools-for-existential-security\">&#8220;AI Tools for Existential Security&#8221;<\/a>.<\/em><\/p>\n<p><em>Many thanks to Arden Koehler, Lizka Vaintrob, Niel Bowerman, Max Dalton, and Rose Hadshar for input.<\/em><\/p>\n","protected":false},"author":471,"featured_media":90965,"parent":0,"menu_order":0,"template":"","meta":{"_acf_changed":false,"footnotes":""},"categories":[1353,1443,1386,1315,1208,1425,1385,1362],"class_list":["post-92247","problem_profile","type-problem_profile","status-publish","has-post-thumbnail","hentry","category-ai","category-ai-enhanced-decision-making","category-career-paths","category-career-planning","category-exploration","category-global-priorities-research-career-paths","category-research","category-skills-skill-building-and-career-capital-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Using AI to enhance societal decision making | 80,000 Hours<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/80000hours.org\/problem-profiles\/ai-enhanced-decision-making\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta 
property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Using AI to enhance societal decision making | 80,000 Hours\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/problem-profiles\/ai-enhanced-decision-making\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-21T13:03:08+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2025\/06\/The_School_of_Athens__by_Raffaello_Sanzio_da_Urbino-1-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1593\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"18 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/\"},\"author\":{\"name\":\"Zershaaneh Qureshi\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/46f90c575aa95e68cce975168cf4f0f5\"},\"headline\":\"Using AI to enhance societal decision&nbsp;making\",\"datePublished\":\"2025-09-17T17:17:41+00:00\",\"dateModified\":\"2026-04-21T13:03:08+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/\"},\"wordCount\":3683,\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/The_School_of_Athens__by_Raffaello_Sanzio_da_Urbino-1-scaled.jpg\",\"articleSection\":[\"AI\",\"AI-enhanced decision making\",\"Career paths\",\"Career planning\",\"Exploration\",\"Global priorities research\",\"Research\",\"Skills\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/\",\"name\":\"Using AI to enhance societal decision making | 80,000 
Hours\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/The_School_of_Athens__by_Raffaello_Sanzio_da_Urbino-1-scaled.jpg\",\"datePublished\":\"2025-09-17T17:17:41+00:00\",\"dateModified\":\"2026-04-21T13:03:08+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/#primaryimage\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/The_School_of_Athens__by_Raffaello_Sanzio_da_Urbino-1-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/The_School_of_Athens__by_Raffaello_Sanzio_da_Urbino-1-scaled.jpg\",\"width\":2560,\"height\":1593,\"caption\":\"\\\"The School of Athens\\\" by Renaissance artist Raffaello Sanzio da Urbino.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/problem-profiles\\\/ai-enhanced-decision-making\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/80000hours.org\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Using AI to enhance societal decision&nbsp;making\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"name\":\"80,000 
Hours\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/80000hours.org\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\",\"name\":\"80,000 Hours\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"width\":1500,\"height\":785,\"caption\":\"80,000 Hours\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/80000Hours\",\"https:\\\/\\\/x.com\\\/80000hours\",\"https:\\\/\\\/www.youtube.com\\\/user\\\/eightythousandhours\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/46f90c575aa95e68cce975168cf4f0f5\",\"name\":\"Zershaaneh Qureshi\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6cd1e78a791c368b871b2dc5101c12663bd2f74a621ec3ddaef2dfc9f5078612?s=96&d=mm&r=g\",\"caption\":\"Zershaaneh 
Qureshi\"},\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/zershaaneh-qureshi-8744131b4\\\/\"],\"url\":\"https:\\\/\\\/80000hours.org\\\/author\\\/zershaaneh-qureshi\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/92247","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/problem_profile"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/471"}],"version-history":[{"count":6,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/92247\/revisions"}],"predecessor-version":[{"id":96119,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/problem_profile\/92247\/revisions\/96119"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media\/90965"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=92247"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=92247"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}