{"id":74634,"date":"2022-02-10T16:35:21","date_gmt":"2022-02-10T16:35:21","guid":{"rendered":"https:\/\/80000hours.org\/?post_type=career_profile&#038;p=74634"},"modified":"2026-04-07T14:08:07","modified_gmt":"2026-04-07T14:08:07","slug":"china-related-ai-safety-and-governance-paths","status":"publish","type":"career_profile","link":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/","title":{"rendered":"China-related AI safety and governance&nbsp;paths"},"content":{"rendered":"<p>Expertise in China and its relations with the world might be critical in tackling some of the world&#8217;s most pressing problems. In particular, China&#8217;s relationship with the US is arguably the most important bilateral relationship in the world, with these two countries collectively accounting for over 40% of global GDP. These considerations led us to publish a guide to <a href=\"https:\/\/80000hours.org\/articles\/china-careers\/\">improving China\u2013Western coordination on global catastrophic risks and other key problems<\/a> in 2018. Since then, we have seen an increase in the number of people exploring this area.<\/p>\n<p>China is one of the most important countries developing and shaping advanced artificial intelligence (AI). The Chinese government&#8217;s spending on AI research and development is estimated to be on the same order of magnitude as that of the US government, and China&#8217;s AI research is <a href=\"https:\/\/oecd.ai\/en\/data-from-partners?selectedArea=ai-research&amp;selectedVisualization=ai-publications-by-country-over-time\">prominent on the world stage and growing<\/a>.<\/p>\n<p>Because of <a href=\"\/problem-profiles\/positively-shaping-artificial-intelligence\/\">the importance of AI from the perspective of improving the long-run trajectory of the world<\/a>, we think relations between China and the US on AI could be among the most important aspects of their relationship. 
Insofar as the EU and\/or UK influence advanced AI development through labs based in their countries or through their influence on global regulation, the state of understanding and coordination between European and Chinese actors on AI safety and governance could also be significant.<\/p>\n<p>That, in short, is why we think working on AI safety and governance in China and\/or building mutual understanding between Chinese and Western actors in these areas is likely to be one of the most promising China-related career paths. Below we provide more arguments and detailed information on this option.<\/p>\n<p>If you are interested in pursuing a career path described in this profile, <a href=\"\/speak-with-us\/?int_campaign=career-review\">contact 80,000 Hours&#8217; one-on-one team<\/a> and we may be able to put you in touch with a specialist advisor.<\/p>\n<div id=\"toc_container\" class=\"toc_white no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#why-pursue-this-path\"><span class=\"toc_number toc_depth_1\">1<\/span> Why pursue this path?<\/a><ul><li><a href=\"#1-safely-managing-the-introduction-of-ai-may-require-unprecedented-international-coordination\"><span class=\"toc_number toc_depth_2\">1.1<\/span> 1. Safely managing the introduction of AI may require unprecedented international coordination.<\/a><\/li><li><a href=\"#2-coordination-between-china-and-the-us-on-ai-safety-would-require-deliberate-effort\"><span class=\"toc_number toc_depth_2\">1.2<\/span> 2. Coordination between China and the US on AI safety would require deliberate effort.<\/a><\/li><li><a href=\"#3-there-is-growing-interest-in-ai-safety-and-governance-in-china\"><span class=\"toc_number toc_depth_2\">1.3<\/span> 3. There is growing interest in AI safety and governance in China.<\/a><\/li><li><a href=\"#4-you-could-help-pioneer-the-subfield-of-chinawestern-coordination-on-ai-safety-and-governance\"><span class=\"toc_number toc_depth_2\">1.4<\/span> 4. 
You could help pioneer the subfield of China\u2013Western coordination on AI safety and governance.<\/a><\/li><li><a href=\"#5-gaining-and-leveraging-expertise-on-china-is-a-promising-option-in-general\"><span class=\"toc_number toc_depth_2\">1.5<\/span> 5. Gaining and leveraging expertise on China is a promising option in general.<\/a><\/li><\/ul><\/li><li><a href=\"#arguments-against\"><span class=\"toc_number toc_depth_1\">2<\/span> Arguments against pursuing this path<\/a><ul><li><a href=\"#1-its-far-from-guaranteed-youd-have-a-large-impact-in-this-career\"><span class=\"toc_number toc_depth_2\">2.1<\/span> 1. It&#8217;s far from guaranteed you&#8217;d have a large impact in this career.<\/a><\/li><li><a href=\"#2-there-is-a-possibility-of-doing-harm\"><span class=\"toc_number toc_depth_2\">2.2<\/span> 2. There is a possibility of doing harm.<\/a><\/li><li><a href=\"#3-there-are-potential-abuses-of-ai-enabled-technology\"><span class=\"toc_number toc_depth_2\">2.3<\/span> 3. There are potential abuses of AI-enabled technology.<\/a><\/li><li><a href=\"#international-politics\"><span class=\"toc_number toc_depth_2\">2.4<\/span> 4. 
There are complex international political considerations and uncertainties.<\/a><\/li><\/ul><\/li><li><a href=\"#do-i-need-to-be-chinese-to-pursue-these-paths\"><span class=\"toc_number toc_depth_1\">3<\/span> Do I need to be Chinese to pursue these paths?<\/a><\/li><li><a href=\"#ideas-for-where-to-aim-longer-term\"><span class=\"toc_number toc_depth_1\">4<\/span> Ideas for where to aim longer term<\/a><\/li><li><a href=\"#alternative-routes-that-we-havent-investigated\"><span class=\"toc_number toc_depth_1\">5<\/span> Alternative routes that we haven&#8217;t investigated<\/a><\/li><li><a href=\"#personal-fit\"><span class=\"toc_number toc_depth_1\">6<\/span> Personal fit<\/a><ul><li><a href=\"#bilingualism-andor-cross-cultural-communication-skills\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Bilingualism and\/or cross-cultural communication skills<\/a><\/li><li><a href=\"#strong-networking-abilities-especially-for-policy-roles\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Strong networking abilities (especially for policy roles)<\/a><\/li><li><a href=\"#good-judgement-and-prudence\"><span class=\"toc_number toc_depth_2\">6.3<\/span> Good judgement and prudence<\/a><\/li><\/ul><\/li><li><a href=\"#learn-more\"><span class=\"toc_number toc_depth_1\">7<\/span> Learn more<\/a><\/li><li><a href=\"#additional-resources\"><span class=\"toc_number toc_depth_1\">8<\/span> Additional resources<\/a><ul><li><a href=\"#top-recommendations\"><span class=\"toc_number toc_depth_2\">8.1<\/span> Top recommendations<\/a><\/li><li><a href=\"#further-recommendations\"><span class=\"toc_number toc_depth_2\">8.2<\/span> Further recommendations<\/a><\/li><\/ul><\/li><\/ul><\/div>\n<div class=\"panel clearfix \">\n<p><strong>In a nutshell:<\/strong> We&#8217;d be excited to see more people build expertise to do work in or related to China in order to reduce <a href=\"\/problem-profiles\/artificial-intelligence\">long-term risks associated with the development and use of 
AI<\/a>.<\/p>\n<p>There are arguments both for and against pursuing this career path: on the one hand, effective coordination is imperative for avoiding dangerous conflicts and racing dynamics; on the other hand, there are major risks and complexities involved in this path.<\/p>\n<p>Some promising career paths to aim for include:<\/p>\n<ul>\n<li>Technical AI safety research<\/li>\n<li>Safety policy advising in an AI lab<\/li>\n<li>Research at a think tank or long-term focused research group<\/li>\n<li>Translation and publication advising <\/li>\n<\/ul>\n<p>Personal fit for these roles both in the West and in China depends heavily on experience, networking ability, language, and citizenship.<\/p>\n<\/div>\n<div class=\"border tw--rounded-md tw--mb-8\" >\n<div class=\"tw--bg-off-white tw--px-3.5 tw--py-5\">\n<h3 class=\"no-toc\"> Sometimes recommended \u2014 personal fit dependent<\/h3>\n<p>This career will be some people's highest-impact option if their personal fit is especially good.\n<\/p><\/div>\n<div class=\"tw--px-3.5 tw--py-5\">\n<h4 class=\"tw--text-base\">Review status<\/h4>\n<p>Based on a shallow investigation&nbsp;<i class=\n                  \"fas fa-question-circle text-primary icon-tooltip career-tooltip\" data-placement=\"right\"\n                  data-toggle=\"tooltip\" title=\n                  \"We\u2019ve interviewed several people with relevant expertise about this path and spent at least half a day reading the best existing advice (or similar). The purpose of these profiles is to help our readers prioritise further investigation -- our conclusions do not represent fully considered views.\"><br \/>\n                  <\/i><\/p><\/div><\/div>\n<h2><span id=\"why-pursue-this-path\" class=\"toc-anchor\"><\/span>Why pursue this path?<\/h2>\n<h3><span id=\"1-safely-managing-the-introduction-of-ai-may-require-unprecedented-international-coordination\" class=\"toc-anchor\"><\/span>1. 
Safely managing the introduction of AI may require unprecedented international coordination.<\/h3>\n<p><a href=\"https:\/\/80000hours.org\/problem-profiles\/positively-shaping-artificial-intelligence\/\">As discussed elsewhere<\/a>, without careful design, AI might act in ways unintended by humans, with potentially catastrophic effects. Even if we can control advanced AI, it could be very economically and socially disruptive (in good or bad ways), and could be used as a destabilising weapon of war.<\/p>\n<p>Because reducing existential risks from advanced AI benefits the whole world equally, it has the character of a global public goods problem. Steps to reduce existential risk tend to be undersupplied by the market, since each actor that can take such steps could capture only a small portion of the value (even if the actor is a large country) &#8212; while bearing all the costs (e.g. slower local AI progress).<\/p>\n<p>Past experience shows the provision of global (and intergenerational) public goods <em>is<\/em> possible with enough effort and coordination &#8212; for instance, in the case of <a href=\"https:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/12294659.2001.10804981?journalCode=rrpa20\">ozone layer protection<\/a>.<\/p>\n<p>However, coordination to ensure safe advanced AI may be more challenging to achieve. This is partly because AI is anticipated to bring a lot of advantages, and actors such as governments or companies around the world might be motivated to be the first to develop and deploy advanced AI systems in order to capture most of these benefits. 
With actors competing for speed and\/or performance, they may cut corners on safety, leading to a &#8220;race to the bottom.&#8221;<\/p>\n<p>Unprecedented global coordination may therefore be required, and it seems beneficial to have a lot of people doing thoughtful work to encourage this.<\/p>\n<h3><span id=\"2-coordination-between-china-and-the-us-on-ai-safety-would-require-deliberate-effort\" class=\"toc-anchor\"><\/span>2. Coordination between China and the US on AI safety would require deliberate effort.<\/h3>\n<p>It&#8217;s a challenging period for China\u2013US relations, including in the area of AI. There are a couple of theoretical reasons to think that these relations may remain strained:<\/p>\n<ul>\n<li><strong>Power dynamics:<\/strong> according to <a href=\"https:\/\/www.oxfordbibliographies.com\/view\/document\/obo-9780199743292\/obo-9780199743292-0038.xml\">power transition theory<\/a>, there is potential for conflict when a dominant nation and a challenger reach relative equivalence of power, especially when the challenger is dissatisfied with the status quo. There is significant debate over whether this theory is robust and whether it applies to China\u2013US relations in the 21st century. However, if both of these conditions are true, it suggests great effort may be required to avoid conflict and engender coordination between the two countries.<\/li>\n<li><strong>Differences in ideology and regime type:<\/strong> political leaders in China and the US diverge on a number of ideas about governance, values, and legitimate uses of technology. These divides can make it difficult for actors to see one another&#8217;s perspectives and come to agreements.<\/li>\n<\/ul>\n<h3><span id=\"3-there-is-growing-interest-in-ai-safety-and-governance-in-china\" class=\"toc-anchor\"><\/span>3. 
There is growing interest in AI safety and governance in China.<\/h3>\n<p>The number of organisations doing potentially promising work on AI safety and governance in and with China has increased since we published our <a href=\"https:\/\/80000hours.org\/articles\/china-careers\/\">earlier guide on China-related careers<\/a> in February 2018.<\/p>\n<p>For example, major academic labs like the <a href=\"https:\/\/arxiv.org\/pdf\/2507.16534\">Shanghai Artificial Intelligence Laboratory<\/a> and <a href=\"https:\/\/pair-lab.ai\/\">Peking University&#8217;s Alignment and Interaction Research Lab<\/a> are now publishing AI safety research on frontier systems and the risks of AGI \u2014 often with ideas and approaches that align with those from Western AI safety organisations. In recent years, the Beijing Academy of AI (set up in late 2018) has been <a href=\"https:\/\/www.iaps.ai\/research\/china-aisi-counterparts\">promoting international coordination on AI safety issues<\/a> through its conferences.  And in 2024, the <a href=\"https:\/\/ai-development-and-safety-network.cn\/\">China AI Development and Safety Association<\/a> was established by a group of leading institutions to unite Chinese academia, industry, and think tanks in pursuit of safe AI development. We provide more examples of organisations doing relevant work <a href=\"#example-organisations\">below<\/a>.<\/p>\n<p>In general, decision makers and AI experts in China also appear to be paying more attention to AI safety and governance issues:<\/p>\n<ul>\n<li>China&#8217;s <a href=\"https:\/\/www.cac.gov.cn\/2023-07\/13\/c_1690898327029107.htm\">first comprehensive regulations for generative AI<\/a> came into force in 2023, mandating security assessments and algorithm filing for generative AI companies. 
<\/li>\n<li>China&#8217;s <a href=\"https:\/\/carnegieendowment.org\/research\/2025\/10\/how-china-views-ai-risks-and-what-to-do-about-them\">&#8216;AI governance framework 2.0&#8217;<\/a>, released in 2025, acknowledges serious threats from AI \u2014 like loss of human control and the increased chance of chemical, biological, radiological, and nuclear (CBRN) weapons being misused. <\/li>\n<li>An international dialogue on AI safety issues including <a href=\"https:\/\/80000hours.org\/problem-profiles\/risks-from-power-seeking-ai\/\">misalignment<\/a> was held in Shanghai in 2025. Its attendees \u2014 including leading Chinese AI scientists and governance experts \u2014 agreed that <a href=\"https:\/\/idais.ai\/dialogue\/idais-shanghai\/\">&#8220;some AI systems today already demonstrate the capability and propensity to undermine their creators&#8217; safety and control efforts.&#8221;<\/a> <\/li>\n<li>Chinese premier Li Qiang has <a href=\"https:\/\/www.reuters.com\/world\/china\/china-proposes-new-global-ai-cooperation-organisation-2025-07-26\/\">proposed a new organisation<\/a> that would foster international coordination on regulating advanced AI. <\/li>\n<\/ul>\n<p>Compared to a hypothetical scenario in which Chinese actors are not concerned about AI safety, these developments suggest that this problem is somewhat tractable. That said, we are still highly uncertain about the overall tractability of increasing coordination to reduce AI risk.<\/p>\n<h3><span id=\"4-you-could-help-pioneer-the-subfield-of-chinawestern-coordination-on-ai-safety-and-governance\" class=\"toc-anchor\"><\/span>4. 
You could help pioneer the subfield of China\u2013Western coordination on AI safety and governance.<\/h3>\n<p>Despite the upward trend in work on AI safety and governance in general (including in China), work in these fields still seems highly neglected.<\/p>\n<p>Based on the number of employees at <a href=\"https:\/\/80000hours.org\/problem-profiles\/artificial-intelligence\/#neglectedness\">relevant organisations we have identified in the field of technical AI safety<\/a>, there are likely to be only a few hundred full-time technical AI safety researchers worldwide.<\/p>\n<p>The pool of those working in AI governance is likely to be broader, since it encompasses <a href=\"https:\/\/80000hours.org\/articles\/ai-policy-guide\/#what-are-the-roles-you-want-to-aim-for\">both policy practitioners and researchers<\/a>, but our best guess is that the number of people working full-time in the field of AI governance worldwide is still less than 1,000. And we&#8217;d guess that of these, less than 10% are actively working towards improving China\u2013Western coordination. This would come to less than 100 people.<\/p>\n<p>These are all rough figures and, as stated above, the field is quickly evolving. Nonetheless, it seems fair to conclude that as of 2021, only around 100 people were working to improve China\u2013Western coordination on AI safety and governance on a full-time basis. This seems to be a very small number of people given the magnitude of the subproblem of China\u2013Western coordination on AI. Therefore, all else equal, the marginal impact of additional people working on this problem should be high (though see <a href=\"#arguments-against\">some caveats below<\/a>).<\/p>\n<h3><span id=\"5-gaining-and-leveraging-expertise-on-china-is-a-promising-option-in-general\" class=\"toc-anchor\"><\/span>5. 
Gaining and leveraging expertise on China is a promising option in general.<\/h3>\n<p>As we talked about in our <a href=\"https:\/\/80000hours.org\/articles\/china-careers\/\">earlier guide to China-related careers<\/a>, China has a crucial role in almost all our <a href=\"https:\/\/80000hours.org\/problem-profiles\/#the-kinds-of-issues-we-currently-prioritize-most-highly\">priority problem areas<\/a>. This flows largely from it being home to a fifth of the world&#8217;s population, its status as the world&#8217;s second-largest economy, and its significance as a nuclear and military power. Despite this, there is relatively poor understanding and coordination between China and the West.<\/p>\n<p>In terms of <a href=\"https:\/\/80000hours.org\/articles\/career-capital\/\">career capital<\/a>, there are lots of backup options if you pursue a path of improving China\u2013Western coordination on AI. Many people recognise the importance of improving China\u2013Western coordination more broadly. In the West, there is a lot of interest in China affairs in general, and China expertise is only growing in importance and demand. So, even if your plans for improving coordination on AI don&#8217;t work out, you can still use the aptitude you will have gained in China\u2013Western relations to help solve other pressing problems.<\/p>\n<h2><span id=\"arguments-against\" class=\"toc-anchor\"><\/span>Arguments against pursuing this path<\/h2>\n<h3><span id=\"1-its-far-from-guaranteed-youd-have-a-large-impact-in-this-career\" class=\"toc-anchor\"><\/span>1. 
It&#8217;s far from guaranteed you&#8217;d have a large impact in this career.<\/h3>\n<p>Like career paths in <a href=\"https:\/\/80000hours.org\/articles\/us-ai-policy\/#your-social-impact-in-this-career-path-is-high-risk-high-reward\">US AI policy<\/a>, to have outsized impact in this path you would need to:<\/p>\n<ul>\n<li><strong>Be in the right place at the right time.<\/strong> In other words, be in a position to influence key decisions about the design or governance of AI systems when AI technologies are advanced enough that those decisions need to be made.<\/li>\n<li><strong>Have the good judgement and sufficient expertise to influence those decisions in a positive direction.<\/strong> It can be difficult to predict the expertise that will be needed in advance, or to know what actions will have the best long-term impact. Figuring these things out is a lot of the value you would contribute \u2014 and it won&#8217;t be easy.<\/li>\n<\/ul>\n<p>Plus:<\/p>\n<ul>\n<li><strong>The development and societal integration of AI would need to have major potential downsides and\/or upsides.<\/strong> If future technical or political developments are such that the potential for negative consequences or lost value from advanced AI is <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/ben-garfinkel-classic-ai-risk-arguments\/\">not as significant as it currently seems<\/a>, AI safety and governance would become a less important area to work on. <\/li>\n<li><strong>Coordination problems would need to be solvable, but not solved by default.<\/strong> If we are unable to make progress on international coordination issues around advanced AI systems (for instance, because the benefits of being ahead in its development are too substantial), then improving coordination becomes less tractable. 
Conversely, working on this problem has less impact if the problem is likely to solve itself through some other mechanism.<\/li>\n<\/ul>\n<p>Meeting all four of these conditions in concert seems somewhat unlikely, so we think the median outcome for an individual working on China\u2013Western coordination on AI safety and governance will be to have little impact.<\/p>\n<p>However, in scenarios in which all four of the above are satisfied, your impact could be very large indeed, and so we still recommend this career because of its large <a href=\"https:\/\/80000hours.org\/articles\/expected-value\/\">expected impact<\/a> and good backup options.<\/p>\n<h3><span id=\"2-there-is-a-possibility-of-doing-harm\" class=\"toc-anchor\"><\/span>2. There is a possibility of doing harm.<\/h3>\n<p>At 80,000 Hours, we encourage people to work on problems that are neglected by others and large in scale. Unfortunately, this often also means that people could accidentally do more damage if things don&#8217;t go well.<\/p>\n<p>It seems to us that China\u2013Western coordination on AI safety and governance is an issue for which this is a danger.<\/p>\n<p><a href=\"https:\/\/80000hours.org\/articles\/accidental-harm\/\">This article<\/a> lists six ways people trying to do good can unintentionally set back their cause. Many of them apply to this career path.<\/p>\n<p>One particularly easy way to have a substantial negative impact is to act unilaterally in contexts in which even one person mistakenly taking a particular action could pose widespread costs to the field or the world as a whole. So if you pursue this path, it seems wise to develop and consult a good network and a strong knowledge base, and to avoid unilateral action even when you think it is justified.<\/p>\n<p>In addition, people pursuing this path could cause harm by supporting the development and deployment of AI technologies in ethically questionable ways (even if unintentionally). 
This links to the next point.<\/p>\n<h3><span id=\"3-there-are-potential-abuses-of-ai-enabled-technology\" class=\"toc-anchor\"><\/span>3. There are potential abuses of AI-enabled technology.<\/h3>\n<p>Given the powerful economic and national incentives at play, the dual-use nature of the technology, and the complexity of AI supply chains, your efforts to increase beneficial coordination on AI could indirectly cause damage or be used by other interested actors in a harmful way. For example, if you help smooth the way for a certain useful compromise, what harmful uses of AI might result?<\/p>\n<p>People working to develop, procure, or regulate AI in highly charged, international contexts will therefore likely have to grapple with challenging ethical judgement calls with many relevant considerations. For instance:<\/p>\n<ul>\n<li>What is the ratio of expected benefits to harms involved in working on a particular project or at a particular organisation? <\/li>\n<li>Have you factored in the potential risk to your reputation, which could affect your ability to have an impact in the future? <\/li>\n<li>What is the chance that the same or worse would happen if you were not involved? <\/li>\n<\/ul>\n<p>It will be hard to think through these questions, and there is always the possibility of reaching wrong answers.<\/p>\n<h3><span id=\"international-politics\" class=\"toc-anchor\"><\/span>4. There are complex international political considerations and uncertainties.<\/h3>\n<p>The evolution of China\u2013US relations and international politics more broadly introduces constraints and uncertainties in careers aiming to improve China\u2013Western coordination on AI safety and governance.<\/p>\n<p><strong>For those considering technical safety research:<\/strong><\/p>\n<ul>\n<li>In <a href=\"https:\/\/www.nature.com\/articles\/d41586-020-02515-x\">recent years<\/a>, authorities in the US have increased scrutiny of Chinese researchers&#8217; backgrounds. 
This could affect Chinese nationals who plan to study and work in AI in the US. <\/li>\n<li>Under <a href=\"https:\/\/www.gov.uk\/guidance\/academic-technology-approval-scheme\">UK rules<\/a> since September 2020, students from certain countries (including China) need to undergo security vetting if they wish to pursue postgraduate studies in AI and other &#8220;sensitive subjects&#8221; that could be used in &#8220;Advanced Conventional Military Technology.&#8221;<\/li>\n<\/ul>\n<p><strong>For those considering public sector roles:<\/strong><\/p>\n<ul>\n<li>Studying abroad for over six months may disqualify people from applying for certain roles in the Chinese public sector, such as police positions in the public security or judicial administration systems. <\/li>\n<li>Meanwhile, Helen Toner, the Director of Strategy at the Center for Security and Emerging Technology at Georgetown University, has <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/helen-toner-on-security-and-emerging-technology\/\">suggested<\/a> that spending a significant amount of time in China or maintaining close and ongoing contact with Chinese nationals could make it harder to get a security clearance in the US. <\/li>\n<li>On the other hand, the <a href=\"https:\/\/www.borenawards.org\/eligible-programs#countries\">Boren Awards<\/a> funding programme, which invests in &#8220;linguistic and cultural knowledge for aspiring federal government employees,&#8221; lists China as one of the countries it prefers applicants to study in. <\/li>\n<li>We&#8217;ve been told that having a background in China &#8212; and possibly even just visiting &#8212; could exclude you from some Western government jobs.<\/li>\n<\/ul>\n<p>We are therefore uncertain about the extent to which time in China would damage someone&#8217;s career prospects in governments and think tanks in the US and other Western countries. 
This will likely depend on the particulars of the case; for instance, individuals who worked for foreign media outlets in China will likely face fewer challenges than those with experience at organisations affiliated with the Chinese government.<\/p>\n<p>It is therefore worth carefully considering the potential implications for your long-term career options before changing your location or organisation. Ideally, you should speak to people inside organisations you might work at to get their views.<\/p>\n<p><strong>For those considering think tank roles:<\/strong><br \/>\nIn March 2021, China introduced sanctions against certain political and academic figures and institutions from Europe and the UK, prohibiting them from travelling to or doing business with China. These measures were a response to sanctions imposed on Chinese officials by the EU and UK (and other Western countries). The inclusion of the <a href=\"https:\/\/merics.org\/en\">Mercator Institute for China Studies<\/a> (Europe&#8217;s leading think tank on China) among those sanctioned suggests that those pursuing China-focused think tank roles in the West could face hurdles as a result of the political environment.<\/p>\n<h2><span id=\"do-i-need-to-be-chinese-to-pursue-these-paths\" class=\"toc-anchor\"><\/span>Do I need to be Chinese to pursue these paths?<\/h2>\n<p>Below, we&#8217;ll lay out some specific career paths you could pursue. But which ones are most suitable for you will depend on your background, including your citizenship.<\/p>\n<p>Being productive and progressing in <em>any<\/em> of these paths will likely be difficult without strong Chinese language skills and cultural familiarity \u2014 at least for roles that are based in China. 
For example, foreign scientists working in China have said that <a href=\"https:\/\/www.nature.com\/articles\/d41586-018-00540-5\">assimilating into local culture is a tough task<\/a>.<\/p>\n<p>The question of whether you&#8217;ll need to be a Chinese citizen (or have Chinese heritage) for any of these roles is more complicated to answer.<\/p>\n<p>There are rarely <em>hard requirements<\/em> on citizenship here, with the exception of jobs requiring security clearance. But having Chinese citizenship or heritage can be a meaningful advantage in some contexts \u2014 and essentially irrelevant in others. From what we can tell:<\/p>\n<ul>\n<li><strong>Citizenship matters most in roles working closely with the Chinese government.<\/strong> This includes being an <a href=\"#ai-governance-researchers-at-chinese-think-tanks-and-universities\">AI governance researcher at university departments or think tanks<\/a>, since these institutions are typically state-backed and may have direct relationships with specific government departments. <\/li>\n<li><strong>Citizenship is not usually important in roles focused on China-Western coordination.<\/strong> In fact, it&#8217;s often beneficial to have people from a variety of nationalities doing this work. This includes <a href=\"#ai-governance-researchers-at-key-long-term-focused-research-groups\">working at long-term-focused research groups<\/a>, <a href=\"#ai-safety-and-interdisciplinary-researchers-focusing-on-problems-of-cooperation\">doing research focused on cooperation problems<\/a>, and <a href=\"#ai-focused-translators-and-publishing-advisors\">being a translator<\/a>.<\/li>\n<li><strong>There have been recent efforts to get more non-Chinese nationals into technical roles in China<\/strong> \u2014 including <a href=\"#example-organisations\">research and engineering positions at Chinese AI companies<\/a> and <a href=\"#ai-safety-researchers-and-professors-at-top-chinese-ai-academic-labs\">academic labs<\/a>. 
Notably, China&#8217;s <a href=\"https:\/\/www.china-briefing.com\/news\/chinas-entry-exit-k-visa-rules-2025\/\">K-Visa programme<\/a> (launched October 2025) is designed to enable foreign STEM professionals to work in China with more flexibility and longer periods of stay.<\/li>\n<li>With the rising political tensions we described above, <strong>there&#8217;s <a href=\"https:\/\/uscnpm.org\/analysis\/fears-of-a-china-initiative-revival-stir-anxiety-among-chinese-american-academics\/\">increased scrutiny<\/a> over people with close ties to China in some US contexts<\/strong> \u2014 even beyond cases where a job requires security clearance. It&#8217;s not clear exactly how this might affect your job prospects, but it may be worth carefully considering the political situation if, for example, you&#8217;re Chinese and thinking about becoming a <a href=\"#china-ai-analysts-at-top-think-tanks-in-the-us-and-the-uk\">China analyst at a US think tank<\/a>. <\/li>\n<\/ul>\n<h2><span id=\"ideas-for-where-to-aim-longer-term\" class=\"toc-anchor\"><\/span>Ideas for where to aim longer term<\/h2>\n<p>In this section we set out some options to aim for, which have been researched enough for us to feel comfortable recommending them. 
How you should prioritise between these depends mostly on your <a href=\"https:\/\/80000hours.org\/articles\/comparative-advantage\/\">comparative advantage<\/a> and <a href=\"https:\/\/80000hours.org\/articles\/personal-fit\/\">personal fit<\/a> \u2014 along with the considerations on citizenship we highlighted above.<\/p>\n<h3 id=\"example-organisations\" class=\"no-toc\">AI safety researchers and engineers at top Chinese AI companies<\/h3>\n<p>This path would involve doing research on AI safety-related topics, such as <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/paul-christiano-ai-alignment-solutions\/\">alignment, robustness<\/a>, or <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/chris-olah-interpretability-research\/\">interpretability<\/a>, at top Chinese AI companies. This could be a way to use your skills to contribute to safer development and deployment of advanced AI systems \u2014 provided you&#8217;re working under leadership that already cares about the risks (it can be hard to shift attitudes internally otherwise).<\/p>\n<p>You could also potentially help promote AI safety among other researchers, if you are able to publish research or communicate about it at conferences.<\/p>\n<p>In the long run, you could aim to progress to a senior position where you can leverage your connections and expertise to shape industry standards (for example, by providing inputs to policy drafting).<\/p>\n<p>If your comparative advantage and personal fit are such that you would like to aim for this route over technical AI safety roles elsewhere, we recommend first trying to gain career capital. 
The best way to do this is by studying at top graduate schools, publishing papers at top AI conferences, and <a href=\"\/career-reviews\/working-at-an-ai-lab\/\">working at a major AI safety company<\/a> or tech company \u2014 regardless of location.<\/p>\n<p>In China, companies that <em>may<\/em> have relevant positions include:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.deepseek.com\/en\/\">DeepSeek<\/a> <\/li>\n<li><a href=\"https:\/\/qwenlm.github.io\/about\/\">Alibaba&#8217;s Qwen team<\/a>  <\/li>\n<li>The team working on Kimi at <a href=\"https:\/\/www.moonshot.ai\/\">Moonshot AI<\/a><\/li>\n<li>The team working on Hunyuan at <a href=\"https:\/\/www.tencent.com\/zh-cn\/\">Tencent<\/a> <\/li>\n<li><a href=\"https:\/\/www.huawei.com\/en\/technology-insights\/industry-insights\/technology\/ai\">Huawei<\/a>&#8217;s AI unit <\/li>\n<li><a href=\"https:\/\/research.baidu.com\/\">Baidu<\/a> <\/li>\n<li><a href=\"https:\/\/www.sensetime.com\/en\">SenseTime<\/a> <\/li>\n<li><a href=\"https:\/\/z.ai\">Z.ai<\/a> (previously Zhipu.ai), a startup founded out of Tsinghua University <\/li>\n<li><a href=\"https:\/\/www.realai.ai\/\">RealAI<\/a>, a startup backed by the Honorary Dean of Tsinghua University&#8217;s Institute of AI<\/li>\n<\/ul>\n<p>Most of these companies have done at least some work relevant to safety. But working at one of these companies also carries a substantial risk of harm. If you work at one of them, you might be able to reduce AI risk by advancing safety research or aiding the adoption of important practices, or you might simply accelerate the technology, worsen race dynamics, and make the situation overall worse.<\/p>\n<p>It&#8217;s worth looking into the specific companies and teams you might be joining, thinking about your own personal fit, and making a bespoke choice. 
We have a <a href=\"https:\/\/80000hours.org\/career-reviews\/working-at-an-ai-lab\/\">guide to whether you should choose to work at a frontier AI company here<\/a> \u2014 it&#8217;s focused on US companies, but much of what&#8217;s there will also apply to Chinese companies. We would also be happy to <a href=\"\/speak-with-us\/\">speak to people one-on-one about this decision<\/a>.<\/p>\n<p>It&#8217;s also worth noting that some of the companies above \u2014 like Z.ai, Huawei, and SenseTime \u2014 are on <a href=\"https:\/\/en.wikipedia.org\/wiki\/Entity_List\">a list of entities facing trade restrictions from the US<\/a>. How this affects your career in the long run will depend on your circumstances, but it&#8217;s possible that working at one of these companies would make it harder for you to obtain US security clearances later on, or to collaborate on sensitive projects with US partners. You should factor this in when deciding where to work.<\/p>\n<p>In addition to research credentials and good judgement, people pursuing this path need patience and strong interpersonal skills to be able to progress to influential positions from which they can shape industry standards.<\/p>\n<h3 id=\"ai-safety-researchers-and-professors-at-top-chinese-ai-academic-labs\" class=\"no-toc\">AI safety researchers and professors at top Chinese AI academic labs<\/h3>\n<p>These roles would also involve doing research related to AI safety, but at top Chinese university labs. This could be valuable both for making progress on technical safety problems and for encouraging interest in AI safety among other researchers &#8212; especially if you progress to take on teaching or supervisory responsibilities. 
You might even be able to contribute to AI policy drafting once you&#8217;re senior enough, since many university departments will have close ties to relevant government departments.<\/p>\n<p>On top of that, people in these roles sometimes have opportunities to collaborate with AI companies on safety work.<\/p>\n<p>Early in your career, you&#8217;ll want to target academic labs with existing safety work. Options include:<\/p>\n<ul>\n<li>Peking University&#8217;s <a href=\"https:\/\/pair-lab.ai\/\">Alignment and Interaction Research Lab<\/a>  <\/li>\n<li>Tsinghua University&#8217;s <a href=\"https:\/\/iiis.tsinghua.edu.cn\/en\/Research\/Research_Groups\/Foundations_of_Machine_Learning_Lab__FunML_Lab_.htm\">Foundations of Machine Learning Lab<\/a> (part of the Institute for Interdisciplinary Information Sciences)<\/li>\n<li>Professor Quanshi Zhang&#8217;s <a href=\"https:\/\/sjtu-xai-lab.github.io\/\">interpretability-focused lab<\/a> at Shanghai Jiaotong University<\/li>\n<li>Professor Fu Jie&#8217;s <a href=\"https:\/\/bigaidream.github.io\/project\/auto\/\">&#8216;Autoformalization and Formally Verifiable AI&#8217; project<\/a> at Shanghai AI Lab<\/li>\n<li>Professor <a href=\"https:\/\/ravensanstete.github.io\/en\/\">Zudong Pan&#8217;s AI safety research<\/a> at Fudan University <\/li>\n<li>The Beijing Academy of Artificial Intelligence, which (alongside its technical work on AI risks) is particularly active in <a href=\"https:\/\/www.iaps.ai\/research\/china-aisi-counterparts\">promoting international cooperation on AI safety<\/a> (note this is <a href=\"https:\/\/en.wikipedia.org\/wiki\/Entity_List\">one of the Chinese organisations facing US trade restrictions<\/a>, so working here <em>might<\/em> restrict your future opportunities with projects involving US partners)<\/li>\n<\/ul>\n<p>If you&#8217;re more senior, you&#8217;ll have much more flexibility to define your own research. 
So, you won&#8217;t <em>necessarily<\/em> need to target labs with existing AI safety work \u2014 instead, you could join any well-resourced lab and try to drive forward a new AI safety project while you&#8217;re there.<\/p>\n<p>If you want to aim for this route, we recommend first gaining career capital by studying at elite graduate schools, publishing at top AI conferences, and <a href=\"\/career-reviews\/working-at-an-ai-lab\/\">working at major AI safety companies and labs<\/a> \u2014 regardless of location. Over time, you could try to build a Chinese network by collaborating with sympathetic Chinese AI labs and overseas Chinese professors. As in industrial labs, you could pursue field-building opportunities if you are a good fit.<\/p>\n<h3 id=\"ai-governance-researchers-at-chinese-think-tanks-and-universities\" class=\"no-toc\">AI governance researchers at Chinese think tanks and universities<\/h3>\n<p>This path can involve producing safety policy recommendations for the Chinese government or doing more public-facing research.<\/p>\n<p>This could be particularly high impact because most think tanks and university departments in China are affiliated with the government in some way. 
These institutions therefore have a more direct route to informing government policy than many think tanks in countries like the US and UK.<\/p>\n<p>Below are some options for think tanks and university departments doing AI-related work (as of 2026):<\/p>\n<ul>\n<li>The <a href=\"http:\/\/www.caict.ac.cn\/english\/\">China Academy of Information and Communications Technology<\/a>, which is subordinate to the Ministry of Industry and Information Technology<\/li>\n<li>The <a href=\"http:\/\/aiig.tsinghua.edu.cn\/en\/\">Institute of AI International Governance<\/a> at Tsinghua University, whose <a href=\"https:\/\/www.bsg.ox.ac.uk\/people\/xue-lan\">Dean<\/a> is Chair of the <a href=\"https:\/\/www.newamerica.org\/cybersecurity-initiative\/digichina\/blog\/translation-chinese-expert-group-offers-governance-principles-responsible-ai\/\">AI Governance Expert Committee<\/a> established by China&#8217;s Ministry of Science and Technology, and whose Honorary Dean is Fu Ying, former Vice Minister of Foreign Affairs<\/li>\n<li>The <a href=\"http:\/\/ciss.tsinghua.edu.cn\/column\/english\">Center for International Security and Strategy<\/a> at Tsinghua University, headed by Fu Ying <\/li>\n<li>The <a href=\"http:\/\/cistp.sppm.tsinghua.edu.cn\/en\/index.htm\">China Institute for Science and Technology Policy<\/a> at Tsinghua University, whose board is chaired by China&#8217;s Minister of Science and Technology<\/li>\n<li><a href=\"https:\/\/beijing.ai-safety-and-governance.institute\/\">Beijing Institute of AI Safety and Governance<\/a> led by Yi Zeng (who is a board member of the National Governance Committee of Next Generation Artificial Intelligence, established by China&#8217;s Ministry of Science and Technology)<\/li>\n<li>The <a href=\"https:\/\/www.szlhq.gov.cn\/english\/licc_20231220\/globalpartners\/content\/post_11351061.html\">Intellisia Institute<\/a>, an independent international relations think tank that produces a weekly AI Insights update <\/li>\n<li>Fudan 
University&#8217;s <a href=\"https:\/\/fddi.fudan.edu.cn\/_t2515\/qqrgzncxzlzx\/list.htm\">Center for Global AI Innovative Governance<\/a><\/li>\n<li>Tongji University&#8217;s <a href=\"https:\/\/aisg.tongji.edu.cn\/\">Shanghai Collaborative Innovation Center for AI Social Governance<\/a> <\/li>\n<\/ul>\n<p>These institutions have thousands of members and accept people from a range of backgrounds depending on the role. However, candidates with the best chance of securing the most impactful jobs here would have studied at top Chinese universities like Peking University, Tsinghua University, or Renmin University of China. Undergraduate degrees in computer science, AI, or electronic engineering are typically preferred over humanities degrees. Having a postgraduate degree in public policy on top of that would make you an even stronger candidate.<\/p>\n<h3 id=\"ai-governance-researchers-at-key-long-term-focused-research-groups\" class=\"no-toc\">AI governance researchers at key long-term-focused research groups<\/h3>\n<p>This path would involve conducting research with a longer-term focus and potentially less immediate policy relevance than at a standard think tank. This could be either:<\/p>\n<ul>\n<li>Descriptive research, such as analysing the political and economic forces that might influence the deployment of any advanced AI technology in China.<\/li>\n<li>Prescriptive research, such as making recommendations for reducing the risks of racing dynamics between Chinese and non-Chinese companies. 
<\/li>\n<\/ul>\n<p>This path will likely allow more emphasis on finding disinterested avenues to international coordination compared to work at national security\u2013related think tanks, which may face more pressure to respond to policymakers&#8217; desire to gain or maintain strategic advantage.<\/p>\n<p>Potential organisations to aim for include:<\/p>\n<ul>\n<li>The <a href=\"https:\/\/governance.ai\/\">Centre for the Governance of AI<\/a> (GovAI)<\/li>\n<li>The <a href=\"https:\/\/www.cser.ac.uk\/\">Centre for the Study of Existential Risk<\/a><\/li>\n<li>The <a href=\"https:\/\/longtermrisk.org\/\">Center on Long-Term Risk<\/a><\/li>\n<li>The <a href=\"https:\/\/gcrinstitute.org\/\">Global Catastrophic Risk Institute<\/a> <\/li>\n<\/ul>\n<p>We are uncertain how much capacity such organisations have for China-focused full-time staff. If you are unable to join these organisations full-time, you could consider volunteering or contracting with them on China-related projects. To add value, you would likely need to be able to propose and drive projects with little direction.<\/p>\n<h3 id=\"ai-safety-and-interdisciplinary-researchers-focusing-on-problems-of-cooperation\" class=\"no-toc\">AI safety and interdisciplinary researchers focusing on problems of cooperation<\/h3>\n<p>This path would involve conducting research into problems of cooperation, in both AI and other disciplines. According to &#8220;<a href=\"https:\/\/arxiv.org\/abs\/2012.08630\">Open problems in cooperative AI<\/a>,&#8221; an article co-authored by research scientists at DeepMind and Professor Allan Dafoe:<\/p>\n<blockquote><p>\n  Central questions include: how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and\/or humans), such as through improved mechanism design and mediation. 
This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms.<\/p>\n<p>  However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.\n<\/p><\/blockquote>\n<p>Cooperative AI emerges in part from multi-agent research, a field whose importance other AI safety\u2013focused groups and researchers have highlighted. These include researchers at <a href=\"https:\/\/arxiv.org\/abs\/2006.04948\">UC Berkeley&#8217;s Center for Human-Compatible Artificial Intelligence and Universit\u00e9 de Montr\u00e9al&#8217;s Mila<\/a> and the <a href=\"https:\/\/www.lesswrong.com\/posts\/EzoCZjTdWTMgacKGS\/clr-s-recent-work-on-multi-agent-systems#Multi_agent_learning\">Center on Long-Term Risk<\/a>.<\/p>\n<p>In <a href=\"https:\/\/www.penguinrandomhouse.com\/books\/566677\/human-compatible-by-stuart-russell\/\"><em>Human Compatible: Artificial Intelligence and the Problem of Control<\/em><\/a>, Stuart Russell suggested that:<\/p>\n<blockquote><p>\n  We will need to generalize inverse reinforcement learning from the single-agent setting to the multi-agent setting\u2014that is, we will need to devise learning algorithms that work when the human and robot are part of the same environment and interacting with each other.\n<\/p><\/blockquote>\n<p>(See our <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/stuart-russell-human-compatible-ai\/\">interview with Stuart<\/a> for more.)<\/p>\n<p>We think research of this kind could help inform governance and technical 
proposals for improving coordination between principals and agents in different countries. While we have not explored related research in China in depth, labs that seem to be conducting relevant work include:<\/p>\n<ul>\n<li>The <a href=\"https:\/\/ai.ustc.edu.cn\/main.htm\">University of Science and Technology of China Robotics Lab<\/a><\/li>\n<li><a href=\"https:\/\/cloud.tencent.com\/developer\/article\/1146738\">Peking University<\/a><\/li>\n<li><a href=\"http:\/\/dengji-zhao.net\/smart\/index.html\">ShanghaiTech Multi-Agent Systems Research Team<\/a><\/li>\n<li><a href=\"https:\/\/www.nature.com\/articles\/d42473-019-00180-x\">Shanghai Research Institute for Intelligent Autonomous Systems at Tongji University<\/a><\/li>\n<\/ul>\n<h3 id=\"ai-focused-translators-and-publishing-advisors\" class=\"no-toc\">AI-focused translators and publishing advisors<\/h3>\n<p>This kind of role would support the translation of AI-related publications to facilitate information exchange between China and the English-speaking world. Building a common understanding of the risks and opportunities associated with advanced AI seems to be an important precursor to establishing any agreements or mechanisms for jointly mitigating risks.<\/p>\n<p><a href=\"https:\/\/chinai.substack.com\/p\/chinai-48-year-1-of-chinai\">Jeff Ding has highlighted<\/a> that while AI-related developments covered in Western outlets tend to be quickly translated for a Chinese audience, the reverse is not true. 
Valuable initiatives helping to reduce this imbalance include:<\/p>\n<ul>\n<li>The Center for Security and Emerging Technology&#8217;s <a href=\"https:\/\/cset.georgetown.edu\/article\/tag\/translation\/\">translations of Chinese AI documents<\/a> <\/li>\n<li>The <a href=\"https:\/\/digichina.stanford.edu\/\">DigiChina project<\/a> at Stanford University<\/li>\n<li>Jeff Ding&#8217;s <a href=\"https:\/\/chinai.substack.com\/\">ChinAI newsletter<\/a><\/li>\n<\/ul>\n<p>Alternatively, native speakers of Mandarin could consider acting as translation advisors for publishing houses and authors seeking to translate English books related to AI safety into Chinese. Examples of publications that have benefited from the support of such advisors include the Chinese versions of <a href=\"https:\/\/book.douban.com\/subject\/30262617\/\"><em>Life 3.0<\/em><\/a> and <a href=\"https:\/\/book.douban.com\/subject\/35149258\/\"><em>Human Compatible<\/em><\/a>.<\/p>\n<p>For this path, it could be hard to find full-time roles that allow for specialisation in AI safety and governance; you may also want to explore relevant work that can be performed in a contractor capacity.<\/p>\n<p>Experience in technology-related translation and\/or relevant connections would be valuable for pursuing these options.<\/p>\n<h3 id=\"ai-governance-or-policy-advisors-at-top-chinese-ai-companies\" class=\"no-toc\">AI governance or policy advisors at top Chinese AI companies<\/h3>\n<p>A role in this category would likely involve assessing and mitigating risks from a lab&#8217;s development and deployment of AI. 
It could involve working with other internal stakeholders to design and implement governance and accountability mechanisms to reduce risks from unsafe AI systems.<\/p>\n<p>It might also involve working with people outside the lab &#8212; for instance, advising government departments directly on AI policy, or working with other organisations to develop standards and best practices through institutions like the <a href=\"https:\/\/www.aiia-ai.org\/\">China AI Industry Alliance<\/a>.<br \/>\nIf you help your company take the lead on AI governance, you could also have a positive influence on other Chinese AI companies.<\/p>\n<p>To find roles like these, try looking for:<\/p>\n<ul>\n<li>In-house policy research teams. For example, <a href=\"https:\/\/tisi.org\/\">Tencent<\/a> and <a href=\"https:\/\/damo.alibaba.com\/about?language=en\">Alibaba<\/a> both have their own &#8216;research institutes&#8217; which research and communicate about policy issues with stakeholders and the public \u2014 basically serving as the company&#8217;s external policy &#8216;voice&#8217;. Both Tencent and Alibaba are publicly listed, which means they&#8217;re under more pressure than private companies are to demonstrate corporate social responsibility.<\/li>\n<li>Roles where you&#8217;re responsible for internal risk management and compliance frameworks. Some AI companies (like DeepSeek) advertise compliance-related roles under names like &#8220;AGI Legal Counsel&#8221;. You&#8217;ll need legal expertise as well as some technical fluency to get one of these positions. 
<\/li>\n<li>At Chinese labs that don&#8217;t have policy research teams or have not yet done much work on AI governance issues, you could try to encourage more discussion of these topics from a position on teams related to government relations, industry research, legal, or strategy, or in the office of the CEO.<\/li>\n<\/ul>\n<p>It could be difficult to get staff with diverse interests and goals to take meaningful action to reduce risks from AI. This may be especially true if such action is expected to negatively impact profit. It can be hard to identify from the outside which teams and roles will genuinely have the resources and backing from leadership to drive positive change. It sometimes takes high-profile incidents to get a lab to take safety and security seriously, so having impact with this path may depend on patiently building your network and resources and acting strategically when the right moment arises.<\/p>\n<p>We&#8217;d also encourage you to seriously consider the potential downsides of taking this path, some of which we talked about <a href=\"\/#arguments-against\">earlier<\/a>. In particular, it&#8217;s possible that some of these roles could appear outwardly beneficial for the world while mostly serving to boost your company&#8217;s success \u2014 which could actually undermine safety and governance efforts. We&#8217;ve written more about <a href=\"https:\/\/80000hours.org\/career-reviews\/working-at-an-ai-lab\/\">whether you should take a job at an AI company (and if so, what kind of job) here<\/a>.<\/p>\n<h3 id=\"ai-governance-or-policy-advisors-at-top-ai-companies-in-the-us-and-uk\" class=\"no-toc\">AI governance or policy advisors at top AI companies in the US and UK<\/h3>\n<p>Roles in this category might involve risk mitigation and stakeholder engagement tasks similar to those set out in the section above, but with more focus on trying to build understanding of and communication with Chinese actors. 
While it might be rare for top US and UK AI labs to advertise AI governance and policy roles that concentrate on China, you might be able to create opportunities if you have a China-related background. For instance, you could start knowledge-sharing and research collaborations with Chinese labs or academics in the area of AI safety, or advise internally on how initiatives and announcements might be perceived by Chinese stakeholders.<\/p>\n<p>Some of the leading labs working on advanced AI (such as <a href=\"https:\/\/openai.com\/\">OpenAI<\/a>, <a href=\"https:\/\/deepmind.com\/\">Google DeepMind<\/a>, and <a href=\"https:\/\/ai.facebook.com\/\">Meta AI<\/a>) are currently based in the US and the UK. What they do matters not only for the systems they are building, but also for shaping the perceptions and behaviours of other AI labs around the world (including in China), where they are well known and well regarded. This makes working with these Western labs high leverage. (Read more about <a href=\"\/career-reviews\/working-at-an-ai-lab\/\">whether to work in a leading AI lab<\/a>.)<\/p>\n<p>Aside from policy roles in these labs, there could be other opportunities to engage with China in Western labs with a role in AI development. You could work in government relations for Microsoft or Google in Beijing, or work in a role in a Western company that involves international engagement. 
For instance, <a href=\"https:\/\/blogs.microsoft.com\/eupolicy\/2020\/01\/17\/senior-gov-affairs-leaders-appointed-brussels-new-york\/\">Microsoft has a representation office<\/a> to the United Nations, and several Western companies are represented on the <a href=\"https:\/\/venturebeat.com\/2019\/05\/29\/world-economic-forum-launches-global-ai-council-to-address-governance-gaps\/\">World Economic Forum&#8217;s AI Council<\/a>.<\/p>\n<p>We don&#8217;t know how many roles involve the opportunity to work on or engage with China, at least in the short term. Some Western companies may be cautious about operating in or engaging with China, for fear of negative media or political reaction.<\/p>\n<h3 id=\"china-ai-analysts-at-top-think-tanks-in-the-us-and-the-uk\" class=\"no-toc\">China AI analysts at top think tanks in the US and the UK<\/h3>\n<p>This path would likely involve researching AI developments in China, making recommendations to policymakers in the US and UK, and communicating the outputs of research in written reports and oral briefings.<\/p>\n<p>This could provide a route to impact not only through directly influencing government decisions, but also through shaping the wider public conversation on AI and China, as think tank researchers are often cited in media reports. 
In addition, think tanks may lead or participate in <a href=\"https:\/\/www.usip.org\/publications\/2019\/07\/primer-multi-track-diplomacy-how-does-it-work\">track-two (unofficial) dialogues<\/a> with counterparts in other countries, which can help build trust, improve communication, and foster mutual understanding.<\/p>\n<p>Think tanks doing relevant work include:<\/p>\n<ul>\n<li>The <a href=\"https:\/\/carnegieendowment.org\/\">Carnegie Endowment for International Peace<\/a><\/li>\n<li>The <a href=\"https:\/\/cset.georgetown.edu\/\">Center for Security and Emerging Technology<\/a> (CSET)<\/li>\n<li>The <a href=\"https:\/\/www.cnas.org\/\">Center for a New American Security<\/a> (CNAS)<\/li>\n<li>The <a href=\"https:\/\/governance.ai\/\">Centre for the Governance of AI<\/a> (GovAI)<\/li>\n<li>The <a href=\"https:\/\/www.brookings.edu\/\">Brookings Institution<\/a><\/li>\n<\/ul>\n<p>Experts from CSET, CNAS, and GovAI have <a href=\"https:\/\/www.uscc.gov\/hearings\/technology-trade-and-military-civil-fusion-chinas-pursuit-artificial-intelligence-new\">testified<\/a> before the U.S.-China Economic and Security Review Commission.<\/p>\n<p>Mandarin language skills and an academic background in international relations, China studies, and\/or security studies would be helpful for pursuing this path.<\/p>\n<p><strong>Note on citizenship requirements:<\/strong> Some US think tank jobs related to national security <a href=\"https:\/\/www.wtplaw.com\/news-events\/security-clearances-who-will-need-them-and-how-to-get-one\">require a security clearance<\/a>. US citizenship is required to gain security clearance, and <a href=\"https:\/\/news.clearancejobs.com\/2014\/01\/23\/former-chinese-citizens-and-u-s-security-clearances-doha-hearings\/\">significant ties with China can sometimes bar even US citizens from obtaining such clearance<\/a>. However, we expect that only a small fraction of roles at the think tanks mentioned above would require clearances. 
We are unsure about whether there are security clearance requirements for any roles in UK think tanks.<\/p>\n<h2><span id=\"alternative-routes-that-we-havent-investigated\" class=\"toc-anchor\"><\/span>Alternative routes that we haven&#8217;t investigated<\/h2>\n<p>In the policy sphere, it could be impactful to advise on China and technology policy within the US, UK, or EU governments. While we have written a <a href=\"https:\/\/80000hours.org\/articles\/us-ai-policy\/\">career profile on US AI policy<\/a>, we have not investigated specific China-focused roles within the governments of the US or other countries. Nor have we investigated the path of working on science and technology policy within the Chinese government, though this could be worth exploring if you are a Chinese citizen.<\/p>\n<p>Advising parts of international organisations focused on AI, such as the <a href=\"https:\/\/www.un.org\/en\/sg-digital-cooperation-panel\">UN Secretary General&#8217;s High-level Panel on Digital Cooperation<\/a> or the <a href=\"https:\/\/oecd.ai\/en\/\">OECD&#8217;s AI Policy Observatory<\/a>, could also provide opportunities for impact.<\/p>\n<p>In industry, it could be worth exploring opportunities in Chinese semiconductor or cloud computing companies. This is based on <a href=\"https:\/\/80000hours.org\/career-reviews\/become-an-expert-in-ai-hardware\/\">our view<\/a> that building expertise in AI hardware is potentially high impact.<\/p>\n<p>There may also be relevant positions in the philanthropic sector. 
For instance, <a href=\"https:\/\/schmidtfutures.com\/our-method\/careers\/researcher-plaintext-group\/\">Schmidt Futures has advertised research roles<\/a> that address technology relations with China.<\/p>\n<p>Finally, it could be valuable to work on improving great power relations more broadly, for instance through academic groups and foundations focused on China\u2013US relations, journalism, or international relations research.<\/p>\n<p>We have not investigated any of these paths in detail.<\/p>\n<h2><span id=\"personal-fit\" class=\"toc-anchor\"><\/span>Personal fit<\/h2>\n<p>In other guides, we set out personal fit considerations for <a href=\"https:\/\/80000hours.org\/career-reviews\/artificial-intelligence-risk-research\/#who-should-consider-ai-safety-research\">technical AI safety research<\/a>, <a href=\"https:\/\/80000hours.org\/articles\/us-ai-policy\/#fit-who-is-especially-well-placed-to-pursue-this-career\">US AI public policy careers<\/a>, and <a href=\"https:\/\/80000hours.org\/career-reviews\/think-tank-research\/#personal-fit\">think tank research<\/a>, many of which apply to the paths discussed here.<\/p>\n<p>Rather than repeating ourselves, below we highlight some skills and traits that seem particularly relevant to China-related AI safety and governance paths.<\/p>\n<h3><span id=\"bilingualism-andor-cross-cultural-communication-skills\" class=\"toc-anchor\"><\/span>Bilingualism and\/or cross-cultural communication skills<\/h3>\n<p>People with strong Mandarin and English will have a greater chance of success in this area. Proficiency in French or German could also open up opportunities to support coordination between China and the EU.<\/p>\n<p>Beyond bilingualism, the ability to communicate and empathise across cultures is important for any role involving international coordination. 
Experience living abroad and\/or working in teams of people with highly diverse backgrounds may be useful in this respect.<\/p>\n<p>It may be particularly challenging to maintain cross-cultural empathy if you are surrounded by colleagues with a non-empathetic view of people from the other country. In such situations, it seems important to:<\/p>\n<ul>\n<li>Be able to resist influence by osmosis from your social surroundings <\/li>\n<li>Actively work to <a href=\"https:\/\/greatergood.berkeley.edu\/topic\/empathy\/definition#how-cultivate-empathy\">cultivate<\/a> and maintain cognitive empathy<\/li>\n<\/ul>\n<h3><span id=\"strong-networking-abilities-especially-for-policy-roles\" class=\"toc-anchor\"><\/span>Strong networking abilities (especially for policy roles)<\/h3>\n<p>Networking is important to career advancement in many cultures, and success in policy and strategy in particular requires the ability to build and make good use of connections.<\/p>\n<p>We have some practical advice on <a href=\"https:\/\/80000hours.org\/career-guide\/how-to-be-successful\/#6-surround-yourself-with-great-people\">building and maintaining a strong network of relationships with others here<\/a> and also <a href=\"https:\/\/80000hours.org\/2015\/03\/how-to-network\/\">here<\/a>.<\/p>\n<h3><span id=\"good-judgement-and-prudence\" class=\"toc-anchor\"><\/span>Good judgement and prudence<\/h3>\n<p><a href=\"https:\/\/80000hours.org\/2020\/09\/good-judgement\/\">Good judgement<\/a> is important to pursuing all of our <a href=\"https:\/\/80000hours.org\/articles\/high-impact-careers\/\">recommended career paths<\/a>, but we are including it here to really emphasise its importance.<\/p>\n<p>Many of the paths described above involve challenges like working with people who might have their own agendas separate from AI safety, dealing with incentives that might point you toward less of a focus on safety, and navigating delicate and high-stakes relationships and situations.<\/p>\n<p>The risk 
of <a href=\"https:\/\/www.nickbostrom.com\/information-hazards.pdf\">information hazards<\/a> also demonstrates the need for good judgement and prudence. <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1111\/1758-5899.12403\">Openness in AI development could contribute to harmful applications of the technology<\/a> and make racing dynamics more closely competitive. It is important to recognise when information could be hazardous, to judge impartially the risks and benefits of wider disclosure, and to practise caution when making decisions.<\/p>\n<p>All this makes good judgement &#8212; the ability to think through complex considerations and make measured decisions &#8212; particularly crucial.<\/p>\n<p>Keep in mind too that good judgement often means knowing when to find someone else with good judgement to listen to, and <a href=\"https:\/\/web.archive.org\/web\/20200305193355\/https:\/\/nickbostrom.com\/papers\/unilateralist.pdf\">avoiding making decisions unilaterally<\/a>. (See our <a href=\"https:\/\/80000hours.org\/2020\/09\/good-judgement\/\">separate piece on good judgement<\/a> for more.)<\/p>\n<h2><span id=\"learn-more\" class=\"toc-anchor\"><\/span>Learn more<\/h2>\n<p>If you are thinking about pursuing a career working on this problem, but would like to build more general understanding and career capital first, you could consider:<\/p>\n<ul>\n<li>Learning more about AI, and in particular <a href=\"https:\/\/80000hours.org\/articles\/ai-safety-syllabus\/\">AI safety<\/a>. The <a href=\"https:\/\/humancompatible.ai\/bibliography\">bibliography<\/a> from the Center for Human-Compatible AI at UC Berkeley provides some helpful reading recommendations. 
<\/li>\n<li>Learning more about developing expertise in China if you&#8217;re from the West \u2014 see our article on <a href=\"https:\/\/80000hours.org\/career-reviews\/china-specialist\/\">Improving China\u2013Western coordination on global catastrophic risks<\/a>.<\/li>\n<li>Developing expertise, experience, and connections for working on AI governance in general. For instance, you could:\n<ul>\n<li>Read our <a href=\"https:\/\/80000hours.org\/articles\/ai-policy-guide\/\">AI policy and strategy guide<\/a>.<\/li>\n<li>Apply for funding for relevant study and training, e.g. from <a href=\"https:\/\/www.openphilanthropy.org\/focus\/global-catastrophic-risks\/potential-risks-advanced-artificial-intelligence\/funding-AI-policy-careers\">Coefficient Giving<\/a> or <a href=\"https:\/\/longtermrisk.org\/grantmaking\/\">Center on Long-Term Risk<\/a>.<\/li>\n<li>Apply to fellowship and internship programmes at relevant organisations, e.g. the <a href=\"https:\/\/governance.ai\/\">Centre for the Governance of AI<\/a>.<\/li>\n<li>Attend relevant conferences and get in touch with people in the field.<\/li>\n<\/ul>\n<\/li>\n<li>Developing expertise, experience, and connections that could be useful for <a href=\"https:\/\/80000hours.org\/articles\/china-careers\/\">improving China\u2013Western coordination on global catastrophic risks<\/a> in general. For instance, you could:\n<ul>\n<li>Apply for scholarships from Peking University&#8217;s <a href=\"https:\/\/yenchingacademy.pku.edu.cn\/\">Yenching Academy<\/a> and\/or Tsinghua University&#8217;s <a href=\"https:\/\/80000hours.org\/2017\/06\/the-schwarzman-scholarship-an-exciting-opportunity-to-learn-more-about-china-and-get-a-masters-in-global-affairs\/\">Schwarzman College<\/a>.
<\/li>\n<li>Improve your Chinese\u2013English language and translation skills.<\/li>\n<li>Work at a policy or political risk consultancy in China.<\/li>\n<li>Get involved in the <a href=\"https:\/\/80000hours.org\/community\/\">effective altruism community<\/a>.<\/li>\n<\/ul>\n<\/li>\n<li>Trying out a short-term research project, e.g. on one of the questions that <a href=\"https:\/\/80000hours.org\/articles\/research-questions-by-discipline\/#china-studies\">we think could have a large social impact<\/a>, or on one of those discussed in the CSET report on <a href=\"https:\/\/cset.georgetown.edu\/publication\/ai-safety-security-and-stability-among-great-powers-options-challenges-and-lessons-learned-for-pragmatic-engagement\/\">AI Safety, Security, and Stability Among Great Powers<\/a>.<\/li>\n<li>Checking our <a href=\"https:\/\/80000hours.org\/job-board\/ai-safety-policy\/\">job board<\/a> for the latest opportunities.<\/li>\n<li>Applying to <a href=\"\/speak-with-us\/?int_campaign=career-review\">speak with us one-on-one<\/a> about your plans.<\/li>\n<\/ul>\n<h2><span id=\"additional-resources\" class=\"toc-anchor\"><\/span>Additional resources<\/h2>\n<h3><span id=\"top-recommendations\" class=\"toc-anchor\"><\/span>Top recommendations<\/h3>\n<ul>\n<li><a href=\"https:\/\/arxiv.org\/abs\/1907.04534\">The Role of Cooperation in Responsible AI Development<\/a> by Amanda Askell, Miles Brundage, and Gillian Hadfield <\/li>\n<li><a href=\"https:\/\/cset.georgetown.edu\/wp-content\/uploads\/CSET-AI-Safety-Security-and-Stability-Among-the-Great-Powers.pdf\">AI Safety, Security, and Stability Among Great Powers<\/a> by Andrew Imbrie and Elsa B.
Kania <\/li>\n<li><a href=\"https:\/\/emergingtechpolicy.org\/areas\/china-policy\/?utm_source=80000hours.org&amp;utm_medium=career-reviews-china-related-ai-safety-and-governance-paths\">China and tech policy resources, think tanks, fellowships, and master&#8217;s programs<\/a><\/li>\n<li>For further recommendations, see <a href=\"https:\/\/www.fhi.ox.ac.uk\/wp-content\/uploads\/China-AI-Syllabus-1.pdf\">Syllabus: Artificial Intelligence and China<\/a> by Jeffrey Ding, Sophie-Charlotte Fischer, Brian Tse, and Chris Byrd <\/li>\n<\/ul>\n<h3><span id=\"further-recommendations\" class=\"toc-anchor\"><\/span>Further recommendations<\/h3>\n<ul>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/helen-toner-ai-policy-washington-dc\/\">Helen Toner on the geopolitics of AI in China and the Middle East<\/a><\/li>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/sihao-huang-china-ai-capabilities\/\">Sihao Huang on the risk that US\u2013China AI competition leads to war<\/a><\/li>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/jeffrey-ding-china-ai-dream\/\">Jeff Ding on China, its AI dream, and what we get wrong about both<\/a><\/li>\n<li>Podcast: <a href=\"https:\/\/80000hours.org\/podcast\/episodes\/daniel-kokotajlo-ai-2027-updates-china-robot-economy\/\">Daniel Kokotajlo on what a hyperspeed robot economy might look like<\/a><\/li>\n<li><a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1111\/1758-5899.12403\">Strategic Implications of Openness in AI Development<\/a> by Nick Bostrom <\/li>\n<li><a href=\"https:\/\/www.cnas.org\/publications\/reports\/technology-roulette\">Technology Roulette<\/a> by Richard Danzig<\/li>\n<li><a href=\"http:\/\/ciss.tsinghua.edu.cn\/info\/eyjdt\/1368\">Some Thoughts and Analyses on How AI Will Impact International Relations<\/a> by Fu Ying <\/li>\n<li><a 
href=\"https:\/\/www.cnas.org\/publications\/reports\/ai-and-international-stability-risks-and-confidence-building-measures\">AI and International Stability: Risks and Confidence-Building Measures<\/a> by Michael Horowitz and Paul Scharre <\/li>\n<li><a href=\"https:\/\/book.douban.com\/subject\/35149258\/\">AI\u65b0\u751f (\u7834\u89e3\u4eba\u673a\u5171\u5b58\u5bc6\u7801\u4eba\u7c7b\u4e00\u4e2a\u5927\u95ee\u9898)<\/a> &#8212; Chinese translation of Stuart Russell&#8217;s <em>Human Compatible<\/em> <\/li>\n<li>Stuart Russell and Zhang Hongjiang in dialogue at the 2021 conference of the Beijing Academy of AI (Mandarin), <a href=\"https:\/\/hub.baai.ac.cn\/view\/8754\">\u5c16\u5cf0\u5bf9\u8bdd\uff08\u5f20\u5b8f\u6c5f | \u667a\u6e90\u7814\u7a76\u9662\u7406\u4e8b\u957f\uff0cStuart Russell | \u52a0\u5dde\u5927\u5b66\u4f2f\u514b\u5229\u5206\u6821\u6559\u6388\uff09<\/a><\/li>\n<\/ul>\n","protected":false},"author":422,"featured_media":76231,"parent":0,"menu_order":0,"template":"template-standard-article.php","meta":{"_acf_changed":false,"footnotes":"[fn 1] According to [most recent (2019) World Bank figures](https:\/\/data.worldbank.org\/indicator\/NY.GDP.MKTP.CD?most_recent_value_desc=true)[\/fn]\r\n\r\n[fn 2] [CSET Issue Brief: Chinese Public AI R&D Spending: Provisional Findings](https:\/\/cset.georgetown.edu\/wp-content\/uploads\/Chinese-Public-AI-RD-Spending-Provisional-Findings-1.pdf)[\/fn]\r\n\r\n[fn 3] See [AI Governance: Opportunity and Theory of Impact](https:\/\/forum.effectivealtruism.org\/posts\/42reWndoTEhFqu6T8\/ai-governance-opportunity-and-theory-of-impact) on the EA Forum for further discussion of risks.[\/fn]\r\n\r\n[fn 4] For instance, a [report](https:\/\/www.nscai.gov\/2021-final-report\/) by the US National Security Commission on AI published in March 2021 asserts that \"whoever translates AI developments into applications first will have the advantage\" (p. 28).
China's 2017 [New Generation Artificial Intelligence Development Plan](https:\/\/www.newamerica.org\/cybersecurity-initiative\/digichina\/blog\/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017\/) said that China must \"firmly seize the strategic initiative in the new stage of international competition in AI development.\" However, it is possible that AI companies may not strongly value being the first to develop a particular AI system; entering a market after the front-runner can have advantages such as free-riding on the front-runner's R&D or benefiting from more information about the relevant market.[\/fn]\r\n\r\n[fn 5] See for instance, [AI and International Stability: Risks and Confidence-Building Measures](https:\/\/www.cnas.org\/publications\/reports\/ai-and-international-stability-risks-and-confidence-building-measures) by Michael Horowitz and Paul Scharre. However, while there are first-mover advantages in many industries (for example, pharmaceuticals), they don't always lead to a race to the bottom on safety, because of mechanisms such as market forces, regulation, and legal liability.[\/fn]\r\n\r\n[fn 6] There is debate among scholars over the relative importance of ideological differences in China-US tensions. 
See \"[The emerging ideological security dilemma between China and the U.S.](https:\/\/www.sis.pku.edu.cn\/teachers\/docs\/20210206105756708573.pdf)\" by Dalei Jie at Peking University, and the blog post \"[Are the United States and China in an Ideological Competition?](https:\/\/www.csis.org\/blogs\/freeman-chair-blog\/are-united-states-and-china-ideological-competition)\" from the Center for Strategic and International Studies.[\/fn]\r\n\r\n[fn 7] See page 43 of [China AI-Brain Research: Brain-Inspired AI, Connectomics, Brain-Computer Interfaces](https:\/\/cset.georgetown.edu\/research\/china-ai-brain-research\/) from the Center for Security and Emerging Technology.[\/fn]\r\n\r\n[fn 8] This is based on the number of people working on China\u2013Western coordination at prominent AI governance organisations such as [GovAI](https:\/\/www.governance.ai\/people) and the [Partnership on AI](https:\/\/partnershiponai.org\/team\/). As of October 2021, our count suggests three out of 60 staff members (including affiliates and research fellows) at these two organisations are working on China-related topics.[\/fn] \r\n\r\n[fn 9] To illustrate this, it seems plausible that a smart, motivated person would have struggled to make the Industrial Revolution go better or worse from a long-term perspective.[\/fn]\r\n\r\n[fn 10] A dual-use technology is one that can be simultaneously applied in both commercial and defence domains. The term can also be used more generally to describe technology that is both beneficial and harmful, depending on how it is used. (See page 39 of Jade Leung's 2019 PhD thesis, *[Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies](https:\/\/ora.ox.ac.uk\/objects\/uuid:ea3c7cb8-2464-45f1-a47c-c7b568f27665)*.)
Both these interpretations can be seen as applicable to AI.[\/fn]\r\n\r\n[fn 11] See for instance, [\u51fa\u56fd\u7559\u5b66\u7684\u4eba\u8fd8\u80fd\u8003\u516c\u52a1\u5458\u5417?](https:\/\/www.zhihu.com\/question\/22004303) (in Mandarin: \"Can people who have studied abroad still take the civil service exam?\").[\/fn]\r\n\r\n[fn 12] An example of someone who reached an influential position in the US government after spending time in China is [Matthew Pottinger](https:\/\/www.wsj.com\/articles\/SB113461636659623128). Pottinger, who was Deputy National Security Adviser from September 2019 to January 2021, was a journalist in Beijing with *The Wall Street Journal* and Reuters from 1998 to 2005.[\/fn]\r\n\r\n[fn 13] For the Chinese Foreign Ministry Spokesperson's remarks on the sanctions, see [here](https:\/\/www.fmprc.gov.cn\/mfa_eng\/xwfw_665399\/s2510_665401\/t1863106.shtml). Criticism of the escalation from the Center for Strategic & International Studies can be found [here](https:\/\/www.csis.org\/analysis\/we-stand-merics).[\/fn]\r\n\r\n[fn 14] Some other ideas that may be promising but that we have not had a chance to investigate in detail are treated in the following section.[\/fn]\r\n\r\n[fn 15] See examples for [Baidu](https:\/\/arxiv.org\/abs\/1907.05418), [Huawei](https:\/\/arxiv.org\/pdf\/1908.08705.pdf), and [Tencent](https:\/\/keenlab.tencent.com\/en\/2019\/03\/29\/Tencent-Keen-Security-Lab-Experimental-Security-Research-of-Tesla-Autopilot\/).[\/fn]\r\n\r\n[fn 16] Foreign citizens should note that they will generally need two years of relevant work experience to get a work visa in China (though this requirement is [reduced to one year](https:\/\/www.china-briefing.com\/news\/chinas-z-visa-masters-graduates\/) for master's graduates from Chinese universities and well-known overseas universities).[\/fn]\r\n\r\n[fn 17] For instance, [BAAI has advertised positions](https:\/\/web.archive.org\/web\/20211015062425\/http:\/\/job.baai.ac.cn\/req\/details\/?page=social&id=205) for industry researchers covering technical trends in AI,
including developments at top Western labs. To the extent that Western labs are producing quality technical safety research, industry researchers in Chinese labs could have the opportunity to showcase such research and encourage its application in their own organisations.[\/fn]"},"categories":[1353,315,1386,1421,1403,1369,291],"class_list":["post-74634","career_profile","type-career_profile","status-publish","has-post-thumbnail","hentry","category-ai","category-career-advice-strategy","category-career-paths","category-china-related-ai-safety-governance-career-paths","category-china-western-coordination-career-paths","category-machine-learning-skill-building-and-career-capital","category-world-problems"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>China-related AI safety and governance paths | Career review | 80,000 Hours<\/title>\n<meta name=\"description\" content=\"China is a leading country in AI. If you have a background in China you may be able to help ensure its development benefits everyone.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"China-related AI safety and governance paths\" \/>\n<meta property=\"og:description\" content=\"Do you have a background in China? 
If so, you could have the opportunity to help solve one of the world\u2019s biggest problems.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/\" \/>\n<meta property=\"og:site_name\" content=\"80,000 Hours\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/80000Hours\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-07T14:08:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/02\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1708\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:title\" content=\"China-related AI safety and governance paths\" \/>\n<meta name=\"twitter:description\" content=\"Do you have a background in China? If so, you could have the opportunity to help solve one of the world\u2019s biggest problems.\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/03\/juniperphoton-tpYURH-D-I8-unsplash-720x448-1.jpeg\" \/>\n<meta name=\"twitter:site\" content=\"@80000hours\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"36 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/\"},\"author\":{\"name\":\"Hide author\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/5c6b8b7fad8d98e829f103ac11077e8c\"},\"headline\":\"China-related AI safety and governance&nbsp;paths\",\"datePublished\":\"2022-02-10T16:35:21+00:00\",\"dateModified\":\"2026-04-07T14:08:07+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/\"},\"wordCount\":7365,\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2022\\\/02\\\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg\",\"articleSection\":[\"AI\",\"Career advice &amp; strategy\",\"Career paths\",\"China-related AI safety &amp; governance\",\"China-Western coordination\",\"Machine learning\",\"World problems\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/\",\"name\":\"China-related AI safety and governance paths | Career review | 80,000 
Hours\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2022\\\/02\\\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg\",\"datePublished\":\"2022-02-10T16:35:21+00:00\",\"dateModified\":\"2026-04-07T14:08:07+00:00\",\"description\":\"China is a leading country in AI. If you have a background in China you may be able to help ensure its development benefits everyone.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/#primaryimage\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2022\\\/02\\\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2022\\\/02\\\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg\",\"width\":2560,\"height\":1708},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/career-reviews\\\/china-related-ai-safety-and-governance-paths\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/80000hours.org\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"China-related AI safety and 
governance&nbsp;paths\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#website\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"name\":\"80,000 Hours\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/80000hours.org\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#organization\",\"name\":\"80,000 Hours\",\"url\":\"https:\\\/\\\/80000hours.org\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"contentUrl\":\"https:\\\/\\\/80000hours.org\\\/wp-content\\\/uploads\\\/2018\\\/07\\\/og-logo_0.png\",\"width\":1500,\"height\":785,\"caption\":\"80,000 Hours\"},\"image\":{\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/80000Hours\",\"https:\\\/\\\/x.com\\\/80000hours\",\"https:\\\/\\\/www.youtube.com\\\/user\\\/eightythousandhours\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/80000hours.org\\\/#\\\/schema\\\/person\\\/5c6b8b7fad8d98e829f103ac11077e8c\",\"name\":\"Hide 
author\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4ae4446f148f866dbfb4bca76c5006226617665555491aca2e6b84ee807b6bb8?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4ae4446f148f866dbfb4bca76c5006226617665555491aca2e6b84ee807b6bb8?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4ae4446f148f866dbfb4bca76c5006226617665555491aca2e6b84ee807b6bb8?s=96&d=mm&r=g\",\"caption\":\"Hide author\"},\"url\":\"https:\\\/\\\/80000hours.org\\\/author\\\/guest-author\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"China-related AI safety and governance paths | Career review | 80,000 Hours","description":"China is a leading country in AI. If you have a background in China you may be able to help ensure its development benefits everyone.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/","og_locale":"en_US","og_type":"article","og_title":"China-related AI safety and governance paths","og_description":"Do you have a background in China? If so, you could have the opportunity to help solve one of the world\u2019s biggest problems.","og_url":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/","og_site_name":"80,000 Hours","article_publisher":"https:\/\/www.facebook.com\/80000Hours","article_modified_time":"2026-04-07T14:08:07+00:00","og_image":[{"width":2560,"height":1708,"url":"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/02\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_title":"China-related AI safety and governance paths","twitter_description":"Do you have a background in China? 
If so, you could have the opportunity to help solve one of the world\u2019s biggest problems.","twitter_image":"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/03\/juniperphoton-tpYURH-D-I8-unsplash-720x448-1.jpeg","twitter_site":"@80000hours","twitter_misc":{"Est. reading time":"36 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/#article","isPartOf":{"@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/"},"author":{"name":"Hide author","@id":"https:\/\/80000hours.org\/#\/schema\/person\/5c6b8b7fad8d98e829f103ac11077e8c"},"headline":"China-related AI safety and governance&nbsp;paths","datePublished":"2022-02-10T16:35:21+00:00","dateModified":"2026-04-07T14:08:07+00:00","mainEntityOfPage":{"@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/"},"wordCount":7365,"publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"image":{"@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/02\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg","articleSection":["AI","Career advice &amp; strategy","Career paths","China-related AI safety &amp; governance","China-Western coordination","Machine learning","World problems"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/","url":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/","name":"China-related AI safety and governance paths | Career review | 80,000 
Hours","isPartOf":{"@id":"https:\/\/80000hours.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/#primaryimage"},"image":{"@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/#primaryimage"},"thumbnailUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/02\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg","datePublished":"2022-02-10T16:35:21+00:00","dateModified":"2026-04-07T14:08:07+00:00","description":"China is a leading country in AI. If you have a background in China you may be able to help ensure its development benefits everyone.","breadcrumb":{"@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/#primaryimage","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/02\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2022\/02\/juniperphoton-tpYURH-D-I8-unsplash-scaled.jpg","width":2560,"height":1708},{"@type":"BreadcrumbList","@id":"https:\/\/80000hours.org\/career-reviews\/china-related-ai-safety-and-governance-paths\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/80000hours.org\/"},{"@type":"ListItem","position":2,"name":"China-related AI safety and governance&nbsp;paths"}]},{"@type":"WebSite","@id":"https:\/\/80000hours.org\/#website","url":"https:\/\/80000hours.org\/","name":"80,000 
Hours","description":"","publisher":{"@id":"https:\/\/80000hours.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/80000hours.org\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/80000hours.org\/#organization","name":"80,000 Hours","url":"https:\/\/80000hours.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/","url":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","contentUrl":"https:\/\/80000hours.org\/wp-content\/uploads\/2018\/07\/og-logo_0.png","width":1500,"height":785,"caption":"80,000 Hours"},"image":{"@id":"https:\/\/80000hours.org\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/80000Hours","https:\/\/x.com\/80000hours","https:\/\/www.youtube.com\/user\/eightythousandhours"]},{"@type":"Person","@id":"https:\/\/80000hours.org\/#\/schema\/person\/5c6b8b7fad8d98e829f103ac11077e8c","name":"Hide author","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/4ae4446f148f866dbfb4bca76c5006226617665555491aca2e6b84ee807b6bb8?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/4ae4446f148f866dbfb4bca76c5006226617665555491aca2e6b84ee807b6bb8?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/4ae4446f148f866dbfb4bca76c5006226617665555491aca2e6b84ee807b6bb8?s=96&d=mm&r=g","caption":"Hide 
author"},"url":"https:\/\/80000hours.org\/author\/guest-author\/"}]}},"_links":{"self":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/career_profile\/74634","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/career_profile"}],"about":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/types\/career_profile"}],"author":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/users\/422"}],"version-history":[{"count":2,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/career_profile\/74634\/revisions"}],"predecessor-version":[{"id":95916,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/career_profile\/74634\/revisions\/95916"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media\/76231"}],"wp:attachment":[{"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/media?parent=74634"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/80000hours.org\/wp-json\/wp\/v2\/categories?post=74634"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}