Future of Humanity Institute – Written evidence (AIC0103)

 

  1. Background on authors

The Future of Humanity Institute at the University of Oxford researches both the technical and the societal dimensions of advanced artificial intelligence (AI) and other issues that bear on the prospects for humanity’s future. Our staff brings together computer scientists, philosophers, mathematicians, social scientists, lawyers, and engineers to shed light on these issues. The founder and Director of the institute, Nick Bostrom, is the author of the best-selling 2014 book Superintelligence, which played a key role in fostering recent discussions around the long-term trajectory of AI. In 2016, our Institute launched the Strategic Artificial Intelligence Research Centre to analyze the policy implications of AI and develop recommendations for governments and industry. We are pleased to have the opportunity to share the perspective of our institute on several topics you are investigating.

  2. Uncertainty related to the nature and timing of AI progress

2.1.            Relevant committee questions: “What is the current state of artificial intelligence and what factors have contributed to this?”; “How is it likely to develop over the next 5, 10 and 20 years?”; “What factors, technical or societal, will accelerate or hinder this development?”; “Is the current level of excitement which surrounds artificial intelligence warranted?”; “What role should the Government take in the development and use of artificial intelligence in the United Kingdom?”; “Should artificial intelligence be regulated?”; “If so, how?”

2.2.            Comments:

2.2.1.            Artificial intelligence is currently experiencing a period of rapid and exciting progress, fuelled by several factors: many new researchers are moving into the field, computing hardware has advanced significantly, data is far more abundant than before, algorithmic innovations have accumulated, and open source frameworks enable quicker replication of new ideas. More generally, AI research is benefiting from substantial public and private funding in various countries, with private funding playing an increasingly dominant role over the past five years. According to one estimate, global AI funding from tech companies, venture capitalists, and private equity firms was approximately £20 to £30 billion in 2016 (Bughin et al., 2017).

2.2.2.            From the 1960s to the 1990s, AI progress often failed to live up to lofty forecasts, but more recently the opposite has taken place: across a range of benchmarks, including computer performance at the game of Go and image recognition, even AI researchers have been surprised by the pace of developments. For example, on the ImageNet challenge (one measure of AI visual capabilities, in which images are classified into 1,000 different categories), the error rate of the best systems has dropped over the past several years from about 25% to well under 5%, roughly the level attained by a human. One way of characterizing recent progress is that “low level” cognitive tasks once considered recalcitrant to progress (such as visual perception) are yielding to modern machine learning approaches, which in turn are being combined with “high level” approaches, such as symbolic reasoning, that saw more success in earlier decades, bringing us closer to general purpose, integrated AI systems.

2.2.3.            Looking to the longer term, there is much uncertainty about the pace of AI progress. In the most authoritative survey of AI expert opinion to date, conducted by researchers at the Future of Humanity Institute, the non-profit AI Impacts, and Yale University (Grace et al., 2017), opinions on the future of AI varied widely among the 352 experts sampled. Human-level AI, defined here as the point at which AI systems are better than all humans at all tasks, is a policy-relevant event, as it will likely be associated with radical transformations in society, the economy, and other domains. The sampled AI researchers revealed a striking consensus about our uncertainty regarding when human-level AI will be developed. The aggregate view[1] of AI researchers gave a 25% chance of human-level AI arriving within 20 years, but also a 25% chance that it will not arrive within 100 years. Even allowing for additional uncertainty in these estimates, the policy-relevant conclusion is that we should not base our plans on any particular timeline for human-level AI: it may take many decades, but it may also arrive much sooner than many expect.
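
To make the aggregation method in footnote [1] concrete, the sketch below shows how individual timeline forecasts can be combined into a “mixture” distribution whose quantiles yield statements like those above. The three expert forecasts here are entirely hypothetical illustrations, not the survey’s data.

```python
# Illustrative sketch of the "mixture" aggregate described in footnote [1]:
# each (hypothetical) expert's forecast for the arrival of human-level AI is
# modelled as a lognormal CDF over years from now, and the aggregate CDF is
# simply the mean of the individual CDFs.
import numpy as np
from scipy.stats import lognorm

years = np.linspace(1, 200, 2000)  # horizon: 1 to 200 years from now

# Hypothetical (median_years, spread) pairs for three experts.
experts = [(30, 0.9), (50, 0.7), (120, 1.1)]

cdfs = np.array([lognorm.cdf(years, s=sigma, scale=median)
                 for median, sigma in experts])
mixture = cdfs.mean(axis=0)  # the "mixture" distribution

# Read off the years at which the aggregate probability crosses 25% and 75%.
for p in (0.25, 0.75):
    year = years[np.searchsorted(mixture, p)]
    print(f"Aggregate gives {p:.0%} probability within ~{year:.0f} years")
```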

2.2.4.            The surveyed researchers also revealed a nuanced perspective on the potential consequences of human-level AI, with the median respondent giving a 45% probability of a “good” or “extremely good” outcome for humanity, but also probabilities of 10% and 5%, respectively, of a “bad” or an “extremely bad (e.g., human extinction)” outcome. For discussion of these risks, see Bostrom (2014).

2.3.            Recommendations:

2.3.1.            In light of the range of expert opinions on future AI developments, and the benefits of preparedness, we recommend that the UK government not make strong assumptions about how quickly AI will develop. On issues such as technological displacement in the labour market in the coming decades, underestimating and under-preparing for AI’s impact could result in major societal disruption and lost opportunities for shared economic and social gains. For example, the aforementioned survey by Grace et al. (2017) found that experts foresee many jobs, such as retail jobs and truck driving, being susceptible to automation in the next two decades.

2.3.2.            However, we also note that few researchers think it likely that human-level AI will be developed in the very near future (less than ten years), and we recommend not taking substantial action motivated by these concerns until robust policy proposals for how best to navigate this transition have been developed and vetted. For discussion of desiderata for such policy proposals, see a recent working paper by Bostrom, Dafoe, and Flynn (2016).

  3. Near-term challenges associated with AI

3.1.            Relevant committee questions: “How can the general public best be prepared for more widespread use of artificial intelligence?”; “What are the ethical implications of the development and use of artificial intelligence?”; “How can any negative implications be resolved?”; “What role should the Government take in the development and use of artificial intelligence in the United Kingdom?”

3.2.            Comments:

3.2.1.            One area in which AI is likely to have a significant near-term impact is the nature of work. Experts differ on the speed with which AI-related job displacement might occur, and on the extent and nature of the jobs that AI will create. But it is widely believed in the AI community that large impacts are likely over the next few decades, and our survey discussed above suggests high confidence that some retail jobs will be susceptible to automation.

3.2.2.            AI is likely to generate myriad other social and economic challenges. For example, there are legitimate and challenging political and legal issues pertaining to the appropriate development and use of autonomous vehicles, the acquisition, use, and ownership of people’s data, and the use of AI in important decision making contexts such as the granting of loans and parole.

3.2.3.            AI is likely to have potent security implications, including beneficial applications such as more effective cyber-defenses, as well as myriad possible malicious uses by terrorists and criminals. Some novel forms of attack made possible by AI, such as large-scale, highly effective automated “spear phishing” and delivery of lethal force by repurposed consumer drones, are troubling. The government will need foresight to realize these positive applications of AI to security and prevent or mitigate the consequences of the negative applications. We outline these concerns in a forthcoming public report, based on a February 2017 workshop.

3.3.            Recommendations:

3.3.1.            We recommend the UK government prepare for the possibility of significant job displacement, as well as job creation, as a result of the deployment of AI in the coming decades. We recommend that the UK government consult with (among others) experts at the University of Oxford, such as Michael Osborne and Carl Frey, who have done seminal work on this topic, and reevaluate education and job retraining programs in light of expert views on the future of AI-related job displacement.

3.3.2.            We recommend that the UK government pursue novel, privacy-preserving data governance systems so that the benefits of AI in health research, security, and other areas are realized while individual data receives appropriate protection. Ongoing work in the research area of secure and private machine learning (Papernot and McDaniel et al., 2016), for example, is potentially useful as the UK government seeks to be a leader in spurring AI innovation while protecting important societal values. Likewise, in the area of crime and terrorism prevention, AI has the potential to be a boon for security, and the UK can lead the way in developing innovative approaches to privacy-preserving, AI-augmented surveillance.
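
One simple building block from the privacy-preserving analysis literature is the Laplace mechanism for differential privacy. The sketch below is an illustration only, not a complete data governance system; the dataset, query, and epsilon value are hypothetical.

```python
# Minimal sketch of the Laplace mechanism: release an aggregate statistic
# with calibrated noise so that no individual's presence in the dataset can
# be confidently inferred from the output.
import numpy as np

rng = np.random.default_rng(0)

def private_count(values, predicate, epsilon):
    """Release a count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical patient ages; query: how many are over 65?
ages = [34, 71, 68, 45, 80, 59, 66, 23]
print(private_count(ages, lambda a: a > 65, epsilon=0.5))
```

Smaller values of epsilon give stronger privacy at the cost of noisier answers, a trade-off that any data governance regime would need to set deliberately.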

3.3.3.            We recommend that the UK government consider the risks of AI being used for harmful purposes by state and non-state actors and take steps to better understand and reduce such risks. For example, some promising interventions would be “red team” exercises to determine the threats to government systems, analysis of lessons learned from other dual use technologies such as biotechnology, and exploration of the legal implications of AI-enabled threats (for example, how “data poisoning” attacks aimed at machine learning systems might be treated under existing or future laws).
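
To illustrate the kind of threat referred to above, the sketch below shows a simple label-flipping form of “data poisoning”: an attacker who can corrupt a fraction of training labels degrades the accuracy of the deployed model. All data here is synthetic, and the 30% corruption rate is an arbitrary choice for illustration.

```python
# Hypothetical data-poisoning (label-flipping) demonstration on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 30% of the training labels

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```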

3.3.4.            Furthermore, given how infrequently cybersecurity best practices are adopted by individuals and organizations, we suggest that the UK government treat recent proofs of concept of offensive AI applications in cybersecurity as a “wake-up call” regarding the pace of innovation in this space, and as a reason to increase its commitment to the promotion of cybersecurity best practices.

  4. Long-term challenges: building AI for the common good

4.1.            Relevant committee questions: “What are the ethical implications of the development and use of artificial intelligence?”; “How can any negative implications be resolved?”; “What role should the Government take in the development and use of artificial intelligence in the United Kingdom?”

4.2.            Comments:

4.2.1.            Over the long term, AI is likely to exceed human performance in most cognitive domains. This poses substantial safety risks, described in detail in Bostrom (2014) and Amodei and Olah et al. (2016) and endorsed as worthy of study by thousands of AI researchers (Future of Life Institute, 2015, 2017). One challenge, among others, is to ensure that the (implicit) goals of extremely competent AI systems are precisely what, upon reflection, humans would want them to be. This challenge was foreseen by some pioneers of AI and cybernetics, such as Norbert Wiener, who wrote in 1960: “We had better be quite sure that the purpose put into the machine is the purpose which we really desire.”

4.2.2.            Active research on AI safety is being conducted by labs in industry (including DeepMind in London), at non-profits (such as OpenAI in the United States), and in academia (including at UC Berkeley, the University of Montreal, and the Future of Humanity Institute in Oxford). While these problems seem solvable in principle (we are not aware of any reason why an arbitrarily intelligent AI system, appropriately designed, could not be aligned with human values), in practice addressing them seems likely to require substantial research, foresight, and prudence.

4.2.3.            In the coming decades, AI developers will face a variety of incentives and pressures. Scientific, economic, and other forms of competition, especially between countries, could lead to substantial pressure to quickly develop and deploy advanced AI systems. These pressures risk leading to insufficient attention to safety and other social considerations. We will be better off if leading AI developers in all countries commit to, and are able to work towards, developing AI for the common good.

4.2.4.            If designed and governed appropriately, AI has the potential to be extremely positive-sum in its societal impacts: for example, it may enable rapid economic growth and improved health. Ensuring these benefits are realized is an additional reason to pursue cooperative development of AI and avoid potentially dangerous racing.

4.3.            Recommendations:

4.3.1.            We recommend that the UK government step into a global leadership role in developing international norms and institutions for building AI for the common good, that is, in a way that benefits humanity as a whole. This common good principle was articulated by Bostrom (2017), endorsed by many signatories in the AI community in the Asilomar AI Principles (Future of Life Institute, 2017), and discussed further in a report from the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (2016). The UK government could begin by making a commitment to foster AI research and development for the common good. Such a commitment would signal the UK’s leadership in AI governance, commensurate with its prominent role in AI and AI safety research. What specifically this commitment should entail, and how best to realize it, will require creative exploration in partnership with industry, researchers, the public, and other countries. A key institution to collaborate with on this front is the Partnership on Artificial Intelligence to Benefit People and Society, which includes many relevant companies, as well as non-profits such as the Future of Humanity Institute, as partners.

4.3.2.            We additionally recommend that the UK government explore the possibility of creating or joining international AI research and development efforts, choosing which projects to support in part on the basis of their commitment to the common good. The UK government would thereby contribute to building beneficial norms and institutions that promote international cooperation on developing AI. The UK government should also support existing efforts toward international dialogue and governance on AI, such as those being promoted by the United Nations.

  5. Research areas appropriate for public support

5.1.            Relevant committee questions: “In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable?”; “When should it not be permissible?”; “How can the general public best be prepared for more widespread use of artificial intelligence?”; “What are the ethical implications of the development and use of artificial intelligence?”; “How can any negative implications be resolved?”; “What role should the Government take in the development and use of artificial intelligence in the United Kingdom?”

5.2.            Comments:

5.2.1.            The UK is in a leading position today in the field of AI, and has an opportunity to build on this lead with a long-term commitment to AI research and development.

5.2.2.            While AI safety and policy research are currently being conducted, in part, by private actors, the level of investment is probably socially suboptimal, given that this area generates substantial global positive externalities. Relatedly, some AI applications that are unlikely to be immediately profitable (such as applications specifically aimed at achieving the UN Sustainable Development Goals) are opportunities for public investment to ensure that these public goods are created. Finally, research aimed at better understanding the policy implications of AI, while again supported in part by private actors, is clearly in the broad public interest and may currently be undersupplied.

5.3.            Recommendations:

5.3.1.            The UK government should double down on its strong competitive position in AI by investing substantially in AI research, development, and education. The UK government should also develop a robust, long-term funding portfolio supporting research on AI safety, policy, and socially beneficial applications. Examples of technical safety research include work on building AI systems that cooperatively learn human preferences and social norms (Hadfield-Menell et al., 2016; Christiano et al., 2017) and on designing AI systems that are reliable even under adversarial attack (Papernot and McDaniel et al., 2016; Amodei and Olah et al., 2016). Examples of policy research include characterizing the potential global externalities from AI development and crafting institutions for international cooperation. These investments should be informed by ongoing dialogue with experts in AI and other areas, in order to ensure that new government funding complements existing research trajectories in industry and academia.
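
As a concrete sense of the preference-learning research cited above, the sketch below illustrates the core idea in Christiano et al. (2017): fit a reward model from pairwise comparisons, where the probability that one option is preferred over another is modelled as a logistic function of their reward difference. The linear reward model and synthetic “human” preferences here are hypothetical simplifications of that work.

```python
# Minimal sketch of reward learning from pairwise preferences:
# P(A preferred over B) = sigmoid(r(A) - r(B)), with r a linear model.
import numpy as np

rng = np.random.default_rng(0)
dim = 5
true_w = rng.normal(size=dim)  # the simulated human's hidden reward weights

# Synthetic preference dataset: pairs of feature vectors plus a label
# indicating which element of each pair the simulated human preferred.
A = rng.normal(size=(500, dim))
B = rng.normal(size=(500, dim))
prefs = (A @ true_w > B @ true_w).astype(float)

# Fit the reward weights by gradient descent on the logistic loss.
w = np.zeros(dim)
lr = 0.1
for _ in range(500):
    logits = (A - B) @ w
    p = 1.0 / (1.0 + np.exp(-logits))            # P(A preferred over B)
    grad = (A - B).T @ (p - prefs) / len(prefs)  # logistic-loss gradient
    w -= lr * grad

cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print(f"cosine similarity between learned and true reward: {cos:.2f}")
```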

  6. Other recommended government actions

6.1.            Relevant committee questions: “What role should the Government take in the development and use of artificial intelligence in the United Kingdom?”

6.2.            Comments:

6.2.1.            There is an ongoing flow of talented AI researchers from academia into industry, and because demand exceeds supply, these researchers can currently command very high salaries. These salaries, as well as other benefits of working in industry (such as proximity to other talented researchers and access to large amounts of data and computing power), present a formidable obstacle to the UK government (and academia) in recruiting AI experts, especially in the area of machine learning.

6.2.2.            At the same time, as AI is increasingly adopted in society, it is perhaps more important than ever before that the UK government recruit such experts, suggesting a need for creative thinking.

6.3.            Recommendations:

6.3.1.            We recommend that the UK government consider creative approaches for recruiting AI experts (including both technical and policy experts) into government, in order to put itself in a better position to proactively address problems and exploit opportunities as they arise. We recommend that the government consider lessons learned from other domains, such as finance and law, where competition with the private sector for talent has been fierce, and consider novel initiatives such as granting a department special authority to pay higher-than-usual salaries.

6.3.2.            Finally, we note that beyond salaries, it will be important to motivate recruits with an exciting mission (Brundage and Bryson, 2016). The formation of a new agency for the purpose of developing and funding socially beneficial AI, and steering AI’s social impacts in a positive direction, might be one such approach. A standing Commission on Artificial Intelligence, as previously suggested in written evidence by the Future of Humanity Institute and others, and endorsed in the Science and Technology Committee’s report on robotics and artificial intelligence, could be a focal point for recruitment.

  7. Offer of further dialogue: We welcome the opportunity to provide further information. The Future of Humanity Institute is particularly well placed to help the Lords Select Committee understand issues related to long-term AI safety, the range of expert opinions on AI’s future, the near-term intersection of AI and security, international dynamics around AI, and the global governance of AI.

 

Miles Brundage and Allan Dafoe

On behalf of the Future of Humanity Institute

University of Oxford

https://www.fhi.ox.ac.uk

 

References

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. 2016. “Concrete Problems in AI Safety,” available online at https://arxiv.org/abs/1606.06565

 

Bostrom, N. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press: Oxford.

 

Bostrom, N. 2017. “Strategic Implications of Openness in AI Development,” Global Policy, Vol. 8, Issue 2, May 2017, pages 135-148, available online at http://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12403/full

 

Bostrom, N., Dafoe, A., and Flynn, C. 2016. “Policy Desiderata in the Development of Machine Superintelligence,” working paper, Future of Humanity Institute, available online at https://nickbostrom.com/papers/aipolicy.pdf

 

Brundage, M. and Bryson, J. 2016. “Smart Policies for Artificial Intelligence,” available online at https://arxiv.org/abs/1608.08196

 

Bughin, J., Hazan, E., Ramaswamy, S., Chui, M., Allas, T., Dahlström, P., Henke, N., and Trench, M. 2017. “Artificial Intelligence: The Next Digital Frontier?,” McKinsey Global Institute discussion paper, available online at http://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/how-artificial-intelligence-can-deliver-real-value-to-companies

 

Christiano, P., Leike, J., Brown, T., Martic, M., Legg, S., and Amodei, D. 2017. “Deep Reinforcement Learning from Human Preferences,” available online at https://arxiv.org/abs/1706.03741

 

Future of Life Institute, 2015. “An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence,” text and signatories available online at https://futureoflife.org/ai-open-letter/

 

Future of Life Institute, 2017. “Asilomar AI Principles,” text and signatories available online at https://futureoflife.org/ai-principles/

 

Grace, K., Salvatier, J., Dafoe, A., Zhang, B., and Evans, O. 2017. “When Will AI Exceed Human Performance? Evidence from AI Experts,” available online at https://arxiv.org/abs/1705.08807

 

Hadfield-Menell, D., Dragan, A., Abbeel, P., and Russell, S. 2016. “Cooperative Inverse Reinforcement Learning,” available online at https://arxiv.org/abs/1606.03137

 

Papernot, N., McDaniel, P., Sinha, A., and Wellman, M. 2016. “Towards the Science of Security and Privacy in Machine Learning,” available online at https://arxiv.org/abs/1611.03814

 

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, 2016. Ethically Aligned Design: A Vision for Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1. IEEE. Available online at http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

5 September 2017

 


[1] The mean of the individual cumulative distribution function estimates, also called the “mixture” distribution. The median is similar.