08 December 2017 - 10 December 2017

Machine Learning and Artificial Intelligence: how do we make sure technology serves the open society?

Chair: Professor Sir Nigel Shadbolt PhD FRS FREng FBCS

There has been wide discussion of the impact of AI on the economy and the workforce at one end of the spectrum and on the possible coming singularity of humans and machines at the other. Relatively little work has been done to date on the impact of AI on societies, governments and the relations between states, and between states and companies. This was the ground we sought to cover, taking account of the impact on the economy and work. Even though this was a group with significant expertise, occasionally we were reduced to silence because we just did not know how to answer the question or even which questions to ask – the potential disruptions trail uncertainty in their wake. The most recurrent conclusions during the conference were “we don’t know yet”; “we will have to wait and see”; “the predictions are all over the place” or some other variant of radical uncertainty. 

Within the group, there was little doubt that AI will be one of the major factors shaping the future of humanity and human societies, even though we rejected some of the more hyperbolic claims about the likely speed of progress. In the assessment of human affairs, it has for most of history been a safe bet to assume that the impact of a technology is overstated and that enduring human, social and cultural factors will remain the key determinants of what happens next. AI might be different, or even, as one participant put it, “the last technology that humans invent”.
Professor Sir Nigel Shadbolt chaired a group that included computer scientists, entrepreneurs, business leaders, politicians, journalists, researchers and a pastor whose work focuses on the relationship between AI and theology.

Analysis
In order to understand the potential impact of AI (used in this note to mean a set of advanced computing methods, including machine learning) on the world, we need clarity both on the progress made and on what remains science fiction. AI has been around a long time. The simulation of the processes of the human brain remains challenging and elusive. In the view of this group at least, we are a long way from achieving artificial general intelligence. We are not helped by the fact that we do not fully understand how many aspects of our own cognition and consciousness work.

However, advances in deep learning are leading a revolution in narrow artificial intelligence – the solving of specific pattern-recognition and predictive problems. The advances are possible thanks to the existence of vast amounts of digitally generated data, the cloud storage to hold it and the computing power to process it. (We should note that there is now also progress on making AI neural networks more computationally efficient and less hungry for data and storage.) Some human tasks – for example, object and facial recognition through computer vision – are becoming straightforward for AI, and superhuman capabilities are developing at pace, whilst others, superficially simple and easy for humans to accomplish, are proving slow to crack. AI’s role in the economy and society will develop at different speeds in different fields.

A big advantage for AI versus human intelligence is that AI only needs to learn something once in one place and, provided the insight is shared, the knowledge can then be incorporated everywhere at once and for ever. This contrasts with human knowledge – we learn things, we accumulate knowledge, we pass some of it on through our conversation, teaching, behaviour, stories, publications and broadcasts of one sort or another; and then we die. The diffusion of innovations between human beings is partial and social and comes in generational waves. AI diffuses innovation at machine speed. If sufficient data is available then AI will inexorably crack all the problems that are subject to machine learning and incorporate that knowledge in subsequent evolutions of the system.
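
The mechanics behind this point are concrete: a trained model is, in the end, just data, and copying it replicates the learning exactly, everywhere, at once. A minimal sketch in Python (scikit-learn and the toy dataset are illustrative assumptions on our part; the conference discussed no specific tools):

```python
# Illustrative sketch only: a fitted model is portable data, so an insight
# learned once can be replicated everywhere, exactly and indefinitely.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Learn something once, in one place.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Share the insight: serialising the model captures everything it has learned.
blob = pickle.dumps(model)

# Incorporate it everywhere at once, and for ever: every copy behaves
# identically to the original.
replica = pickle.loads(blob)
assert (replica.predict(X) == model.predict(X)).all()
```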

AI’s strength is also at the heart of its risk. Whereas the evolution of human intelligence is slow, the evolution of machine intelligence in narrow applications can be very rapid, with many thousands of generations of learning and improvement deployed in a short space of time. An AI function that departed from our intentions – almost certainly because they were not set with sufficient precision by human designers – could be a long way down the track before anyone notices. What AI comes up with in response to a problem can be surprising to us, because in a deep sense the neural processes of AI are profoundly alien and not constrained by human cultural expectations. In AI-driven design, for example, the structure has to meet the criteria set; it does not have to conform to an ingrained expectation of what the object should look like or which components it should contain. Examples were given of a complex wing structure, designed by an AI program to be as light and as strong as possible, that came out looking like an alien craft; or an integrated circuit where there was no Wi-Fi chip because the AI design program had found a more efficient way to incorporate that functionality through software. Referral back to human beings for human decisions in the development of AI could mitigate risks but will clearly also affect efficiency and speed of development.

This links to another observation made at the conference. We are concerned about algorithmic transparency and explainability because we are worried about algorithmic bias: we want to be able to understand why the machine has made its decisions. But algorithmically driven decision making also exposes the bias and illogicality of existing human decision-making processes. AI will increasingly confront us with the fact that many of our decisions are grounded in a difficult-to-separate mix of values and prejudices, rather than a precise calculation of maximum good or minimum harm from any action. AI action is probably already more explainable than many human actions. This leads to another major question raised by this conference: will AI become universal and international through its accelerated evolution, or will it more likely carry strong national and cultural characteristics from its makers? If national variants of AI emerge, will their characteristics be diluted or intensified through their evolution?

AI Nationalism
We explored whether or not AI will become a defining capability of state power, summed up in the phrase “AI nationalism”. AI may mean that population size and geographical area are no longer important in determining economic and hard power. By this reckoning, DeepMind, now Google DeepMind, is the most important company on Earth and might have been nationalised and treated as a vital national asset by the UK, rather than being sold to a company based in another country, even a close ally. Comparisons were made with the Manhattan Project to develop a nuclear weapon.

Smaller countries will face challenges in developing their AI capabilities because of the smaller markets and data pools available to them. They will probably need to align themselves (as with politics and defence) with a larger power, which is likely to mean either China or the United States. That said, smaller powers, regions and cities should look for AI advantage in unique data sets available to them alone, for example the health care data available in Wisconsin through EPIC (a US health care database), or the large sets of medical images potentially available in national health services like the UK’s. Data from particular social models, for example in Europe, might also have unique value, although it was also argued that Europe’s focus on privacy and regulation could hinder innovation. The UK could benefit from its scientific tradition of developing structured data sets, which are helpful in training machine learning systems. A complementary approach is represented by the open data movement, in which states regard core data assets as a public good forming part of the national infrastructure; their widespread availability then accelerates both public and private innovation and the delivery of services. It was also proposed that the UK should use visa and immigration schemes to attract more of the 100,000 or so people with the skills to matter globally on AI, allowing them easy entry and tax breaks.

All the indications are that China views the development of AI capabilities as the new space race, and one that it is in a position to win. DeepMind’s work on Go, a Chinese game, was the starting gun in that race. China’s most powerful advantage is access to data. This is based first of all on a lower prioritisation of privacy than in the West. Building on this, the major Chinese technology companies have been licensed by the state to develop spine services such as WeChat and Alibaba’s platforms, from which other services can flow. The connectivity of the various services generates and makes accessible a wealth of data. An example is the rapid development of mobile payment systems, as opposed to the much slower progress on this in the West. An authoritarian society like China could also have an advantage in rolling out machine learning-based systems such as autonomous vehicles. The nature of machine learning means that the more mistakes, or accidents, one is willing to tolerate in the short term, the quicker the progress to effective and therefore safer algorithms in the longer term. This is the kind of utilitarian calculation (the greatest good for the greatest number) that might be more acceptable to an authoritarian society than to an open one.

Data monopolies – “data-opolies” – and algorithmic innovation
A central element in AI nationalism is that AI (because of the importance of deep learning) is all about the data. Having lots of data gives a company and/or country a significant competitive edge in training algorithms. Many in the group saw AI algorithms ultimately becoming open source and freely available, with the competitive advantage from algorithmic innovation reduced to a low level. What would matter most in this scenario would be access to the right data. This underpinned, for example, Google’s decision to release the TensorFlow suite of capabilities as a public good: at the end of the day, it was claimed, Google had so much data that it would always be ahead of the game. The other big American tech platforms – Facebook (with Instagram and WhatsApp), Apple, and Microsoft through LinkedIn – also have significant data stores, of course. But these data stores remain partial and segmented compared to the dominance of the Chinese data market by a couple of companies and by the state. Some participants saw a new form of data colonisation by China and California, with the raw resource of data extracted globally, shipped back home and refined, and then sold back to the world as higher-value products, in this case AI algorithms. For those outside China or California, the argument ran, this model should be challenged and changed.
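
The TensorFlow point illustrates the data/algorithm asymmetry concretely: the open-sourced toolkit gives anyone the algorithmic machinery, but not the data that makes it valuable. A hedged sketch (the layer sizes and the commented-out training call are our illustrative assumptions, not anything Google has published about its own models):

```python
# Illustrative sketch: with TensorFlow released as open source, the
# algorithmic machinery is freely available to anyone in a few lines...
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# ...but the competitive advantage lies in what would go here: vast
# proprietary training data that the rest of the world does not hold.
# model.fit(proprietary_inputs, proprietary_labels)  # hypothetical data
```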

AI, data internationalism and privacy
AI nationalism is not inevitable. An alternative is to campaign for increasingly open data. In this internationalist model, as much data as possible is made publicly available and structured in usable form. This would put a premium on algorithmic innovation – the same data is available to everyone and what matters is how cleverly you use it. Although global search data is dominated by Google, social data by Facebook and Chinese data by the Chinese companies, the data generated by the Internet of Things will dwarf that available at present. It is not a given that this data should be dominated by a few companies, as at present. In addition, the bulk of the world’s existing data is not in the hands of the tech giants but on the servers of corporations. Releasing the data from within corporate firewalls could greatly accelerate the development of AI algorithms.

A possible way to democratise data would be for data to belong to the individuals who generate it rather than the companies who capture it. This would arguably give individuals better control over privacy but also allow a more vibrant and competitive market in data-based services, and in the development of AI algorithms, to emerge. (Many companies would argue that this is already the case, and that individuals willingly license the use of their data to the companies through the service agreements they accept.)

Privacy is at the heart of the question of releasing the data necessary for the development of AI. For some, this means that we need to readjust our expectations of privacy – put dramatically, are we going to put our privacy, for example over our healthcare, ahead of saving people’s lives? Should we not reverse expectations, so that the assumption is that our data will be shared and used to good effect? And if we do not do this, then how do we expect to compete with societies, for example China, that are willing to take this collective approach? Others see innovative work on privacy (such as differential privacy) as delivering ways to get algorithmic benefit from data whilst maintaining its anonymity. This would have great value if it could unlock commercially and personally sensitive data for wider use.
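
Differential privacy, mentioned above, has a precise technical core: answer aggregate queries with carefully calibrated noise, so that the statistics remain useful while no individual’s presence in the data can be confidently inferred. A minimal sketch of the Laplace mechanism, its basic building block (the epsilon value and the example data are illustrative assumptions; the conference described no implementation):

```python
# Minimal sketch of the Laplace mechanism, the basic building block of
# differential privacy: useful aggregate answers, no individual exposed.
import numpy as np

def dp_count(values, predicate, epsilon=0.1):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the true answer by at most 1), so Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical health-record ages: the noisy aggregate is useful for
# research, yet no single record can be confidently inferred from it.
ages = [34, 67, 45, 71, 52, 39, 80, 58]
print(dp_count(ages, lambda a: a >= 65, epsilon=0.5))
```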

For the time being, though, we remain on a trajectory towards AI nationalism. Regulatory frameworks should be seen in this light as well as through their stated objectives. For example, China was said to use technical standards to create competitive advantage for its own data and AI companies; its technocratic leadership was alleged to have used this indirect form of protectionism to grow China’s own technology industry at pace. EU data protection was seen by US participants as being as much about restraining international competition as preserving citizens’ privacy. In the eyes of Google’s competitors, Google was enthusiastic about the General Data Protection Regulation precisely because it preserved Google’s existing competitive advantage as a holder of huge amounts of data.

AI and relations between states and between states and companies
AI is going to be disruptive for states and statecraft because it can cut across the first principles of territoriality and sovereignty through the medium of the Internet, delivering one state and culture’s view of the world directly to the citizens of another. AI will also increasingly cut across the definition of what is a weapon and what is not. AI is likely to become a component in the development of many different types of weapons, from smart ammunition (which can trim course in flight and decide whether or not to explode) to cyber-attacks. AI will likely complicate attribution of cyber-attacks, further expanding the realm of ambiguous warfare.

Mass hyper-personalised cyber spear phishing is a possible example of how AI might transform cyber warfare. At present some of the most effective cyber-attacks are those that have been tailored to convince individuals that they are receiving a message from a trusted contact. Imagine if this could be done at scale by AI – tens of thousands of personalised attacks launched simultaneously in order to get the malware payload onto a particular system. In response to this kind of threat, governments, companies and societies will need multiple layers of defence, which will themselves likely depend on AI. The combination with cyber – both offence and defence – is one way in which AI will likely represent a “singularity of power” for states. States will need to build national resilience, which begins with a much greater degree of education and awareness among individuals. In order to build understanding of the art of the possible, states will need national champions on AI (as well as cyber). The nature of AI and the reliance on data will increasingly blur the line between states and companies: some of the big tech companies already have capabilities for the disruption of other states that could arguably amount to acts of war if deployed. Meanwhile, states will become increasingly dependent for war-time capabilities on AI developed in the commercial world. Access to these capabilities will be a crucial factor in a state’s power of deterrence and freedom of action.

Faced with this new world, states will also need to develop new international norms for the use of AI and cyber, which might include kinetic (or even nuclear, as recently floated by the US Department of Defense) responses. All this could in time demand a new “digital Geneva Convention”, as proposed by Microsoft, setting out the acceptable scope of the deployment of weaponised AI and cyber warfare. That a company should be proposing such an idea is in itself indicative of the shifting line between state and company power in an AI-dominated world.

AI, democracy, society and religion
The ultimate impact of AI on democracy is uncertain and unknown, perhaps unknowable. It is not clear if AI will concentrate power within society or make it even more diffuse. As with the spear phishing example above, AI will allow companies and political parties to communicate with consumers and citizens in an ever more tailored way. This will make advertising and persuasion, as during an electoral campaign, ever more convincing. The consumer and citizen will, though, over time also become more sophisticated, aware of these capabilities and have more access to analytical tools. The ultimate balance is hard to predict.

In the near term there is plenty of low-hanging fruit where the application of AI could make government better and more efficient than at present, for example through the smarter targeting of services and benefits. This will have implications for data privacy – and for government employees in terms of skills and numbers of jobs.

Constraining AI development now through regulation or a watchdog would be premature. The private and public sectors need to experiment and to make progress in order to understand the opportunities and implications of AI. If we do not do so, we will fall behind other societies and economies. AI could contribute to many of the big challenges we face and make our societies and economies better. Nonetheless, more work is needed on the safety and security aspects of AI, and this is poorly supplied by the market. It is an area where governments could have a positive impact and fill an important gap.

In many areas of the United States, particularly those not on the coasts and which have not benefitted directly from the technological revolution, the debate over AI is taking on an increasingly angry tone in religious communities. Some fundamentalist Christians view the development of AI as quite literally “summoning the demon”, both ungodly and dangerous.

AI, the economy and work
This is another area of radical uncertainty. Economics already struggles to predict outcomes for traditional economies given the complexity of human factors at play and there is no reason to suppose it will fare better when it comes to the addition of AI and a new layer of automation to the mix. The economic predictions for the impact of AI are “all over the place”.

Experience to date of the introduction of new technologies has been that in the end, after a “rough patch”, the economy rebalances, workers acquire new skills and the economy ends up with more jobs and greater value than before. The data shows that jobs follow robots. This does not solve the problem for the individuals and families who have to live through the rough patch, but for society as a whole, and over the long term, the outcome is positive. This was the least alarming and most promising of the scenarios put forward. There would be significant disruption and hardship for pockets of people, especially white-collar workers doing repetitive analytical or administrative tasks, but our children would end up with more satisfying and more human lives as a result of the automation of many areas of work by AI. There would still be many blue-collar jobs requiring skill and dexterity. There would still be demand for human creativity, intuition and empathy, and we would see a large expansion in demand for craftspeople, artists, entertainers, teachers and carers, which in consequence would become better-paid professions. In technology itself, it takes more than computer scientists to create effective AI algorithms and software – a basket of skills and experience is required, from psychology through to graphic design. Even if the economic value created by AI companies requires only a relatively small and highly skilled segment of the workforce, where surplus value is created the economy is good at generating spin-off industries such as entertainment, catering, interior design and other services. Human beings have conjured vast industries from spices, fashion and other objectively inessential goods and will do so again.

At current levels of AI and automation, then, the best results on intellectual tasks (for example in drug design) are achieved not by humans or AI working alone but in tandem. In another example, industrial robots are increasingly being designed to work with humans on a production line rather than to replace them. Human training of robots is proving an efficient way of developing AI capabilities.

If this is the scenario ahead of us, then the challenge will be how we tax the new value created (which will often accrue to capital rather than to wages) without driving innovation and expertise away to more welcoming regimes. Traditional economic tools may be sufficient to cushion those living through the rough patch, for example a minimum wage and welfare benefits. But studies show that a minimum wage also drives automation, making the problem of job losses more acute.

We were not able, however, to dismiss more extreme scenarios in which much more of human work is replaced by AI and automation. AI is developing rapidly and it is hard to see why it will not be better than human beings at some of the new jobs we will develop, as well as the old ones. Even human qualities such as creativity and intuition are associated with deep knowledge of a particular domain, and AI is very good at ingesting large amounts of material and mastering it. AI-written journalism is already quite convincing. AI-composed music can already be enjoyed. It could be argued that a lot of popular human culture is already quite formulaic and artificial, and that the markets for genuine human virtuosity and originality are small.

If this is the case, then we should expect to see much more radical shifts in the shape of our economies, with a tendency towards ever greater inequality. At worst this could mean the end (as was suggested at Ditchley’s conference on the future of the West in March 2017) of the bubble in the cost of human labour that has underwritten the Enlightenment and the emergence of liberal humanism and democracy. The economic and political challenge would be magnified: how to transfer resources from a wealthy international elite clustered around AI economies to the rest of human society.

The spectrum of future scenarios makes it hard to say which economic systems are best suited to the future. Low-tax economies could have an advantage because they will attract the AI industries and entrepreneurs. On the other hand, high-tax European-style economies and social models could be better placed to cope with intensifying social inequality and the need for the redistribution of wealth – provided, that is, that they can remain solvent. It was noted that economic change often leads to political change.

Ideas for action and reflections
There was strong support for more work on AI safety. We can rely on the market to drive innovation, but examination of the risks to governments and societies is not well supplied by it. This is an area where governments could have a big impact. There could be a particular role for the UK, building on the Turing Institute and the newly announced Committee for Data Ethics. Work on AI safety could also deliver opportunities for commercial innovation. The focus in the short term would be on the safety of systems involving AI components, and in the longer term on the potentially disruptive impact of AI on states and societies.

We should not lose our optimism about the positive transformative impact AI will have on the economy and society but we should not lose sight either of the potential unintended consequences. We need a model that defines potential warning signals that the transformation of societies and economies is not going the way we hoped. There needs to be more collaboration between economists and AI experts to produce better economic models on the potential implications of the roll out of AI.

Trade unions are weak in the new economy, and trade unionists consequently tend to have a poor understanding of what AI could mean. We need better representation for labour in the discussion of the future.
 
The individual citizen similarly needs a better understanding of what AI could mean, in terms of both the opportunities and the threats. People need to plan for flexibility and lifelong personal development. This is not about learning to code but about understanding which core skills will remain relevant in the digital age and developing them throughout a career. This might mean science, but equally creativity and the arts could be viable options. It is routine and repetitive work that will be affected most deeply and earliest, whether white collar or blue collar.

There needs to be an expanded discussion on data and privacy that takes account of the crucial role of data in developing AI capabilities. There also needs to be more research on how value can be released from data without compromising privacy. It is possible that the innovation advantages of data aggregation in authoritarian states will outweigh the innovation advantages of liberty. We need an answer that preserves liberty and dignity.

Governments need to look at what has happened in the commercial world of technology and draw conclusions from the fact that those who have done best combine substantial data monopolies with innovative cultures. They also need to remember Churchill’s comment that the “empires of the future will be the empires of the mind”. This is coming to pass through AI.

States need to make sure they have access to cutting edge AI technology combined with access to data. Free market and democratic states need to make sure they have appropriate responses to centrally planned industrial strategies in authoritarian states, noting that first movers gain tremendous advantage in these new industries. 

Major technology companies need to understand that the AI capabilities being developed now will take them beyond the purview of the economic elements of the state – competition commissioners and the like – and even beyond law enforcement and security, into the realm of hard power and deterrence that really determines what a country can and cannot do. Different rules will likely apply.

AI nationalism is not the best outcome and will be hard to separate from an AI arms race. AI nationalism would represent a failure of international relations and multilateralism. We should look to find ways to work with powers such as China on these issues, aiming to internationalise the development of AI and the sharing of data to the benefit of all. If we can agree common interests, then we may be able to enshrine these in international conventions and a set of new norms and rules.

We will need to look hard in the years ahead at the interaction between AI and other disruptive technologies, such as quantum computing and bio-engineering. Quantum computing could magnify both AI’s potential and its risks through sheer speed of evolution, while bio-engineering brings that accelerated evolution, again with opportunities and risks, literally to the shaping of humanity’s future.

This Note reflects the Director’s personal impressions of the conference. No participant is in any way committed to its content or expression.


PARTICIPANTS

CHAIR: Professor Sir Nigel Shadbolt PhD FRS FREng FBCS  
Principal, Jesus College Oxford, and Professorial Research Fellow in Computer Science, University of Oxford; Visiting Professor of Artificial Intelligence, Southampton University; Chairman and co-Founder, Open Data Institute; member, UK Data Advisory Board. Formerly: Information Advisor to the UK Government; Founder and Chief Technology Officer, Garlik Ltd; Director, AI Group, and Allan Standen Professor of Intelligent Systems, University of Nottingham.

AUSTRALIA
Dr Connor Rochford
Rhodes Scholar; MPhil Candidate in Politics (Political Theory), Balliol College, University of Oxford. Formerly: Business Analyst, McKinsey & Co. (2017); Senior Analyst, Australian Health Policy Collaboration (2016-17).

AUSTRALIA/UNITED STATES OF AMERICA
Dr Peter Eckersley
Chief Computer Scientist, Electronic Frontier Foundation.

CANADA
Ms Rebecca Finlay
Executive Group Member, CIFAR. Formerly: Senior Advisor, Communications, Toronto Region Research Alliance; Group Director, Public Affairs and Cancer Control, Canadian Cancer Society and National Cancer Institute of Canada; First Vice President, Financial Institution and Partnership Marketing, Bank One International; Vice President, Member Business Management, MasterCard International.
Mr Logan Graham   
Rhodes Scholar; PhD Candidate in Machine Learning, University of Oxford; co-Founder, Rhodes Artificial Intelligence Lab; Researcher, Oxford Martin School Future of Technology and Employment programme; World Economic Forum Global Shaper. Formerly: Researcher, UBC Vancouver School of Economics; co-Founder, Awake Labs, Yunus & Youth.
Mr Paul Halucha  
Assistant Deputy Minister, Industry Sector, Innovation, Science & Economic Development Canada (ISED) (2016-). Formerly: Director General and Associate Assistant Deputy Minister with responsibilities for marketplace framework policies including Intellectual Property, the Investment Canada Act and the Competition Act, ISED (2012-16); Chief of Staff to Deputy Minister Richard Dicerni, ISED (2009-12); Environment Canada; Privy Council Office.
Mr P. Thomas (Tom) Jenkins OC, CD, FCAE, LLD, MBA, MASc, B Eng&Mgt  
Chair of the Board, OpenText, Waterloo, Ontario; Chair, National Research Council of Canada; tenth Chancellor, University of Waterloo; Chair, Ontario Global 100 (OG100); co-founder, Communitech, Waterloo; board member: Manulife Financial Corporation and TransAlta Corporation; member, Business Council of Canada and School of Public Policy, University of Calgary; Advisory Council member, Royal Canadian Air Force; honorary Colonel, RCAF 409 Tactical Fighter Squadron; co-Chair, Business Higher Education Roundtable; co-Chair, Advisory Council, Governor General of Canada Innovation Awards Program. 2017 Companion of the Canadian Business Hall of Fame.
Mr John Stackhouse  
Senior Vice President, Office of the CEO, Royal Bank of Canada; Senior Fellow, Munk School of Global Affairs, University of Toronto; Senior Fellow, C.D. Howe Institute, Toronto. Formerly: Editor-in-Chief, The Globe and Mail (2009-14); Business Editor, The Globe and Mail (2004-09); National Editor, Foreign Editor, Correspondent-at-Large, Foreign Correspondent. Board Member, The Canadian Ditchley Foundation.
Mr Iain Stewart  
President, National Research Council of Canada (2016-). Formerly: Associate Secretary, Treasury Board of Canada (2015-16); Assistant Secretary, International Affairs, Security and Justice, Treasury Board of Canada (2014-15); Industry Canada: Assistant Deputy Minister, Strategic Policy; Associate Assistant Deputy Minister and Director General, Science and Innovation; Director of Consumer Industries; Assistant Vice-President, Research, Dalhousie University.

CZECH REPUBLIC
Professor Michal Pechoucek PhD, MSc, Ing
Director, Artificial Intelligence Center, and Head, Department of Computer Science, Czech Technical University in Prague; member, board of directors, International Foundation for Autonomous Agents and Multi-agent Systems; Honorary member, Artificial Intelligence Application Institute, University of Edinburgh; venture partner, Evolution Equity Partners. Formerly: co-Founder, BlindSpot.AI, acquired by Adastra Group, 2017; New Europe 100 Challengers list (2014); co-Founder, Cognitive Security, acquired by CISCO Systems, 2013.

FRANCE
Professor Benoît Gallix MD, PhD
Chairman, Department of Radiology, McGill University, Montreal; Radiologist-in-Chief, McGill University Health Centre; Special Advisor, Center for Innovative Medicine, Montreal. Formerly: Chairman, Department of Medical Imaging, Montpellier University Hospital, France.

FRANCE/UNITED STATES OF AMERICA
Ms Kerstin Vignard
Deputy to the Director/Chief of Operations, United Nations Institute for Disarmament Research; Consultant to UN Groups of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security: 2009-10, 2012-13, 2014-15 and 2016-17; Institutional Lead on Emerging Security Issues; founder and Editor-in-Chief, Disarmament Forum (1999-2012).

GERMANY
Dr Vyacheslav Polonski
ESRC Scholar; DPhil Candidate, Network Science, University of Oxford; Founder and CEO, Avantgarde Analytics; Researcher, Oxford Internet Institute; World Economic Forum Global Shaper and member of WEF Expert Network. Formerly: Data science and social network analysis consultant.
Mr Carl Rietschel  
Pershing Square Graduate Scholar, MSc in Computer Science and MBA Candidate, Saïd Business School, University of Oxford. Formerly: Consultant, (Junior) Associate, The Boston Consulting Group, Hamburg (2014-17). 

IRAN/UNITED KINGDOM
Dr Shahram Mossayebi
Co-founder and CEO, Crypto Quantique; quantum cryptographer and a former cybersecurity consultant.

IRELAND
Dr Sean Ó hÉigeartaigh
Executive Director, Centre for the Study of Existential Risk, University of Cambridge; co-developer, Leverhulme Centre for the Future of Intelligence (CFI) and Project Leader, CFI's 'Policy, Responsible Innovation and the Future of AI' research strand. Formerly: Project Manager, Oxford Martin Programme on the Impacts of Future Technology (2011-15).

ITALY/UNITED STATES OF AMERICA
Mr Louis DiCesari
Group Head of Big Data Implementation, Vodafone Group Services Ltd, London.

NETHERLANDS
Dr Ansgar Koene
Senior Research Fellow and Policy Impact Lead, Horizon Digital Economy Research Institute, University of Nottingham; Research co-investigator, EPSRC UnBias project; Chair, IEEE P7003 Standard for Algorithmic Bias Considerations working group.

UNITED KINGDOM
Mr Oliver Buckley
Deputy Director, Digital Charter & Data Ethics, Department for Digital, Culture, Media and Sport (2017-). Formerly: Deputy Director, Policy and International, Government Digital Service and Government Innovation Group, Cabinet Office (2013-17); Deputy Director, Head of International Transparency and Open Government Team, Cabinet Office (2013); Senior Policy Adviser, Prime Minister's Strategy Unit (2010-13); Case Team Leader, Monitor Deloitte (2004-10).
Mr Matt Clifford MBE  
Co-Founder and Chief Executive, Entrepreneur First, London (2011-); co-Founder and Non-Executive Director, Code First: Girls (2013-); Trustee and Non-Executive Director, techfortrade (2013-); Advisory Board member, Silicon Valley Comes to the UK, Cambridge (2012-).
Mr Tim Colbourne  
Director of Policy, Open Reason. Formerly: Deputy Chief of Staff to the Deputy Prime Minister (2014-15); Downing Street Policy Unit (2010-14).
Mrs Karen Danesi  
Deputy Director Cyber Security Capability, NCSC.
Dr James Field BSc, MRes, PhD  
Chief Executive Officer and Founder, LabGenius Ltd, London.
Miss Sophie Hackford  
Futurist and speaker. CEO, spin-out AI startup from the University of Oxford; Chief Innovation Officer, Not Just a Label; on a research sabbatical focusing on science and tech developed outside traditional centres of innovation. Formerly: Director, WIRED Consulting and Education, Wired Magazine; Director of Strategic Relations, Singularity University; Head of Development, Oxford Martin School, University of Oxford; Client Development Manager, New Philanthropy Capital.
Mr Ian Hogarth  
Angel investor.
Ms Arohi Jain  
Economist; Project Lead, The AI Initiative, Harvard Kennedy School. Formerly: Senior Strategist, Impact Squared.
Mr Chris Mairs CBE, FREng  
Venture Partner, Entrepreneur First; Trustee, The Raspberry Pi Foundation; Fellow, Royal Academy of Engineering; Honorary Fellow, Churchill College. Formerly: Chairman, Magic Pony Technology; co-founder and CTO, Metaswitch Networks.
The Baroness Rock  
Life Peer, House of Lords; Member, House of Lords Select Committee on Artificial Intelligence; Non-Executive Director, Imagination Technologies PLC (2014-); Visiting Parliamentary Fellow, St Antony's College, University of Oxford (2017). Formerly: Vice Chairman of the Conservative Party with special responsibility for business engagement.
Mr Paul Ryan  
Director, Watson Artificial Intelligence, IBM UK and Ireland (2017-).
Mr Richard Spearman CMG, OBE  
Group Corporate Security Director, Vodafone (2015-). Formerly: Her Majesty's Diplomatic Service (1989-2015); Save the Children Fund (1984-89).
Mr Jeffrey Thomas  
Co-Founder and Chairman, Corsham Institute (2013-); co-Founder and Non-Executive Chairman, UKCloud Limited (2012-); Founder and Non-Executive Director, ARK Data Centres (2005-); Founder and Chairman, Hartham Park (1997-).
The Rt Hon. the Lord Willetts  
Non-executive Board member, UK Research and Innovation (2017-); Executive Chair, Resolution Foundation (2015-); Visiting Professor, King's College London (2015-); Chair, British Science Association (2015-). Author, 'A University Education'; 'The Pinch – How the baby boomers took their children's future and why they should give it back'. Formerly: Member of Parliament (Conservative) for Havant (1992-2015); Minister for Universities and Science, Department for Business, Innovation and Skills (2010-14).
Professor Jeremy Wyatt  
Professor of Robotics & Artificial Intelligence, University of Birmingham. Formerly: Co-Director, Centre for Computational Neuroscience & Cognitive Robotics (2010-16); Leverhulme Fellow (2006-08).

UNITED KINGDOM/UNITED STATES OF AMERICA
Mr A Lloyd Thomas
Managing Partner, Athene Capital (2014-).

UNITED STATES OF AMERICA
The Revd Dr Christopher J. Benek
PCUSA Pastor and 2018 General Assembly Delegate; blogger at ChristopherBenek.com; 2018 Moderator of the 41 Churches of The Presbytery of Tropical Florida; techno-theologian, futurist, ethicist and speaker; founding Chair, Christian Transhumanist Association; CEO, The CoCreators Network; OpEd Writer, The Christian Post; PhD student in Theology (focusing on the intersection of technological futurism and eschatology), University of Durham.
Ms Julie Brill  
Corporate Vice President and Deputy General Counsel for Global Privacy and Regulatory Affairs, Microsoft Inc. (2017-). Formerly: Partner & Co-Director, Privacy and Cybersecurity, Hogan Lovells US LLP, Washington, DC (2016-17); Commissioner, Federal Trade Commission (2010-16); Senior Deputy Attorney General and Chief of Consumer Protection and Antitrust, North Carolina Department of Justice; Lecturer-in-Law, Columbia University School of Law; Assistant Attorney General for Consumer Protection and Antitrust, State of Vermont; Associate, Paul, Weiss, Rifkind, Wharton & Garrison, New York.
Mr Kenneth Cukier  
The Economist (2003-): Senior Editor, Digital. Formerly: Wall Street Journal, Hong Kong; International Herald Tribune, Paris; Research Fellow, John F. Kennedy School of Government, Harvard University (2002-04). Co-Author, 'Big Data: A Revolution That Will Transform How We Live, Work, and Think' (2013).
Mr Auren Hoffman  
CEO, SafeGraph, San Francisco (2016-). Formerly: co-Founder and CEO, LiveRamp (2006-15); Chairman, Stonebrick Group (2003-06); CEO, BridgePath (1998-2002); Vice President, Engineering, Human Ingenuity (1997-98).
Ms Grace Huckins  
Rhodes Scholar; Master of Neuroscience Candidate, University of Oxford; Researcher, MRC Brain Network Dynamics Unit. Formerly: Researcher, Oxford Centre for Theoretical Neuroscience and Artificial Intelligence; Researcher, Murthy Lab, Harvard University.
Ms Suzanne Johnson  
Vice President, Corporate and External Affairs, Lloyd's Register, London. Formerly: Enron, London and Houston; Manager of Fixed Income, responsible for European utilities coverage, Schroder Investment Management; Special Assistant to Ambassador Jeane Kirkpatrick, former United States Ambassador to the United Nations. A Governor of The Ditchley Foundation.
Jamie Metzl JD, PhD  
Partner and Head of Strategy & Research, Conversion Capital, New York; Senior Fellow for Technology and National Security, Brent Scowcroft Center, Atlantic Council, Washington, DC; Senior advisor, Cranemere LLC. Formerly: Executive Vice President, Asia Society; Deputy Staff Director and Senior Counselor, U.S. Senate Foreign Relations Committee; Senior Advisor to the Under Secretary for Public Diplomacy and Public Affairs, U.S. Department of State; Director, Multilateral and Humanitarian Affairs, National Security Council. Author: 'Genesis Code', 'The Depths of the Sea' and 'Eternal Sonata'; forthcoming: 'Homo Sapiens 2.0: Genetic Enhancement and the Future of Humanity'.
Mr Peter Micek  
General Counsel, Access Now (2012-); Adjunct Professor, Columbia University, School of International and Public Affairs (2014-); Advisory Board, Center for International Business and Human Rights, University of Oklahoma College of Law (2017-). Formerly: Member, World Economic Forum Global Agenda Council on the Future of Cybersecurity (2016-18); Online Editor, Your Call Radio, KALW (2007-12); Ethnic Media Monitor and Web Editor, New America Media/Pacific News Service (2003-08).
Mr George M. Newcombe  
Member, Board of Visitors, Columbia University School of Law; Board of Overseers, New Jersey Institute of Technology; Member, The American Law Institute; Advisor, American Law Institute Privacy Principles Project; Director, ConvergentAI, Inc.; Director, SightLogix, Inc. Formerly: Senior Partner, Simpson Thacher & Bartlett LLP (1983-2012). A member of the Board of Directors of The American Ditchley Foundation.
James Shinn PhD  
CEO, Predata, New York; Lecturer, School of Engineering, Princeton University. Formerly: Assistant Secretary of Defense for Asia; National Intelligence Officer for Asia, CIA; Advanced Micro Devices; co-Founder (1992), Dialogic.
Ms Nicol Turner-Lee PhD  
Fellow, Governance Studies, Center for Technology Innovation, The Brookings Institution; Visiting Scholar, Center for Gender Equity in Science and Technology, Arizona State University; U.S. Department of State Advisory Committee on International Communications and Information Policy; Appointee, Advisory Committee on Diversity and Digital Empowerment, Federal Communications Commission; contributor, TechTank. Formerly: Vice President and Chief Research and Policy Officer, Multicultural Media, Telecom and Internet Council.