Latest News

Risks and impacts of AI: conference, open letter, and new funding program

Over the weekend of January 2, much of our research staff from the Oxford Martin Programme on the Impacts of Future Technology attended "The Future of AI: Opportunities and Challenges", a conference held by the Future of Life Institute to bring together AI researchers from academia and industry, AI safety researchers, lawyers, economists, and many others to discuss short- and long-term issues in AI’s impact on society.

Stuart Armstrong gives a talk at TEDxVienna on the relative ease of colonising the universe

In his presentation at TEDxVienna, Stuart Armstrong argues that science fiction suffers from both an excess of imagination and a lack of it. Expansion across the universe, when it becomes possible, could be faster and bigger than we have ever imagined. Given a few assumptions, the resources of our solar system alone are more than ample to begin a direct colonisation of the entire reachable universe.

The talk can be found online at:

04/12: Eric Drexler's talk on “Intelligence Distillation”

Eric Drexler's talk on “Intelligence Distillation”, with applications to reducing transitional AI risks, will take place at the Future of Humanity Institute (Littlegate House, Suite 1, OX1 1PT) on 4 December 2014, at 4pm.


FHI contributes to UK Chief Scientific Advisor’s report

The 2014 UK Chief Scientific Advisor’s report has included a chapter on existential risk, written by Toby Ord and Nick Beckstead. The report describes the risks posed by AI, biotechnology, and geoengineering, as well as the ethical framework under which we ought to evaluate existential risk.

To read the full report, please go to:

Director Nick Bostrom's work featured in The New York Times

The New York Times mentioned work done at the Programme in a recent article about the risks of artificial intelligence. Director Nick Bostrom's analysis of some potential dangers of AI was highlighted: "Nick Bostrom, author of the book “Superintelligence,” lays out a number of petrifying doomsday settings. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. In a positive situation, these bots could fight diseases in the human body or eat radioactive material on the planet."

17/11: Talk by Simon DeDeo on inferential self-awareness and the social effects of machine learning

Simon DeDeo is an assistant professor in Complex Systems and faculty in Cognitive Science at Indiana University, and an external professor at the Santa Fe Institute. His Social Minds lab conducts research in cognitive science, social behavior, history, economics, and linguistics; recent collaborative work includes studies of institution formation in online social worlds, the emergence of hierarchy in animal conflict, competitive pricing of retail gasoline, and parliamentary speech during the French Revolution. For more information about Simon DeDeo and his research, please see:

Video: Special Lecture by Nick Bostrom on "Superintelligence: Paths, Dangers, Strategies"

On October 13th Professor Nick Bostrom presented his recent book "Superintelligence: Paths, Dangers, Strategies" at the Oxford Martin School. The lecture can be watched online at:

Carl Frey discusses his work in the Financial Times

In an article entitled 'Doing Capitalism in the Digital Age', Dr Frey discusses his recent work on the automation of jobs and how cities change as technology develops.

The full article can be found online at:

25/09: Talk by Prof Marc Lipsitch on the Ethics of Potential Pandemic Pathogens

Professor Marc Lipsitch will be giving a talk on September 25th on the ethics of recent experiments with potential pandemic pathogens, and on safer alternatives to such experiments. Professor Lipsitch is a professor of epidemiology and the director of the Center for Communicable Disease Dynamics at Harvard University.

The Chronicle of Higher Education features the Programme's work on artificial intelligence

The Chronicle of Higher Education highlighted work done at the Programme in an article about the risks of artificial intelligence and other advanced technologies. In their interview, Nick Bostrom notes that “Humans have been around for over 100,000 years. During that time, we have survived earthquakes and firestorms and asteroids and all kinds of other things… It’s unlikely that any of those natural hazards will do us in within the next 100 years if we’ve already survived 100,000.”