[SydPhil] Reminder: Tonight – Jaan Tallinn (Skype co-founder) on Catastrophic Risk, Sydney Ideas

Huw Price huw.price at sydney.edu.au
Tue Jul 17 15:48:41 AEST 2012


[Online version here:
http://sydney.edu.au/sydney_ideas/lectures/2012/jaan.tallinn.shtml]



INTELLIGENCE STAIRWAY
Jaan Tallinn, computer programmer and founding engineer of Skype and Kazaa
Co-presented with the Centre for Time
<http://sydney.edu.au/centre_for_time/about/>

Jaan Tallinn, one of the founding engineers of Skype and a philosopher of
modern technology, believes the impact of artificial intelligence has
reached a crucial threshold. He argues that in the roughly 100,000 years
since evolution produced humans, a significant phase change has occurred:
the optimisation power of humans has exceeded that of evolution.
In short, human-driven technological progress has largely replaced
evolution as the main shaper of the future. We are witnessing a
cascade-like pattern in which we produce machines whose ability to control
the future exceeds that of the process that produced them. Tallinn calls
this pattern the "Intelligence Stairway".

This raises a major question: does the production of computers that are
smarter than their creators constitute a phase transition similar to the
one that occurred when evolution produced humans? If the answer is yes,
then we should treat the possible emergence of such artificial general
intelligence (AGI) as the end of human-driven technological progress and
the beginning of a new phase: an AGI-driven "intelligence explosion". In
turn, this leads to the uneasy conclusion that the "intelligence explosion"
might effectively manifest itself as a sudden global ecological
catastrophe.

There are two different approaches to avoiding the catastrophe: one is to
prove that the trajectory of the "intelligence explosion" will be
favourable to humanity. The other is to artificially limit AIs to narrow
domains, so that they lack the ability to replace humans as the drivers of
technological progress. Jaan argues that the latter approach is more
pragmatic than the former, so we need a co-ordinated effort to establish
and enforce a "practical safety protocol" for AI developers, in order to
ensure that they do not produce AGIs by accident.

[image: Jaan Tallinn]

*Jaan Tallinn* is one of the programmers behind the Kazaa file-sharing
platform and a founding engineer of Skype. He is a co-founder of the
Cambridge Centre for the Study of Existential Risk. He describes himself as
a singularitarian/hacker/investor/physicist (in that order). In recent
years he has taken an interest in the ethical and safety aspects of
artificial intelligence, travelling the world and talking to experts
ranging from philosophers to researchers to actual AI programmers.





EVENT DETAILS

*Date:* Tuesday 17 July
*Time:* 6.00pm to 7.30pm
*Venue:* Law School Foyer, Eastern Avenue, the University of Sydney (Click
here for venue information <http://sydney.edu.au/law/about/campus.shtml>)
*Cost:* This event is free and open to all, with no ticket or booking
required. Seating is unreserved and entry is on a first come, first served
basis.



*HUW PRICE* | ARC Federation Fellow & Challis Professor of Philosophy
Centre for Time | SOPHI | Faculty of Arts
THE UNIVERSITY OF SYDNEY

P Centre for Time, Main Quad A14, University of Sydney, NSW 2006, Australia
T +61 2 9351 4057  | F +61 2 9351 3918
E huw.price at sydney.edu.au  | W http://sydney.edu.au/centre_for_time/

