And here you thought our biggest threat was Trump.
Or Putin’s strange hold over him.
Or Putin’s efforts to defeat democracy.
Or Putin’s emerging arsenal of hypersonic weapons.
If you haven’t already seen it, Henry Kissinger’s essay in The Atlantic suggests a larger worry: how unprepared humanity is for artificial intelligence, a challenge that may be nearer than we think. In small part:
. . . On its own, in just a few hours of self-play, [AlphaZero] achieved a level of skill that took human beings 1,500 years to attain. Only the basic rules of the game were provided to AlphaZero. Neither human beings nor human-generated data were part of its process of self-learning. If AlphaZero was able to achieve this mastery so rapidly, where will AI be in five years? What will be the impact on human cognition generally? What is the role of ethics in this process, which consists in essence of the acceleration of choices? . . .
. . . The Enlightenment started with essentially philosophical insights spread by a new technology [printing]. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions.
AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.
Dr. Kissinger turns 95 on Sunday but, more than most, he is thinking ahead.