The Peter Parker Principle is the name society has given to that immortal quote from Amazing Fantasy #15, the origin of Spider-Man: “With great power there must also come—great responsibility.” The quote has even appeared in a United States Supreme Court decision, and President Obama used it in 2010.
We may need to revive the quote and principle again, in light of some recent weirdness around cyber-warfare and fears of artificial intelligence: if we aren’t in the “Brave New World” now, I’d certainly love to see where that threshold is crossed. With Google now claiming to process information at hitherto impossible speeds via quantum computing, we have to be a little scared of offensive cyber operations, or the potential of computer autonomy, right?
We can certainly be concerned about the cyber-ops. The Trump administration has authorized a vague program with no published definitions and no indication of what threats it exists to counter, or even of what constitutes a threat. Yet the policy “eases the rules on the use of digital weapons,” a significant departure from traditional defensive cyber-ops: operations that, according to the Cato Institute’s Brandon Valeriano and Benjamin Jensen, worked to stop or deter cyberattacks (as much as such a thing is possible) without risking escalation. The authors describe the previous approach as one of “low-level counter-responses” that do not increase the severity of inflicted damage.
It’s fascinating, in a way, that previous administrations had the awareness to limit their responses, perhaps because they knew that once you escalate, that escalation will come right back at you. The authors analyzed several operations, classified the escalation and non-escalation scenarios among them, and concluded that “active defense,” rather than offense, was the most effective and escalation-avoidant framework.
Second, we have what we could perhaps call “Elon’s Paradox”: that in the face of alleged threats to human autonomy from artificial intelligence, the solution may be to preemptively merge humans with AI technology, cybernetically. Musk isn’t alone in his criticism and fears of AI; the late Stephen Hawking and others have long sounded the alarm, and Vladimir Putin recently speculated that whoever leads in AI “will become the ruler of the world.” Musk is afraid it will cause World War Three.
But Musk’s solution seems a little weird: he wants to increase cybernetic connections between humans and machines in hopes that a merger will be more coequal than AI simply taking over outright. Granted, the technology of projects like Neuralink has tremendous potential to help people heal from brain damage or degeneration, and it’s a fascinating question whether systems can be developed that retain human autonomy while harnessing the potential of AI.
But it’s not clear how such a merger would prevent the emergence of what Musk calls “godlike superintelligence,” and besides, leaving that kind of control up to humans is itself a mixed bag. After all, Google had been supplying technology to the military for drone strikes, promised to stop, and then hedged on its promise. With great power—hopefully—comes great responsibility.