This is not about the potential of Android phones to facilitate voting. It’s about whether the other kind of android, the artificially intelligent robot type, would not only have the abstract right to vote but also the potential to be a full participant in the electoral process. To “electioneer” means to actively participate in campaigning (or at least to do so passively, by wearing buttons and the like; but let’s talk about active campaigning).
More than a bit has been written about android voting rights. The basic argument rests not simply on AI robots becoming more like humans, but on the progressively thinning line between robots and humans, in bodies as well as minds. Still, the question of the robot mind remains central. Leading AI scientist Ray Kurzweil says robots will gain “consciousness” by 2029. Elon Musk wants to ensure that such conscious artificial beings are “friendly.” Perhaps extending voting rights would be an acceptable olive branch. Whatever one considers the main arguments for and against AI robot suffrage, we’ve heard the case made.
But there are two other questions. The less predictable one, but the more relevant to those of us who work in campaigns, is whether AI robots would be involved in campaigning in addition to simply voting. It’s an extension of that right, though: while there are instances (including the increasingly controversial policies of some states that disenfranchise felons) in which a person may not vote but might still choose to campaign, philosophically it is the right to vote that generates a citizen’s enthusiasm and interest in campaigning. Will campaigns be able to hire robot consultants? Welcome robot volunteers?
The immediate objection is that an AI unit with a quantum processor could make all sorts of predictions pertaining to voter geographies or demographics. It could also develop strategic microtargeting algorithms similar to those used by politicians around the world since 2016, techniques that have largely bypassed the deliberative process of campaigning to spread negative messaging, disinformation, and more, overwhelming the ability of fact-checkers to scrutinize campaign messages. Of course, if nations were to pass laws prohibiting data-driven social media microtargeting, they would presumably also prohibit robots from doing it. Short of that scenario, we may be looking at a future in which candidates can recruit armies of robot volunteers who go door-to-door without getting tired, keep making phone calls without wanting to stab themselves in the ears after two hours, and so on.
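To make the “microtargeting” worry concrete, here is a minimal, purely illustrative sketch in Python of the kind of voter-scoring step such a system might automate. The trait names, weights, and voter records are all invented for illustration; they are not drawn from any real campaign tool or dataset.

```python
# Hypothetical sketch: rank voters by a predicted "persuadability" score so a
# campaign (human- or robot-staffed) could prioritize contacts. All fields,
# weights, and records below are invented for illustration only.
import math

# Invented per-trait weights for a toy logistic model.
WEIGHTS = {
    "age_under_35": 0.8,
    "urban": 0.5,
    "voted_last_midterm": -0.3,   # habitual voters assumed less persuadable
    "registered_independent": 1.1,
}
BIAS = -1.0

def persuadability(voter: dict) -> float:
    """Return a 0-1 score from a simple logistic combination of binary traits."""
    z = BIAS + sum(WEIGHTS[k] * voter.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Hypothetical voter records (1 = trait present, 0 = absent).
voters = [
    {"id": "V001", "age_under_35": 1, "urban": 1, "voted_last_midterm": 0, "registered_independent": 1},
    {"id": "V002", "age_under_35": 0, "urban": 0, "voted_last_midterm": 1, "registered_independent": 0},
    {"id": "V003", "age_under_35": 1, "urban": 0, "voted_last_midterm": 1, "registered_independent": 1},
]

# Build the walk list: highest predicted persuadability first.
for v in sorted(voters, key=persuadability, reverse=True):
    print(f"{v['id']}: {persuadability(v):.2f}")
```

The scoring itself is trivially automatable, with or without a quantum processor; the contested part is the data collection and message targeting built on top of it.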
The more predictable question is whether androids could run for office. Isaac Asimov published a short story, “Evidence,” about allegations that a politician and mayoral candidate named Stephen Byerley is actually a robot. But Asimov doesn’t settle whether the laws of his universe (which prohibit robots from running in elections) are just or unjust. Byerley’s identity is never completely resolved, and some characters speculate that android elected officials would not be a bad thing.
Of course, Asimov’s Three Laws of Robotics effectively undermine any feasible scenario of robots running for office. In particular, the imperative that a robot must obey human beings’ orders would strip a robotic leader of any effective agency or leadership ability. At the very least, any scenario involving robots as elected officials requires jettisoning Asimov’s laws. And I suspect that the “following orders” law will be hard for those of us in the real world to let go of as we move closer to autonomous AI. It seems more likely that robots, when they develop consciousness, will make themselves useful helping human candidates win rather than trying to win office themselves, unless some kind of robot-proletarian revolt comes to pass.