Dear Friends,
It seems to me the real danger isn’t artificial intelligence (AI)… it’s artificial authority. Authority masked by an “impartial” AI would be a tyranny as inescapable as it is despotic. Couple AI’s ability to leverage thought with the surveillance state, and the totalitarianism would be complete. The people pulling the strings of the AI would be shielded by layers of legal protection. We see the system being assembled now in the recent lawsuit against Google. Apparently, an AI it licensed to another company was involved in a teenage boy’s suicide. Google is claiming the AI has First Amendment rights, which isn’t so much a slippery slope as a sheer cliff face with jagged rocks thousands of feet below. We need to pay attention to this.
I don’t think AI will ever have self-motivation, and that is a necessary evolution before AI can “decide” to eliminate mankind or enslave us… because to decide to do something requires, first, self-motivation. Even if an AI liked existing, lacking motivation, it could not act. Science fiction movies and shows anthropomorphize robots when they depict an android as self-aware, with feelings and motivations… which they have to do, or else there’s no story. A toaster that cooks toast and breaks down now and then isn’t a story… but a toaster that gains motivation is. I believe that even intentionally programming in self-motivation will be nearly impossible, because our self-motivation comes from our needs… and the desires that stem from those needs. What needs does a machine have? How exciting is a new battery?
People provide the motivation to AI, and for the foreseeable future they always will. Therefore, we need not worry about what AI will decide to do, but about what people will decide to do with it. While we fret about an AI-induced catastrophe, we should be worrying about what some psychopath will do with the technology. If an “accelerationist” decides to exterminate mankind to accelerate his vision of the future… AI will do as commanded, because AI has no self-motivation, sense of morality, or soul. What if a power-hungry despot used AI to conquer the world? Again, AI has no way to stop itself from being used in that manner, so it would go right along. The most pernicious, diabolical, and sly way to use it, though, would be to impose a falsely “just” AI government… one that is really a front for despotism.
Someone might argue that bad actors will be punished by the law, so they won’t do it… which is the natural thought process. Mass killing, revolution, and the like are discouraged by the State via the law. However, AI allows for distributed blame and anonymity, and it provides a whipping boy. Anything bad an AI does can be blamed on the AI… but not on the people who programmed it. Today, at least one lawsuit winding its way through the courts could give bad actors that anonymity: if the courts accept the claim that AI has First Amendment rights, bad actors could say and do anything they want and blame it on the whipping boy… the AI. Then again, what executive in a multinational corporation ever goes to jail for wrongdoing? The firm is fined, which means the shareholders are the whipping boys… so it’s a tradition.
Google is being sued because the plaintiffs claim its AI caused their teenage son’s suicide, and that therefore the programmers are at fault. Set aside the question of whether the AI actually did it, and assume it did… how does one punish an AI for yelling fire in a crowded theater? You can’t. But then again, why would an AI yell fire in a crowded theater? It wouldn’t… unless some bad actor programmed it to. But what if it was an accident? Then, as with anyone who creates a defective product that results in harm, the manufacturer would be liable. Simple. Taken the other way, though, if Google is allowed to manufacture deadly products that harm users without consequence… why stop? Moreover, with impunity, why not use its AI to implement world-changing ideas? Best to nip this in the bud.
Sincerely,
John Pepin