
Artificial Intelligence and Responsibility

Posted on June 3, 2018 — 2 Minutes Read

Not to be mistaken for the name of the latest gadget, Google Shoots is the name of a video featuring Google Assistant being commanded to fire a handgun at an object of innocence. The choice of the virtual assistant was arbitrary; it could well have been any other device that accepts voice commands or other forms of input. The video has revived talk of the threats posed by the rise of artificial intelligence. Should the day come when a machine with access to life-threatening apparatus, which need not be a firearm or a weapon of mass destruction but could well be a simple household appliance or a motor vehicle, activates that apparatus against others and causes a loss of life, all on the ground of a misguided anticipation of its user's needs and desires, who, if anyone, is responsible for the tragedy?

While it is true that machine learning enables artificial intelligence to learn on its own and accomplish wonders, it remains, after all, a machine of design and engineering. If a crime is commanded directly by a culprit, it makes no difference whether the instrument of choice is a voice assistant or a knife at hand. If, however, a machine acts on a misguided anticipation without being directly commanded, then, as with any flawed or malfunctioning equipment, its designers are at least partly responsible for the accident, and the exact degree of blame, as in any other accident involving flawed or malfunctioning equipment, depends on the specifics. Most would agree that drowning in a bathtub is rather different from being killed in the explosion of a leaky gas stove. The possibility of an AI apocalypse remains, but a machine with consciousness, capable of acting on its own free will, is quite a different story.