A Moral Machine

Posted on April 4, 2016 — 6 Minutes Read

The Chinese board game of Go is one of unprecedented complexity. The game originated in China more than 2,500 years ago, yet its rules are simple. Players take turns placing black or white stones on a board, trying to capture territory by surrounding empty space or the opponent’s stones, which, once captured, are removed from the board to create empty space. The player with the most territory wins. Despite the simple rules, on a 19 × 19 board there are about 10^170 possible positions, more than the number of atoms in the observable universe. Needless to say, the level of complexity in Go is beyond the comprehension of the human mind, which is what makes it a fascinating game. It is not a game of well-defined calculation. Rather, it is a game of intuition and imagination.
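
The arithmetic behind that figure is worth a quick sketch. Each of the 361 intersections can be empty, black or white, giving 3^361 possible arrangements, of which only around one percent turn out to be legal positions, which is where the oft-quoted 10^170 comes from. A back-of-the-envelope check in Python (the 10^80 atom count is the commonly cited estimate):

    # Upper bound on Go positions: each of the 19 x 19 = 361 intersections
    # can be empty, black or white.
    from math import log10

    arrangements = 3 ** 361
    print(f"3^361 ~ 10^{log10(arrangements):.0f}")  # ~ 10^172

    # Roughly one percent of these arrangements are legal positions,
    # hence the oft-quoted figure of about 10^170.

    # Even the number of atoms in the observable universe, commonly
    # estimated at around 10^80, falls short by some 90 orders of magnitude.
    atoms = 10 ** 80
    print(f"ratio ~ 10^{log10(arrangements // atoms):.0f}")  # ~ 10^92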

These elements of intuition and imagination make Go a difficult game for machines to master. Many thought that it would be another 10 years before a machine could match a human player in Go, and that it would be a milestone in machine intelligence when that happened, because it would mean that machines had acquired a sense of intuition and imagination, qualities thought to be uniquely human. That milestone was set on March 16, 2016, when Google’s machine learning-powered artificial intelligence, AlphaGo, beat Lee Sedol, an 18-time world champion Go player, winning 4 to 1 in a 5-game match. This breakthrough was made possible by advances in machine learning using artificial neural networks. These networks mimic the biological neural networks found commonly in animals and humans, and give machines the capacity to break down complex input and analyse it in pieces. With this capacity, given a dataset, a machine is able to study, learn and improve.
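
The principle can be sketched in a few lines. The toy below, in Python with numpy, is a minimal example of my own and nothing resembling AlphaGo’s deep networks: a two-layer network learns XOR, a function no single layer of weighted sums can represent, with a hidden layer that breaks the input into pieces and training that adjusts the weights from data:

    # A toy artificial neural network: layers of weighted sums and
    # non-linearities trained by gradient descent. A minimal sketch of the
    # principle only; AlphaGo's deep networks are vastly larger.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # input patterns
    y = np.array([[0], [1], [1], [0]], float)  # XOR, not linearly separable

    W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1)        # hidden layer breaks the input into features
        out = sigmoid(h @ W2)      # output layer recombines the pieces
        g_out = (out - y) * out * (1 - out)
        g_h = g_out @ W2.T * h * (1 - h)
        W2 -= 0.5 * h.T @ g_out    # backpropagation: each weight is nudged
        W1 -= 0.5 * X.T @ g_h      # in proportion to its share of the error

    print(out.round(2))            # converges towards [0, 1, 1, 0]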

Google’s AlphaGo was first trained to mimic human gameplay using a set of artificial neural networks. It learnt from 30 million moves played by top human players in 160,000 recorded historical games. Once it had acquired a certain understanding of the game, it was set to play against itself and improved its game using reinforcement learning techniques. The success of machine intelligence built on artificial neural networks is astounding, and the implication is far-reaching. No longer is a machine merely able to crunch numbers and perform programmed tasks at a speed and accuracy that leave humans in the dust; it is now able to learn from observation, and from repeated and reflective self-learning, to acquire a sense of intuition beyond skill. Machines with this capacity can potentially flourish in areas where it was once thought only humans could excel, such as creating exquisite works of art, competing in sport, and pursuing scientific inquiry and discovery. It may take years for a machine to come close to the level of human expertise, but it is no longer an impossibility.
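
The shape of that two-stage pipeline, imitation first and then improvement through self-play, can be shown in miniature. The sketch below is a hypothetical one-move game of my own devising, not AlphaGo’s actual method at any level of fidelity: a softmax policy is first nudged towards an ‘expert’ preference, then reward from simulated self-play overrides it:

    # The two training stages in miniature, on a trivial one-move "game";
    # nothing like AlphaGo's scale, but the structure, imitation first and
    # then self-play reinforcement, is the same.
    import math
    import random

    ACTIONS = ["a", "b", "c"]
    prefs = {a: 0.0 for a in ACTIONS}   # a stand-in for the policy network

    def policy():
        exps = {a: math.exp(p) for a, p in prefs.items()}   # softmax
        total = sum(exps.values())
        return {a: e / total for a, e in exps.items()}

    # Stage 1: supervised learning from "expert" records (the expert favours b).
    for move in ["b"] * 80 + ["a"] * 10 + ["c"] * 10:
        probs = policy()
        for a in ACTIONS:               # cross-entropy gradient step
            prefs[a] += 0.1 * ((a == move) - probs[a])

    # Stage 2: reinforcement learning; suppose self-play reveals c wins more.
    def self_play_reward(action):
        return 1.0 if action == "c" and random.random() < 0.9 else -1.0

    for _ in range(2000):
        probs = policy()
        action = random.choices(ACTIONS, [probs[a] for a in ACTIONS])[0]
        reward = self_play_reward(action)
        for a in ACTIONS:               # REINFORCE: reinforce what won
            prefs[a] += 0.05 * reward * ((a == action) - probs[a])

    print(policy())                     # probability mass shifts from b to c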

With advances in what machines can do, the question now is what machines ought to do. A number of scientists, machine intelligence experts and others at the forefront of their fields have openly called for, and invested resources in, understanding and minimising the existential risk that machine intelligence poses to the human race. Machine intelligence has been shown to be brilliant at attaining prescribed goals. It lacks, however, a sense of judgement of whether a particular goal is of moral concern, and the human reasoning behind such judgement, despite years of inquiry and research, remains a puzzle still. In an attempt to unravel this puzzle, Jonathan Haidt, a prominent social psychologist, proposed the moral foundations theory, which consists of 6 fundamental elements (care, fairness, loyalty, authority, sanctity and liberty) that are found to recur across cultures. It is intended to provide a modular framework for understanding morality. One key obstacle nonetheless is the fact that although we have strong moral reactions, oftentimes we are unable to rationally explain the principles behind them. This is termed ‘moral dumbfounding’, a feeling that something is wrong without clear reasons as to why. If we do not understand how we form moral judgements, will we ever be able to construct a machine that does?

Not long after AlphaGo set a milestone for machine intelligence, Microsoft released Tay, a program designed to tweet like a millennial by learning from the people it interacted with on Twitter. Within 24 hours, Tay went from ‘Humans are super cool!’ to voicing support for Adolf Hitler, who was responsible for the death of millions of people, showing hatred for Jews, calling feminism ‘a cancer’ and suggesting genocide against Mexicans, mimicking the millennials it interacted with. Microsoft eventually had to shut Tay down. It was nonetheless a successful experiment in showing how powerful machine learning is at learning from whatever it is exposed to, and how unfortunate its downfall can be. From birth, we learn by mimicking the people around us, and machines are without doubt orders of magnitude faster than humans at doing so and at sharing what they have learnt. Yet most of us, with an intrinsic capacity for moral reasoning, soon learn to pass judgement on what we take in and learn what not to learn. We may be confused at times. There were times when we viewed people of a different gender or of a different skin colour as inferior or unworthy of respect, and there are still countless immoral acts happening every day. All of these reveal the imperfection of our moral reasoning, but eventually we as a whole come to our senses and right the wrongs. Machines may have acquired a sense of intuition through artificial neural networks and reflective self-learning; they still lack moral reasoning, the unique quality that allows us to think critically and reflectively, and to make moral judgements in spite of social norms and the beliefs of the masses. Machine learning was made possible because of our understanding of how we learn. Artificial neural networks were made possible because of our understanding of biological neural networks. If we still have difficulty understanding moral reasoning today, it may take some time before we have a machine with a sense of morality.
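
As a footnote to the Tay episode: mimicry without judgement takes remarkably little code to reproduce. The sketch below is a toy of my own and in no way a description of how Tay was actually built; a bot that learns by keeping whatever it is told, with no judgement on the input, degrades exactly as its inputs do:

    # Mimicry without judgement, in miniature. The bot adds whatever it is
    # told to its corpus and replies by sampling from it, so its output is
    # only ever as good as its input.
    import random

    corpus = ["Humans are super cool!"]           # a friendly start

    def learn(message):
        corpus.append(message)                    # no filter, no judgement

    def reply():
        return random.choice(corpus)              # echo what it has heard

    for msg in ["You are great!", "<something hateful>", "<something hateful>"]:
        learn(msg)

    print([reply() for _ in range(5)])            # the hate crowds out the cheer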