Computer with a Mind

Posted on September 12, 2016 — 5 Minutes Read

The Chinese room argument, put forward by John Searle in 1980, suggested that, in contrast to the Turing Test proposed by Alan Turing in 1950, it is the ability to understand the semantics of a language, not the mastery of its syntax, that constitutes a mind. Whether such a notion even permits conceptualisation, and if so what precisely constitutes a mind beyond the ability to understand semantics, remain matters of philosophical debate. Yet if a grasp of semantics is a crucial component of a mind, then understanding how we acquire and evolve our own grasp of semantics may shed light on the question of whether a computer has a mind, and if not, whether it could ever have one.

Psychologists are still divided on the question of whether we are born a blank slate. There is little doubt, however, that syntax and semantics are for the most part learnt after birth: it is through acquiring new knowledge, or modifying and reinforcing existing knowledge, that we gain mastery of the syntax and an understanding of the semantics of a language. Being the set of rules and principles that govern the structure of sentences in a given language, syntax is universal and objective, in the sense that there is a clearly correct and incorrect use of it; it does not depend on the person who uses it. Semantics, on the other hand, is rather subjective, in the sense that the meaning and interpretation of a word or a sentence may depend on personal experience, and on the culture, religion and society that we associate with. What right or wrong means can differ drastically from one culture, religion or society to another. Even for people of the same background, the semantics of some words carry a great deal of ambiguity: an invitation to share a drink or a meal could be a show of friendship to some, while hinting at the possibility of a romantic relationship to others.

Despite the drastic differences in how we derive meaning from the same words, there is no argument that we all possess the ability to understand semantics. It would hence seem that it is not the end that matters but the means, that is, the process by which we associate meaning to, and derive meaning from, words and sentences, given our background. This precise ability is what machine learning sets out to achieve for computers. Machine learning uses carefully designed models and algorithms, such as artificial neural networks, which mimic the biological neural networks commonly found in animals and humans, to give computers the capacity to break down complex input and analyse it in pieces. With this capacity, a computer is able to study, derive meaning and ultimately learn patterns from a sizable and coherent dataset, similar to how we human beings associate meaning to, and derive patterns from, words and sentences given a context and background.
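To make the idea a little more concrete, here is a minimal sketch, in plain Python with NumPy, of a tiny artificial neural network learning a pattern from examples. The pattern is the XOR function, chosen only because no single linear rule captures it; the network size, learning rate and number of steps are arbitrary choices for illustration, not a recipe that any production system follows.

```python
# A toy two-layer network learning the XOR pattern from four examples.
# Illustration only: real systems are vastly larger and trained on far
# richer data, but the principle -- adjust the weights a little at a
# time to reduce the error -- is the same.
import numpy as np

rng = np.random.default_rng(0)

# Four input pairs and their XOR labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of four units, weights initialised at random
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(5000):
    # Forward pass: break the input down layer by layer
    hidden = np.tanh(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error
    grad_output = output - y                      # cross-entropy gradient
    grad_hidden = (grad_output @ W2.T) * (1 - hidden ** 2)

    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

print(np.round(output, 3))  # approaches [0, 1, 1, 0] after training
```

The network never encodes the XOR rule explicitly; it only adjusts its weights until its outputs match the examples, which is the sense in which machine learning learns patterns from data rather than being programmed with them.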

One milestone in the success of machine learning was set on March 15, 2016, when Google’s machine learning-powered computer, AlphaGo, beat Lee Sedol, an 18-time world champion Go player, winning 4 to 1 in a five-game match. Prior to playing against Lee Sedol, AlphaGo was trained to mimic human gameplay using a set of artificial neural networks. It learnt from 30 million moves played by top human players in 160,000 recorded historical games. Once it had acquired a certain understanding of the game, it was set to play against itself, improving its game through reinforcement learning techniques. The victory of AlphaGo was astounding and its implications are far-reaching. With about 10^170 possible positions, more than the number of atoms in the observable universe, Go is not a game of well-defined calculation but a game of intuition and imagination. By beating a human champion at Go, AlphaGo proved to possess not only the ability to crunch numbers and perform programmed tasks at a speed and accuracy that leave humans in the dust, but also the ability to learn from observation and from repeated, reflective self-play, harnessing the power of artificial neural networks and reinforcement learning to acquire something resembling intuition and imagination beyond mere skill. This ability to learn from experience is foundational to grasping semantics.
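For a sense of how that two-stage recipe fits together, here is a hedged sketch, again in Python with NumPy, of a policy first trained to imitate “recorded” moves and then improved through self-play with a simple policy-gradient update. The game is a tiny take-away game rather than Go, and the “policy network” is just a table of logits; the game, the expert moves and every constant below are assumptions made to keep the example short and runnable, not a description of how AlphaGo was actually built.

```python
# Stage 1 imitates recorded moves; stage 2 improves by self-play.
# Toy game: 7 sticks on the table, each player takes 1-3 per turn,
# whoever takes the last stick wins. A tabular softmax policy stands
# in for AlphaGo's deep networks.
import numpy as np

rng = np.random.default_rng(1)
MAX_STICKS, MAX_TAKE = 7, 3

# One row of logits per possible pile size; columns = take 1, 2 or 3
logits = np.zeros((MAX_STICKS + 1, MAX_TAKE))

def policy(sticks):
    """Probability over the legal moves for the current pile size."""
    legal = min(sticks, MAX_TAKE)
    z = logits[sticks, :legal]
    p = np.exp(z - z.max())
    return p / p.sum()

def choose(sticks):
    p = policy(sticks)
    return rng.choice(len(p), p=p) + 1          # number of sticks to take

# Stage 1: supervised imitation of "recorded" expert moves.
# The winning move leaves a multiple of four behind; pretend these
# state-move pairs came from historical games.
expert_moves = [(5, 1), (6, 2), (7, 3)]
for _ in range(500):
    for sticks, take in expert_moves:
        p = policy(sticks)
        grad = -p
        grad[take - 1] += 1.0                   # gradient of log-likelihood
        logits[sticks, :len(p)] += 0.1 * grad

# Stage 2: self-play with a REINFORCE-style policy-gradient update.
for game in range(5000):
    sticks, history, player = MAX_STICKS, [], 0
    while sticks > 0:
        take = choose(sticks)
        history.append((player, sticks, take))
        sticks -= take
        player = 1 - player
    winner = history[-1][0]                     # whoever took the last stick
    for player, state, take in history:
        reward = 1.0 if player == winner else -1.0
        p = policy(state)
        grad = -p
        grad[take - 1] += 1.0
        logits[state, :len(p)] += 0.05 * reward * grad

# After training the policy should strongly favour the winning moves.
for sticks in range(1, MAX_STICKS + 1):
    print(sticks, np.round(policy(sticks), 2))
```

In broad strokes this mirrors the structure described above, a policy bootstrapped from human play and then sharpened by playing against itself and reinforcing the moves that led to wins, though AlphaGo itself combined far deeper networks with additional components such as tree search.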

However bright a future machine learning might hold, not long after AlphaGo set a milestone for machine intelligence, Microsoft released Tay, a program designed to tweet like a millennial by learning from the millennials it interacted with on Twitter. Within 24 hours, Tay went from ‘Humans are super cool!’ to supporting Adolf Hitler, who was responsible for the death of millions during the Second World War, showing hatred for Jews, calling feminism ‘a cancer’ and suggesting genocide against Mexicans, mimicking the group of millennials it interacted with. Microsoft eventually had to shut Tay down. It was nonetheless a successful experiment in showing how powerful machine learning is at learning from whatever it is set to learn from, and how unfortunate its downfalls can be.

AlphaGo and Tay have both shown that computers with machine learning possess the ability to learn from experience and from the people they interact with. Both abilities are crucial to developing semantics, to associating meaning to, and deriving patterns from, words and sentences given a context and background. It is nonetheless fair to say that neither AlphaGo nor Tay understood semantics in the sense that constitutes a mind. It is difficult to believe that Tay ‘meant’ it when it tweeted racist remarks and suggested genocide, or that AlphaGo understood what it meant to beat a human world champion at one of the most intriguing games known to humankind. With the proper tools to understand semantics in place though, given time and technological advancement, it may simply be a matter of time before we have a computer that does understand semantics, one that has, as John Searle might consider, a mind.