Computer with a Mind

In 1980, John Searle's Chinese room argument suggested that, in contrast to the Turing Test proposed by Alan Turing in 1950, it is the ability to understand the semantics of a language, not the mastery of its syntax, that constitutes a mind.

Human beings without doubt have the ability to understand semantics. By understanding how we acquire that understanding, we may be able to answer the question of whether a computer could ever have a mind.

Psychologists are still divided on the question of whether we are born a blank slate. There is little doubt, however, that syntax and semantics are learnt after birth: it is through the act of acquiring new knowledge, and modifying and reinforcing existing knowledge, that we gain mastery of the syntax and an understanding of the semantics of a language.

Syntax is the set of rules and principles that govern the structure of sentences in a given language. It is universal and objective in the sense that there is a clear correct and incorrect use of it; it does not depend on the person using it.

Semantics, on the other hand, is subjective. The meaning and interpretation of a word or a sentence may depend on personal experience, and on the culture, religion and society that we associate with. Consider what right and wrong mean to people from different cultures, religions or societies: what is right in one could be drastically different in another. Even for people of the same background, the semantics of some words carry a lot of ambiguity; ‘going for a drink’ could signal friendship to some while hinting at the possibility of a romantic relationship to others.

Despite the drastic differences in how we derive meanings from the same words, there is no argument that we all possess the ability to understand semantics. It would hence seem that it is not the ends that matter but the means: the process by which we associate meaning to, and derive meaning from, words and sentences, given our background.

This precise ability is what machine learning sets out to achieve for computers. Machine learning uses artificial neural networks, which mimic the biological neural networks found in animal and human brains, to give computers the capacity to break down complex input and analyse it in pieces. With this capacity a computer can study a dataset, derive meanings from it and ultimately learn, similar to how we human beings associate meaning to and derive meaning from words and sentences given our background.
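To make the idea concrete, here is a minimal sketch (a toy illustration, not any production system) of a single artificial neuron, the basic building block of a neural network, learning the logical OR function from examples using the classic perceptron rule:

```python
import random

random.seed(42)

# Training data: input pairs and target outputs for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# One artificial neuron: a weighted sum of inputs plus a bias,
# thresholded at zero.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def predict(x):
    activation = w[0] * x[0] + w[1] * x[1] + b
    return 1 if activation > 0 else 0

# Perceptron learning rule: whenever the neuron is wrong, nudge the
# weights and bias toward the correct answer.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # learned OR: [0, 1, 1, 1]
```

The neuron starts with random weights and gradually adjusts them from examples — learning a rule it was never explicitly programmed with, which is the heart of the idea described above.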

One milestone of the success of machine learning was set in March 2016, when Google’s machine-learning-powered computer, AlphaGo, beat Lee Sedol, an 18-time world champion Go player, winning 4 to 1 in a five-game match of Go.

Prior to playing against Lee Sedol, AlphaGo was trained to mimic human gameplay using a set of artificial neural networks. It learnt from 30 million moves played by top human players in 160,000 recorded historical games. Once it had acquired a certain understanding of the game, it was set to play against itself, improving its game using reinforcement learning techniques.
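The self-play idea can be sketched with a far simpler game than Go. The toy program below (an illustration only; AlphaGo’s actual system combines deep neural networks with Monte Carlo tree search) uses tabular Q-learning and self-play on a small stone-taking game, discovering the optimal strategy purely by playing against itself:

```python
import random

random.seed(1)

# A toy two-player game: a pile of stones, each turn remove 1 or 2,
# and whoever takes the last stone wins. (The winning strategy is to
# leave the opponent a pile that is a multiple of 3.)
ACTIONS = (1, 2)
Q = {}  # Q[(pile, action)] = estimated value for the player to move

def q(pile, a):
    return Q.get((pile, a), 0.0)

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile, epsilon):
    # Epsilon-greedy: mostly play the best-known move, sometimes explore.
    acts = legal(pile)
    if random.random() < epsilon:
        return random.choice(acts)
    return max(acts, key=lambda a: q(pile, a))

# Self-play with negamax-style Q-learning: the value of the position
# after my move is minus the best value available to my opponent.
ALPHA, EPSILON = 0.5, 0.2
for _ in range(5000):
    pile = 10
    while pile > 0:
        a = choose(pile, EPSILON)
        nxt = pile - a
        if nxt == 0:
            target = 1.0  # this move takes the last stone and wins
        else:
            target = -max(q(nxt, b) for b in legal(nxt))
        Q[(pile, a)] = q(pile, a) + ALPHA * (target - q(pile, a))
        pile = nxt

# After training, the greedy policy plays optimally: from a pile that
# is not a multiple of 3, take (pile % 3) stones.
for pile in (4, 5, 7, 8, 10):
    print(pile, max(legal(pile), key=lambda a: q(pile, a)))
```

Nobody tells the program the winning strategy; it emerges from thousands of games against itself, with each outcome feeding back into the value estimates — the same reinforcement learning principle, at a vastly smaller scale, that AlphaGo used to improve beyond its human training data.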

The victory of AlphaGo was astounding and its implications are far-reaching. There are about 10^170 possible positions in Go, more than the number of atoms in the observable universe, so it is not a game of brute-force calculation but a game of intuition and imagination. By beating a human champion at Go, AlphaGo proved that it possesses not only the ability to crunch numbers and perform programmed tasks at a speed and accuracy that leave humans in the dust, but also the ability to learn from observation and from repeated, reflective self-play using reinforcement learning techniques, acquiring a sense of intuition and imagination beyond mere skill. This ability to learn from experience is crucial to understanding semantics.

Not long after AlphaGo set a milestone for machine intelligence, Microsoft released Tay. It was a program designed to tweet like a millennial by learning from the millennials it interacted with on Twitter.

Within 24 hours, Tay went from ‘Humans are super cool!’ to supporting Adolf Hitler, who was responsible for the deaths of millions of people, showing hatred for Jews, calling feminism ‘a cancer’ and suggesting genocide against Mexicans — just like the group of millennials it interacted with.

Microsoft had to shut Tay down. It was, however, a successful experiment in showing how powerfully machine learning picks up whatever it is exposed to.

AlphaGo and Tay have both shown that computers with machine learning possess the ability to learn from experience and from the people they interact with. Both abilities are crucial to developing semantics: to associating meaning to and deriving meaning from words and sentences given a background. It is fair to say that today neither AlphaGo nor Tay understands the semantics that constitutes a mind; it is difficult to believe that Tay ‘meant’ it when it tweeted racist remarks and suggested genocide. With the tools to understand semantics in place, however, it may be only a matter of time before we have a computer that does understand semantics — a computer that has, as John Searle might consider, a mind.

Update on Sep 13, 2016: Mentioned and elaborated on how AlphaGo mastered Go using reinforcement learning techniques, by playing against itself and reflecting on the outcomes. The lack of this was causing confusion for Alan in the comments.



  1. Hey Kurt, I like the article, but you asked for feedback so here it is 🙂

    “… but to be able to learn by observation and by repeated and reflective self-learning, to acquire a sense of intuition and imagination beyond skills”

    I’m not sure that you can say AlphaGo does possess these properties, can you elaborate?

    • Thanks Alan. I am glad you like it 🙂

      For the reflective self-learning ability, I am referring to the reinforcement learning techniques that AlphaGo used to improve its game.

      AlphaGo was initially set to learn from 30 million moves played by top human Go players in 160,000 recorded games. Once it had acquired a certain understanding of the game, it was set to play against itself, improving its game using reinforcement learning techniques.

      Google DeepMind, the organisation behind AlphaGo, wrote a bit about the use of reinforcement learning to allow AlphaGo to play against itself and learn from the gameplay on their website.

      The details were published in a paper in Nature.

      The idea is that reinforcement learning techniques allow a computer to learn from interaction with humans or other computers, and to reflect on the outcomes of those interactions. In the case of AlphaGo, it learnt from playing Go against another instance of itself. In the case of Tay, it learnt from interacting with a predefined group of people on Twitter.

      This is what I mean by learning from observation and from repeated and reflective self-learning. By using reinforcement learning techniques to master Go, a game in which intuition and imagination are crucial, to the point of beating a world champion human player, AlphaGo demonstrated a sense of intuition and imagination.

      I am not well versed in developmental psychology, but what AlphaGo and Tay did sounds remarkably similar to how we humans learn and develop over time. This is why I believe that maybe one day machine learning will allow computers to develop an understanding of semantics.

      P.S. I just realised that I never mentioned reinforcement learning in the article at all. I am going to update it. Thanks for your question, Alan!
