Not quite the Terminator? Are you sure?


I cannot remember that far back in my life, but even as a baby I was able to understand the concept of adding and subtracting. By the age of 18 months, children can understand the concept of increasing and decreasing. Think about shape sorters: the shapes disappear one by one. Or think about building a tower with cubes: it gets taller as you add more cubes.

(http://www.dailymail.co.uk/sciencetech/article-1357480/How-babies-count-just-18-months-old.html)

We continually discover our world and try to make sense of it. The human mind tends to express its interpretation of the world using various systems and sets of symbols. What is an even more fantastic achievement is that we came to agree on a common set of symbols and systems that we can all learn, understand and teach to the next generation.

Think about how we measure time. It is believed our ancestors first perceived only that it was dark or light. Then each part of the day was divided into 12 equal sub-periods. The reader must wonder why 12. Take your thumb and place it at the root of your index finger. Count the number of segments, and repeat the same process for each finger. What number have you reached? Many readers should find interesting the debate provided by http://io9.com/5977095/why-we-should-switch-to-a-base-12-counting-system.

Representation

This oversimplification of representing and measuring time illustrates suitably how humans can process information and transform it to make sense of it. One positive outcome: Homo sapiens can understand many complex concepts and eventually communicate them. Without such a system in place, the reader would not be able to understand this blog and the author would not be able to write it.

From representing two states (true or false) to complex numbers, mathematics has become a language that the human mind uses every day to function in our modern society. For more complex situations, mathematics appears more complex, but if anybody looks more closely, these concepts are inspired by what surrounds us. Graph theory draws topologies using vertices (i.e. nodes) and edges to connect these nodes together. Look at a spider's web: it is quite similar. The Internet models a spider web around us, with a network of interconnected routers.
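The vertex-and-edge idea is simple enough to sketch in a few lines. Here is a toy network of routers represented as an adjacency list; the router names are invented for the example.

```python
# A small network modelled as a graph: each router (vertex) maps to
# the list of routers it is directly connected to (its edges).
network = {
    "router_a": ["router_b", "router_c"],
    "router_b": ["router_a", "router_d"],
    "router_c": ["router_a", "router_d"],
    "router_d": ["router_b", "router_c"],
}

def neighbours(node):
    """Return the routers directly connected to the given one."""
    return network[node]

print(neighbours("router_a"))  # ['router_b', 'router_c']
```

Like a spider's web, any node can be reached from any other by following the edges, which is exactly what routing on the Internet does.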

Well-known structures such as trees, lists, networks and queues are just examples representing more complex sets of data. In geometry, dimensions represent lines, areas and volumes. Higher dimensions have been explored in mathematical analysis. Matrices and vectors have been employed to explain how speed and physical structures work. This is not an exhaustive list of representations; the human mind has used various notations to represent the world surrounding us.

Encoding data for the future 

These representations have helped humanity to stop time, in many ways. Isaac Newton's work would not still be studied if we could not understand his formulae. The English language may have evolved, but the formulae have not. With this idea in mind, these representations can store data to be used in the future, as knowledge is passed from one generation to the next with the help of printed material. With the emergence of computer science, data storage has applied many of these notations to its needs. Not only text and pictures can be stored, but also vast amounts of data.

The basic data representation has two states: 0 or 1. It is perhaps fair to say that it is one of the most important data representations nowadays, as long strings of binary code encode the data and information we use on the World Wide Web, exchange through the Internet, and hold in the RAM of our devices and on their hard discs. From this basic representation, complex computing processes ensure that numbers, text, dates and times are stored electronically.
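To make this concrete, here is a minimal sketch of how text collapses into that two-state representation: each character becomes a number, and each number becomes a string of 0s and 1s.

```python
def to_binary(text):
    """Encode a string as 8-bit binary, one byte per character."""
    return " ".join(format(byte, "08b") for byte in text.encode("ascii"))

def from_binary(bits):
    """Decode space-separated 8-bit groups back into text."""
    return bytes(int(group, 2) for group in bits.split()).decode("ascii")

encoded = to_binary("Hi")
print(encoded)               # 01001000 01101001
print(from_binary(encoded))  # Hi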

Is there some intelligence in data?

Without a process that extracts information, these data are as good as unread books. The data and information lie dormant without any use. Would it not be better if the computer could try to understand or perceive the essence of this data?

For a computer to make sense of the data, it needs to recognise a pattern and then design a formula that could replicate that pattern. It is a bit like primary school pupils drawing a straight line between points to discover how the data is likely to progress. They will soon appreciate visually how much faster the curve of a quadratic grows compared with a linear function.
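The "straight line between points" idea can be sketched as a least-squares fit, here in plain Python with no libraries. The data points are invented for the example.

```python
def fit_line(xs, ys):
    """Fit y = a*x + b to the points by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]      # these points lie exactly on y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)               # 2.0 1.0
```

Once the line is found, the computer can extrapolate it forward, just as the pupils do with a ruler.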

When the data is more complex, identifying patterns can be trickier, especially if the pattern is unknown. A process of trial and error requires writing an expression and then testing how well it fits the data. The expression is then refined until the best possible formula is found. Mathematics offers some equivalent formulae to help in this process, which is useful in certain situations. When the data have been collected through observation, however, these equivalent formulae may not be useful, and the long process of finding a suitable formula needs to be completed manually.
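That trial-and-error loop can be sketched very simply: propose a few candidate formulae, score each against the observed data, and keep the best one. The candidates and data here are made up for illustration.

```python
# Candidate formulae to test against the observations.
candidates = {
    "linear":    lambda x: 2 * x,
    "quadratic": lambda x: x * x,
    "cubic":     lambda x: x ** 3,
}

# Observed data, which secretly follows y = x^2.
data = [(1, 1), (2, 4), (3, 9), (4, 16)]

def error(f):
    """Sum of squared differences between f(x) and the observed y."""
    return sum((f(x) - y) ** 2 for x, y in data)

best = min(candidates, key=lambda name: error(candidates[name]))
print(best)  # quadratic
```

Refinement, in this picture, just means generating better candidates and scoring them again.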

This process can take a lot of time. Would it not be better if a computer could apply this process itself, a technique known as symbolic regression? The computer could probably attempt more possible formulae and identify one that the human brain would have rejected. The computer has fewer limitations than the human mind, and the formulae it finds may not be readable or understood by a human reader: the expressions may become very large and the mathematical operations unusual. The chosen representation can dramatically affect the outcome. For example, trees can grow very large as new branches and leaves are added. The results may be more accurate than those obtained by humans, and found more quickly, though "quickly" may still mean hours and hours of computer processing.

Attempting every possible combination of operations would take a lot of time and becomes infeasible very quickly. If speed is preferred over accuracy, then random processes can explore a greater number of expressions, but the use of probabilities cannot guarantee finding the same expression twice, or finding a very accurate one every time. Nonetheless, this state of affairs can produce interesting results for many complex problems. Thinking about it, this random process is not too far from our own way of thinking. Our brain relates ideas in a huge spider web, and some of those connections can be very illogical.
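As a toy illustration of such a random search over expressions, in the spirit of symbolic regression: expressions are small trees built from +, * and the variable x, and the search simply keeps the best-scoring tree it stumbles upon. This is a sketch under invented settings (operator set, depth, target data), not a real genetic-programming system.

```python
import random

random.seed(42)  # fixed seed so the search is repeatable

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_expr(depth=2):
    """Build a random expression tree of limited depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1, 2])   # a leaf: variable or constant
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return expr  # a constant leaf

# Target data chosen for the example: y = x*x + 1.
data = [(x, x * x + 1) for x in range(-3, 4)]

def error(expr):
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data)

best = min((random_expr() for _ in range(5000)), key=error)
print(best, error(best))
```

As the text warns, two runs with different seeds may land on different trees, and nothing guarantees the winner is readable, which is exactly the trade-off between speed and accuracy described above.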

Can computers be intelligent?

It is very hard to answer whether or not a computer could become intelligent. Before answering this question, it is worth asking ourselves what is meant by intelligence and adopting a definition that can be applied to a computer system. Symbolic regression extracts a mathematical expression from a certain set of data. If the learning process is implemented correctly, then it becomes independent of the data set; human intervention is still needed to run the process, though. So it is arguable that the new system has become "less stupid", but it has yet to become conscious. It is not formal evidence of computers interacting with each other and designing their own language, understood by other computers, to form social relationships. Nonetheless, it is a step in the right direction: a computer program can now program another program. http://www.bbc.co.uk/news/technology-34224406 explains the purpose of artificial intelligence and why computers have yet to become human.

https://www.youtube.com/watch?v=C-h3LtYaYeU

It is true this video shows a computer achieving human-competitive results. With time these computers will progress into much better machines. Very small robots can now communicate with each other and build others. To make matters worse, some politicians would like to let robots play god. Campaigners are very active in trying to stop this from happening (see http://www.stopkillerrobots.org).

The concept of intelligence may be different for robots than for humans. Humans may be able to make computers and robots that interact with each other, invent new concepts and learn new skills independently. It is perhaps time to adopt those three laws…

