How will artificial intelligence affect our lives in the next ten years?

The primary focus of this essay is the future of Artificial Intelligence (AI). To better understand how AI is likely to grow, I intend first to explore its history and current state. By showing how its role in our lives has changed and expanded so far, I will be better placed to predict its future trends.

John McCarthy first coined the term artificial intelligence in 1956 at Dartmouth College. At that time electronic computers, the obvious platform for such a technology, were still less than thirty years old, the size of lecture halls, and equipped with storage and processing systems too slow to do the concept justice. It wasn't until the digital boom of the '80s and '90s that the hardware on which the systems would be built began to gain ground on the ambitions of the AI theorists, and the field really started to pick up. If artificial intelligence can match in the decade to come the advances made in the last one, it is set to become as common a part of our daily lives as computers have become in our lifetimes.

Artificial intelligence has had many different descriptions put to it since its birth, and the most important shift it has made in its history so far is in how it has defined its aims. When AI was young, its aims were limited to replicating the function of the human mind. As the research developed, new intelligent things to replicate, such as insects or genetic material, became apparent. The limitations of the field were also becoming clear, and out of this AI as we understand it today emerged.

The first AI systems followed a purely symbolic approach: classic AI built intelligences on a set of symbols and rules for manipulating them. One of the main problems with such a system is that of symbol grounding. If every piece of knowledge in a system is represented by a set of symbols, and a particular set of symbols ("dog", for example) has a definition made up of another set of symbols ("canine mammal"), then that definition needs a definition ("mammal: creature with four limbs and a constant internal temperature"), and this definition needs a definition, and so on. When does this symbolically represented knowledge get described in a manner that doesn't need further definition to be complete?
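The regress can be made concrete with a toy sketch; the knowledge base below is invented purely for illustration, not taken from any real system:

```python
# Toy symbolic knowledge base: every term is defined only in terms of
# other symbols, so expansion never reaches anything self-explanatory.
definitions = {
    "dog": ["canine", "mammal"],
    "canine": ["doglike", "mammal"],
    "mammal": ["creature", "four-limbed", "warm-blooded"],
    "creature": ["living", "thing"],
    # "living", "thing", "warm-blooded"... would each need entries too.
}

def expand(term, seen=None):
    """Expand a term into its defining symbols. The recursion stops only
    where the knowledge base runs out of definitions, never because a
    symbol has been grounded in anything outside the symbol system."""
    seen = set() if seen is None else seen
    if term in seen or term not in definitions:
        return {term}  # an undefined or already-visited symbol: a loose end
    seen.add(term)
    parts = set()
    for part in definitions[term]:
        parts |= expand(part, seen)
    return parts

print(expand("dog"))  # the expansion bottoms out in ungrounded loose ends
```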
These symbols need to be defined outside of the symbolic world to avoid an endless recursion of definitions. The human mind does this by linking symbols with stimulation: when we think of a dog we don't think "canine mammal", we remember what a dog looks like, smells like, feels like and so on. This is known as sensorimotor categorisation. By allowing an AI system access to senses beyond a typed message, it could ground its knowledge in sensory input in the same manner we do.

That's not to say that classic AI was a completely flawed strategy, as it turned out to be successful for many of its applications. Chess-playing algorithms can beat grandmasters, expert systems can diagnose diseases with greater accuracy than doctors in controlled situations, and guidance systems can fly planes better than pilots. This model of AI developed at a time when the understanding of the brain wasn't as complete as it is today. Early AI theorists believed that the classic approach could achieve the goals set out for AI because computational theory supported it: computation is largely based on symbol manipulation, and according to the Church/Turing thesis, computation can potentially simulate anything symbolically. However, classic AI's methods don't scale up well to more complex tasks.

Turing also proposed a test to judge the worth of an artificially intelligent system, known as the Turing test. In the Turing test, two rooms with terminals capable of communicating with each other are set up. The person judging the test sits in one room; in the second room there is either another person or an AI system designed to emulate a person. The judge communicates with the person or system in the second room, and if he eventually cannot distinguish between the person and the system, then the test has been passed. However, this test isn't broad enough (or is too broad…) to be applied to modern AI systems.
The philosopher Searle made the Chinese room argument in 1980, stating that if a computer system passed the Turing test for speaking and understanding Chinese, this wouldn't necessarily mean that it understands Chinese, because Searle himself could execute the same program and thus give the impression that he understands Chinese while not actually understanding the language, merely manipulating symbols in a system. If he could give the impression of understanding Chinese while not actually understanding a single word, then the true test of intelligence must go beyond what the Turing test lays out.

Today artificial intelligence is already a major part of our lives. For example, there are several separate AI-based systems just in Microsoft Word. The little paper clip that advises us on how to use office tools is built on a Bayesian belief network, and the red and green squiggles that tell us when we've misspelled a word or poorly phrased a sentence grew out of research into natural language. However, you could argue that this hasn't made a positive difference to our lives; such tools have just replaced good spelling and grammar with a labour-saving device that produces the same outcome. For example, I compulsively misspell the word 'successfully', along with a number of other words with multiple double letters, every time I type them. This doesn't matter, of course, because the software I use automatically corrects my work for me, taking the pressure off me to improve. The end result is that these tools have damaged rather than improved my written English skills.

Speech recognition is another product of natural language research, and it has had a much more dramatic effect on people's lives. The progress made in the accuracy of speech recognition software has allowed a friend of mine with an incredible mind, who two years ago lost her sight and limbs to septicaemia, to go to Cambridge University. Speech recognition had a very poor start, as its success rate was too low to be useful unless you had perfect and predictable spoken English, but it has now progressed to the point where on-the-fly language translation is possible. One system in development now is a telephone system with real-time English-to-Japanese translation. These AI systems are successful because they don't try to emulate the entire human mind the way a system that might undergo the Turing test does; they instead emulate very specific parts of our intelligence.
Microsoft Word's grammar system emulates the part of our intelligence that judges the grammatical correctness of a sentence. It doesn't know the meaning of the words, as this is not necessary to make a judgement. The voice recognition system emulates another distinct subset of our intelligence, the ability to deduce the symbolic meaning of speech. And the on-the-fly translator extends voice recognition systems with voice synthesis. This shows that the more precisely the function of an artificially intelligent system is defined, the more accurate it can be in its operation.
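The point that spell checking needs no grasp of meaning can be sketched in a few lines: all a corrector requires is a word list and a string-similarity measure. The word list below is a stand-in, and Python's `difflib` is just one convenient measure, not the mechanism Word itself uses:

```python
# A minimal sketch of meaning-free spell correction: the program compares
# character strings against a word list; it has no idea what any word means.
import difflib

WORD_LIST = ["successfully", "necessary", "accommodate", "occurrence"]

def suggest(word):
    # Return the closest dictionary word by string similarity, if any.
    matches = difflib.get_close_matches(word, WORD_LIST, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(suggest("succesfully"))  # -> successfully
```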

Artificial intelligence has now reached the point where it can provide invaluable assistance in speeding up tasks still performed by people, such as the rule-based AI systems used in accounting and tax software; enhance automated tasks, such as search algorithms; and enhance mechanical systems, such as braking and fuel injection in a car. Curiously, the most successful examples of artificially intelligent systems are those that are almost invisible to the people using them. Very few people thank AI for saving their lives when they narrowly avoid crashing their car thanks to the computer-controlled braking system.

One of the main issues in modern AI is how to simulate the common sense people pick up in their early years. A project currently underway, started in 1990 and called the CYC project, aims to provide a common sense database that AI systems can query to allow them to make more human sense of the data they hold. Search engines such as Google are already starting to make use of the information compiled in this project to improve their service. For example, consider the words 'mouse' or 'string': a mouse could be either a computer input device or a rodent, and string could mean an array of ASCII characters or a length of string. In the sort of search facilities we're used to, typing in either of these words presents you with a list of links to every document that contains the specified search term. An artificially intelligent system with access to the CYC common sense database, given the word 'mouse', could instead ask you whether you mean the electronic or the furry variety, and then filter out any search result that uses the word outside the desired context. Such a common sense database would also be invaluable in helping an AI pass the Turing test.
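As a sketch of how that filtering might work, assuming a common-sense lookup of the kind CYC aims to provide: the sense table and documents below are invented for illustration, and a real system would draw its context words from the database rather than a hand-written dictionary.

```python
# Hypothetical word-sense filter: keep only results whose surrounding
# words fit the sense the user picked. SENSES stands in for a CYC-style
# common sense database.
SENSES = {
    "mouse": {
        "electronic": {"usb", "click", "cursor", "scroll"},
        "furry": {"cheese", "tail", "fur", "pest"},
    },
}

def filter_by_sense(term, sense, documents):
    # A document survives if it shares at least one word with the
    # context vocabulary of the chosen sense.
    context = SENSES[term][sense]
    return [doc for doc in documents
            if context & set(doc.lower().split())]

docs = ["USB mouse with a scroll wheel",
        "Caring for a pet mouse: fur and tail health"]
print(filter_by_sense("mouse", "furry", docs))
```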

So far I have only discussed artificial systems that interact with a very closed world. A search engine always gets its search terms as a list of characters, grammatical parsers only have to deal with strings of characters that form sentences in one language, and voice recognition systems customise themselves for the voice and language their user speaks in. This is because, for current artificial intelligence methods to be successful, the function and the environment have to be carefully defined. In the future, AI systems will need to be able to operate without knowing their environment first.

For example, you can now use Google to search for pictures by inputting text. Imagine if you could search for anything using any means of description: you could instead give Google a picture of a cat, and it could recognise that it has been given a picture, try to assess what it is a picture of, isolate the focus of the picture, recognise that it is a cat, consult what it knows about cats, and recognise that it is a Persian cat. It could then separate the search results into categories relevant to Persian cats, such as grooming, where to buy them, pictures and so on. This is just an example, and I don't know whether any research is currently being done in this direction. What I am trying to emphasise is that the future of AI lies in merging existing techniques and methods of representing knowledge in order to make use of the strengths of each idea. The example I gave would require image analysis in order to recognise the cat, intelligent data classification in order to choose the right categories to subdivide the search results into, and a strong element of common sense such as that offered by the CYC database. It would also have to deal with data from many separate databases with different methods of representing the knowledge they contain. By 'representing the knowledge' I mean the data structure used to map the knowledge.
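The staged pipeline in that example can be sketched with stubs. Every function name here is invented, and each body is a placeholder standing in for a whole subsystem (image analysis, classification, common-sense lookup):

```python
# Hypothetical pipeline for the picture-of-a-cat search described above.
# Each stub returns a canned answer where a real subsystem would work.

def isolate_subject(image):
    """Stand-in for image analysis: find the focus of the picture."""
    return "cat"

def classify(subject):
    """Stand-in for intelligent data classification."""
    return "Persian cat"

def relevant_categories(concept):
    """Stand-in for a CYC-style common sense query about what people
    usually want to know about this concept."""
    return ["grooming", "where to buy", "pictures"]

def picture_search(image):
    concept = classify(isolate_subject(image))
    return {category: f"results about {concept}: {category}"
            for category in relevant_categories(concept)}

print(picture_search("cat.jpg"))
```

The point of the sketch is the shape, not the stubs: each stage hands a richer description to the next, and the common-sense stage decides how to organise the final results.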
Each method of representing knowledge has different strengths and weaknesses for different applications. Logical mapping is an ideal choice for applications such as expert systems that assist doctors or accountants, where there is a clearly defined set of rules, but it is often too inflexible in areas such as the robotic navigation performed by the Mars Pathfinder probe. For that application a neural network might be more suitable, as it could be trained across a range of terrains before landing on Mars. However, for other applications such as voice recognition or on-the-fly language translation, neural networks would be too inflexible, as they require all the knowledge they contain to be broken down into numbers and sums. Other methods of representing knowledge include semantic networks, formal logic, statistics, qualitative reasoning and fuzzy logic, to name a few. Any one of these methods might be more suitable for a particular AI application, depending on how precise the effects of the system have to be, how much is already known about the operating environment, and the range of different inputs the system is likely to have to deal with.
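Logical mapping of the kind used in expert systems can be illustrated with a toy rule base. The rules below are invented examples (not medical advice), but they show both the strength and the weakness: behaviour is transparent and predictable, yet nothing fires for inputs the rules don't anticipate.

```python
# Toy rule-based (logical) knowledge mapping: a conclusion fires only when
# every condition in its rule is present among the known facts.
RULES = [
    ({"fever", "cough"}, "suspect flu"),
    ({"fever", "rash"}, "suspect measles"),
]

def conclusions(facts):
    # Subset test: all of a rule's conditions must appear in the facts.
    return [result for conditions, result in RULES if conditions <= facts]

print(conclusions({"fever", "cough", "fatigue"}))  # -> ['suspect flu']
print(conclusions({"dizziness"}))                  # -> [] (no rule matches)
```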

In recent times there has also been a marked increase in investment in AI research, because business is realising the time- and labour-saving potential of these tools. AI can make existing applications easier to use, more intuitive to user behaviour and more aware of changes in the environment they run in. In the early days of AI research the field failed to meet its goals as quickly as investors believed it would, and this led to a slump in new capital. However, it is beyond doubt that AI has more than paid back its thirty years of investment in saved labour hours and more efficient software. AI is now a top investment priority, with backers from the military, commercial and government worlds. The Pentagon has recently invested $29m in an AI-based system to assist officers in the same way a personal assistant would.

Since AI's birth in the fifties it has expanded out of maths and physics into evolutionary biology, psychology and cognitive studies, in the hope of gaining a more complete understanding of what makes a system, whether organic or electronic, intelligent. AI has already made a big difference to our lives in leisure pursuits, communications, transportation, the sciences and space exploration. It can be used as a tool to make more efficient use of our time in designing complex things such as microprocessors, or even other AIs. In the near future it is set to become as big a part of our lives as computers and automobiles did before it, and it may well begin to replace people in the same way the automation of steel mills did in the '60s and '70s. Many of its applications sound incredible: robot toys that help children to learn, intelligent pill boxes that nag you when you forget to take your medication, alarm clocks that learn your sleeping habits, or personal assistants that can constantly learn via the internet. However, many of its applications sound like they could lead to something terrible. The Pentagon is one of the largest investors in artificial intelligence research worldwide, and there is currently well-advanced research into AI soldier robots that look like small tanks and assess their targets automatically, without human intervention. Such a device could also be re-applied as cheap domestic policing. Fortunately the dark future of AI is still a Hollywood fantasy, and the most we need to worry about for the near future is being beaten at chess by a children's toy.

 
