by Ivor Hartmann

From Issue 11 (July 2011)

IN TODAY’S SOCIETY, Artificial Intelligences are nearly all-pervasive. The odds that you personally interact daily with some form of AI, be it a call centre program, an automatic car transmission, a video game, a Google search, an email spam filter, or a computer of any type, are pretty high. However, the realisation of a true AI, one matching and exceeding human intelligence and characteristics such as emotion, creativity, and social intelligence, would seem to be as far away as we are from living on another planet.

Although the dream of true AI has long been a staple of science fiction, the idea itself predates SF considerably. In the third century BC Apollonius Rhodius wrote the Argonautica, one of whose characters, Talos, is a giant bronze artificial man created by Hephaestus and the Cyclopes and given to Minos; and even then, the myth of Talos long preceded Rhodius’s epic poem. So the quest to create an artificial intelligence, in literature and in alchemy, the forebear of science, has been with us for thousands of years. Yet it was only with the advent of digital computing, starting in 1948 with the Manchester Small-Scale Experimental Machine, the world’s first stored-program computer, that AI had its first real chance of becoming a reality.

From 1956 AI progressed extremely rapidly, at least compared with the preceding thousands of years. This was mainly due to the two-month Dartmouth Summer Research Project on Artificial Intelligence. This landmark gathering was initiated by John McCarthy (the computer scientist who coined the term Artificial Intelligence), and was supported by Claude Shannon (an electronic engineer, cryptographer, and mathematician, and a pioneer of information theory and applied game theory, amongst many other achievements), Marvin Minsky (a cognitive scientist and inventor who later, with McCarthy, founded what became MIT’s Artificial Intelligence Laboratory), and Nathaniel Rochester (an electrical engineer who designed the IBM 701, IBM’s first commercial scientific computer).

As a direct result of this conference, AI was recognised as a scientific field in its own right. This ushered in a prolific period of AI discovery, lasting from the late ‘50s to the mid ‘70s. During this time the leaders in AI were extremely optimistic, to the extent that Herbert Simon (another pioneer and champion of AI, who with Allen Newell wrote the groundbreaking Logic Theory Machine and General Problem Solver programs) predicted that “machines will be capable, within twenty years, of doing any work a man can do”. Nonetheless, despite this optimism, by the late seventies it seemed as if they were no closer to a true AI than when they had started, and the major benefactors (the US and UK governments) withdrew their funding. This signalled the collapse of pure true-AI research from which, despite small resurgences, it would never really recover.

What the last thirty years did bring was the rise of the dedicated AI. Gone were the initial high hopes and dreams of a true AI that could pass the Turing test, and thus perhaps be in need of Asimov’s three laws of robotics. Instead, the field of AI split into numerous disciplines, each generally motivated by practical commercial applications of the resulting research. This also brought about a dearth of interdisciplinary communication and the virtual abandonment of the quest for a true AI. However, it did lead to an explosion of new but mostly dumb or dedicated AIs, which are now an indispensable part of our daily existence. But how exactly, and to what extent? Let’s take a closer look at two examples.

AIs you quite probably interact with every day are GUIs, or Graphical User Interfaces. A GUI is an AI-based graphical buffer, easily understandable by humans, between you and your PC, cellphone, ATM, etc. Without it, we would all have to understand and work in machine code to get anything done on any type of computer. The basics of GUIs first came about from text-based hyperlinks (small codes that point to the address of a specific document or program) that one could click on with a mouse, which led to the graphical hyperlinks developed by Xerox PARC in the ‘70s for their Xerox Alto computer. In 1981 Xerox released the Xerox 8010 Star Information System, which relied solely on a GUI-based operating system and featured a bit-mapped display whose icons pointed to folders and programs, leading to the GUIs that we know so well today.
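To make that “pointer to an address” idea a little more concrete, here is a toy sketch in Python; the icon names and target addresses are invented purely for illustration and are not taken from any real windowing system:

```python
# A toy sketch of the hyperlink idea behind GUIs: a clickable label or
# icon is simply a pointer to a document or program somewhere else.
# All names and addresses below are invented for this example.

links = {
    "Report.doc": "open_document:/files/report.doc",
    "Mail":       "launch_program:mailer",
    "Trash":      "launch_program:trash_viewer",
}

def click(icon):
    """Resolve a clicked icon to the address or action it points to."""
    return links.get(icon, "error: nothing at that address")

print(click("Mail"))   # -> launch_program:mailer
```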

If any one discipline of AI is closest to the creation of a true AI, it is probably game development. If you play any video game at all today, then AIs are what make it possible. To develop games that respond to a player in a relatively intelligent way, you can’t do without AIs and their driving algorithms. Every single NPC (Non-Player Character) in any game is an AI, or AI agent, and that agent is in turn governed by the all-encompassing game AI that brings the entire game into being. It’s safe to say that game development has produced some of the most seminal advances in AI, and continues to do so today.
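As a rough illustration of how an NPC’s behaviour is typically driven, here is a minimal sketch in Python of a guard agent run by a finite-state machine, one of the oldest and simplest game-AI techniques; the class name, states and distance thresholds are invented for this example rather than taken from any particular engine:

```python
# A minimal sketch of a game AI agent: a guard NPC driven by a
# finite-state machine. The states and thresholds are illustrative only.

class GuardNPC:
    def __init__(self):
        self.state = "patrol"          # current behaviour

    def update(self, distance_to_player):
        """Called once per game tick; switches behaviour based on
        how far away the player is."""
        if self.state == "patrol" and distance_to_player < 10:
            self.state = "chase"
        elif self.state == "chase" and distance_to_player < 2:
            self.state = "attack"
        elif self.state in ("chase", "attack") and distance_to_player > 20:
            self.state = "patrol"      # lost the player, resume patrolling
        return self.state

guard = GuardNPC()
for d in (30, 8, 1, 25):               # player approaches, then escapes
    print(d, "->", guard.update(d))
```

Real game AI layers pathfinding, planning and scripting on top of structures like this, but simple state machines of this kind remain a common backbone.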

In a panel discussion at MIT’s ‘Brains, Minds, and Machines’ symposium earlier this year, there was a call for a return to the original goals of AI research, and a complete rethink of how to achieve them. Panel member Patrick Winston, ex-director (’72-’97) of MIT’s Artificial Intelligence Laboratory, stated, “When you dedicate your conferences to mechanisms, there’s a tendency to not work on fundamental problems, but rather those problems that the mechanisms can deal with,” highlighting the “mechanistic balkanization” of the field. He proposed a deeper investigation and understanding of what makes us uniquely human, and the building of those findings into new paradigms on which to base further AI research. Put simply: unless we know how we ourselves work and what makes us human, we cannot expect to replicate that humanness in a true AI.

Noam Chomsky, also on the panel, held views similar to Winston’s, but more specific. He argued that treating human language as computational rather than cultural could be key to the development of a successful true AI (as opposed to the enduring Turing-style assumption that purely statistical mimicry of human behaviour will suffice: if you can’t tell the difference between a human and an AI, then what’s the difference?). The language route is something that Ruth Schulz and her colleagues have been pursuing with their Lingodroids, robots that make up their own non-human language by roaming around and coining their own place names. When Lingodroids meet, they learn to talk to each other, and then use these collectively agreed-upon place names to build a shared language map of where they are. The Lingodroids not only echo the very beginnings of human language, but also suggest how it may have come about.
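To give a feel for how such a naming game might work, here is a toy simulation in Python; it is my own sketch of the general idea, not Schulz’s actual Lingodroid code, and the syllables, places and robot names are all made up:

```python
# A toy naming game, loosely inspired by the Lingodroid experiments.
# Two simulated robots invent random syllable-words for places and
# adopt each other's words when they meet, ending up with a shared map.

import random

SYLLABLES = ["ku", "zo", "pi", "re", "ja", "fo"]

def invent_word():
    return "".join(random.choice(SYLLABLES) for _ in range(2))

class Robot:
    def __init__(self, name):
        self.name = name
        self.lexicon = {}                      # place -> word

    def name_place(self, place):
        if place not in self.lexicon:          # no word yet: invent one
            self.lexicon[place] = invent_word()
        return self.lexicon[place]

    def hear(self, place, word):
        self.lexicon[place] = word             # adopt the other robot's word

a, b = Robot("A"), Robot("B")
for place in ["lake", "hill", "charger"]:
    word = a.name_place(place)                 # A speaks first
    b.hear(place, word)                        # B adopts A's word
print(a.lexicon == b.lexicon)                  # True: a shared map of names
```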

So, there are promising developments in true AI. However, the one thing that has held back research, and that never seems to come up in the MIT panel or elsewhere, is funding, or rather the lack of it. The majority of scientific funding goes into research with foreseeable commercial applications, and this is something that true AI research has yet to demonstrate. Without major funding the quest for true AI will remain so near yet so far, relegated to gaming and small-scale personal-interest projects. Collectively, though, these projects might yet add up to a major breakthrough, and to practical applications that could attract serious funding.

We are, I would say, probably closer to creating the first biological true AI (à la Craig Venter and his bacterial genome built from scratch in 2010) than we are to creating a digital one. This is because Venter is not reinventing the wheel, but building on what we already know. This is perhaps something that true-AI enthusiasts like Winston and Chomsky realise: as with a lot of quantum mechanics, we don’t necessarily have to understand something fully to use it practically.

If the following three disciplines could merge their work and results, we might have a chance of success in true AI. The first is game AI development. The second is nanobionics, the research and development of an organic bridge between living neural cells and silicon semiconductors and/or organic semiconductors. The third is Organic Computing, also known as Artificial Neural Networks, which uses the existing biological templates of the human mind (especially the unique organic connectivity of one part of the brain to another, and our learning and problem-solving abilities) as a basis for forming new organic algorithms, programs, and even central processing units. Combine these three disciplines, perhaps add the potential power of quantum computing (in May 2011 Lockheed Martin announced a deal with D-Wave Systems for the world’s first commercial quantum computer), and maybe we will finally have a recipe that could, with much work, result in our first true AI. Until then, the dream remains firmly in the province of science fiction.
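To make the Artificial Neural Network part of that recipe a little more concrete, here is a minimal sketch in Python of the basic unit such networks are built from: an artificial neuron that weights its inputs and squashes the sum through an activation function. The weights and inputs are made up purely for illustration; a real network would learn them from data.

```python
# A minimal sketch of the artificial-neuron idea behind neural networks:
# weighted inputs are summed and passed through an activation function.
# Weights, biases and inputs here are invented for illustration only.

import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))      # sigmoid squashes to (0, 1)

# A tiny two-layer "network": two hidden neurons feeding one output neuron.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], -0.2)
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(tiny_network([1.0, 0.0]))                # a value between 0 and 1
```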

When we do finally achieve true AI, what will it mean? Could these machines supersede humans entirely? Or will true AI be the saving grace of humanity, enabling us to transform from biological beings to inorganic ones, perhaps downloading our “selves” into machines, effectively eliminating the need for doctors, for intrusive and painful surgeries and procedures? Could machines be man’s ticket to immortality? And at what price?

What is certain is that these questions are being answered, one electronic synapse at a time, as you read this. Every advance brings with it new and ever-more pressing questions about how we relate to the machines that are already our colleagues, navigators, security guards, entertainers, babysitters and companions. As the lines between man and machine get fuzzier, the central question remains: what does it mean to be human, and do we have the copyright on sentience?


Image from Terminator: The Sarah Connor Chronicles © Sony Pictures Entertainment


Ivor Hartmann

Ivor Hartmann is a Zimbabwean writer currently based in Johannesburg, South Africa.
He is the author of Mr. Goop (Vivlia, 2010), was nominated for the UMA Award (2009), and was awarded The Golden Baobab Prize (2009).
His writing has appeared in African Writing Magazine, Wordsetc, Munyori Literary Journal, Something Wicked, and Sentinel Literary Quarterly, amongst others.
He is the editor/publisher of StoryTime and co-editor/publisher of African Roar.

