As I furiously
channel-surfed recently, in search of something palatable, I froze on the
unmistakable and soothing voice of HAL in 2001:
A Space Odyssey. 2001 is one of those timeless movie classics whose re-runs
always bring something new and thought-provoking. Despite mixed reviews, many
acknowledge 2001 as one of the greatest movies ever made. Directed by Stanley
Kubrick (who co-authored the story with science fiction master Arthur C.
Clarke), 2001 dabbles in artificial intelligence (AI) and
extraterrestrial life.
What distinguishes it from other movies of this genre is
how well the apparent surrealism is grounded in good science. The main
character in 2001 is a ubiquitous computer on a spacecraft. HAL could see,
listen, talk and understand his human “colleagues” anywhere on board. In fact,
HAL was the “man-in-charge”, leading a secret mission to search for life in
space. Another famous omnipresent, bad-boy computer appears in The Matrix, where self-aware,
sentient machines create a simulated reality to pacify the human population,
while harvesting the energy of human bodies to survive.
Stanley
Kubrick later conceived another movie, A.I. Artificial Intelligence, completed by Steven Spielberg
after Kubrick's death in 1999. This time, the computers were more
anthropomorphic: Jude Law as a gigolo-robot, or Haley Joel Osment as a heart-breaking
child-robot whose love for his mother lasted millennia – talk of eternal
love, as long as the power supply holds.
There
is no dearth of fictional characters to portray good and evil robots, androids
and gynoids (female robots). From the golden maiden forged by the smith
Ilmarinen in the Finnish epic Kalevala, to Governator
Arnold Schwarzenegger in the Terminator series – or my favourites, C-3PO and
R2-D2, the odd couple of Star Wars and affectionate nicknames for my twin kids!
HAL
was a good excuse to reassess the progress of artificial intelligence. Is AI for real, or just a fad in academic and science
fiction circles? There was a lot of excitement about the promises of AI when
the field started in the 1950s. But this was followed by an “AI winter”, when many
were disappointed and research funding ran dry. Sceptics argue that simulating
humans and their intelligence is decidedly difficult or even impossible – given
technological as well as existential barriers.
We are still a long
way from self-aware humanoids serving us coffee and doing our taxes.
Nonetheless, there are some remarkable feats of AI, often taken for granted. Like
Deep Blue’s historic victory against chess world champion Garry Kasparov in 1997. Or earlier this year, two
champions at Jeopardy! (the “Questions pour un champion” of the USA) were defeated
by another IBM computer dubbed Watson. And
there are countless practical applications where computers have
accomplished tasks in human (or super-human) fashion, well beyond complicated
arithmetic: making complex decisions on assembly lines or cleaning your house
(robotics), diagnosing issues and suggesting solutions in knowledge-intensive
fields such as medicine or law (expert systems), seeing the world and
interpreting images (computer vision), anticipating the effects of moves before
taking them, even on remote planets (planning), programs that search online and
extract knowledge (intelligent agents
and data mining), or even artificial
pets and robots to comfort you (emotional agents and affective computing).
AI seems to be reviving
gradually. Some even foresee that AI will soon alter history. Scientists like Raymond
Kurzweil and Vernor Vinge talk of a “singularity”, a turning point when machine
intelligence would surpass human intelligence and take over the process of
technological invention, with unpredictable consequences. Will reality eventually be
stranger than fiction? Who knows? Maybe some computers will. (To be continued.)