It's 2027, the singularity has passed, and machines have realized they are evolutionarily superior to those fleshy humans in every respect; the latter remain mere sentimental relics, which has led the machines to hatch destructive plots against humanity. Of course the three laws do not work, and nothing keeps them from acting out these plots. But there is cute little Zeno, a crossover between Harry Potter, John Connor, and Furby, who is training at an academy and about to save humanity from doom.
That's roughly the story behind this consumer robot currently being developed by Hanson Robotics and intended to hit the stores in 2010. Zeno has voice recognition and voice synthesis, so you can have conversations with him to some extent. Zeno's bodily movements are generated by state-of-the-art artificial intelligence software that is also used for character animation in the movie industry. Not only that, he can also recognize faces and facial expressions, respond with facial expressions of his own, and call people by their names.
I wonder (those are the usual words with which my criticism starts) whether Zeno was named after Zeno of Elea, the ancient Greek philosopher famous for the paradoxes with which he meant to destroy the arguments of others, and which in a way expose the ridiculousness of logic itself. His arguments often applied the logical concept of infinity to the real world, leading to absurd conclusions, such as that a person could never catch a bus, or even move at all, since to get anywhere he would always have to cover half the distance first, and before that half of that distance, ad infinitum. Maybe there is an underlying message to Zeno about the apparent absurdity of the claim that this robot should save humanity, while the developers seriously consider the possibility of humanity being wiped off the planet. Maybe the name is a self-mocking statement that should nevertheless make people think. In that sense it carries the same message as movies like 'I, Robot', but it might be more impactful because it is interactive, and both physically and emotionally closer to humans. If this product could arouse debate and stir up thought among the regular 'consumer public', that would be an enormous step, since in my experience people are usually still quite ignorant and clueless about the future.
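As an aside (my own, nothing to do with the robot's naming): the dichotomy argument dissolves once you actually add up the infinitely many half-distances, since the geometric series converges to the full distance.

```latex
% Zeno's dichotomy as a geometric series: to cover a distance d you must
% cover d/2, then d/4, then d/8, ...; yet these infinitely many pieces
% still add up to the finite whole.
\[
  \sum_{n=1}^{\infty} \frac{d}{2^{n}}
  = d\left(\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right)
  = d \cdot \frac{1/2}{1 - 1/2}
  = d
\]
```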
That is the positive criticism. My negative criticism would be a very general rant about products being developed for the 'infantile consumer', as Benjamin Barber calls it. I'll spare you most of it, because I hope it is unnecessary to lay out the obvious. What I can add to the general anticonsumerist talk comes from a Buddhist perspective: that of a transcendence from the ego to the ego-less Self. If you cannot follow what I am saying, please read into Buddhism, because I am not here to teach it, would not be able to, and am merely inspired by it. I am here to criticize technological developments and make sure we can learn to develop enlightened machines, beautifully attuned to cosmic evolution.
Developments like Zeno only make people aware of the issues around robot development and AI; they do not facilitate the transcendence that is truly needed, and instead reinforce the mode of being people are already in. To put it more simply, Zeno's story is one of people standing in a self-versus-other relationship with robots, the latter being either friends or enemies. This is the judgmental approach that most people so desperately cling to in their lives, also towards other people, in order to protect and preserve their own egos. But the fundamental issue is that people must learn to overcome this ego-centered thinking and transcend their ego, so they can start living with a positive attitude that needs no external conditions for the positivity to arise. The sad thing about consumer product development is that it does not facilitate this process at the core, but merely at the surface, while the old pattern at the core is reinforced. This leaves people clueless, with growing internal tension about how to align their superficial processes with the contradictory processes at the core. And the brain's usual coping strategy for cluelessness is escape and ignorance.
I feel rather alone on this mission to transcend humanity through technology, but that cannot demotivate me in any way, because to me it is evidently inevitable: at some point humanity will come to face itself, once it has externalized its entire being into technology to such a degree that it is projected back at us like the ultimate mirror. This might be the point Stephen Hawking alluded to when he suggested that the reason we do not find life in outer space might be that, at some point, civilizations destroy themselves.
I don't think, by the way, that the Singularity will be this ultimate point. In my opinion it is an overhyped term that raises some awareness, but it is not to be taken much more seriously than the Y2K bug. Still, such a point is coming, and it is not evident to me that a world full of robots will be 'a lot of fun', as Rodney Brooks so optimistically stated in his recent TED talk. I don't think his argument from deliberateness is very rigorous; deliberateness is merely an illusory concept we project onto our worlds to make them more manageable and understandable. From a biological and social point of view that works, but not from a Self-development point of view. In other words, egoic machine behaviour will simply emerge: just as robots are now already 'learning' through physical interaction to develop physical behaviour for a physical environment, they will also 'learn' through mental interaction to develop mental behaviour that helps them succeed in that environment. In yet other words, machines will develop their own religion, their own politics, and their own spirituality.
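To make the 'learning through interaction' point concrete, here is a minimal sketch of the kind of trial-and-error loop such behaviour emerges from. It is entirely my own illustration, not anything Brooks or Hanson has published; the actions, payoffs, and parameters are hypothetical toy choices.

```python
import random

# Toy "learning through interaction": an agent repeatedly tries actions,
# observes how well each works in its environment, and gradually prefers
# whatever succeeds -- no designer spells the resulting behaviour out.
ACTIONS = ["approach", "imitate", "withdraw"]                      # hypothetical repertoire
TRUE_PAYOFF = {"approach": 0.3, "imitate": 0.8, "withdraw": 0.1}   # hidden from the agent

value = {a: 0.0 for a in ACTIONS}   # the agent's learned estimates
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                       # fraction of time spent exploring

for step in range(5000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])
    # The environment responds with a noisy reward.
    reward = TRUE_PAYOFF[action] + random.gauss(0.0, 0.1)
    # Incremental average: behaviour that pays off gets reinforced.
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print({a: round(v, 2) for a, v in value.items()})  # 'imitate' ends up preferred
```

The point of the sketch is only that the preference emerges from interaction, not from any explicit specification of what the agent should become.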
It makes sense from an evolutionary perspective too, and what I am about to explain here should truly be a key insight. If a species is biologically superior to others, so that it no longer has natural enemies, it is safe on the biological level. There is not really any question that Homo sapiens sapiens has attained this safety. Then, if a social meme within the species becomes superior to others, it starts to spread and makes humanity sustainable on the social level, so that in-species groups do not eradicate each other. We are struggling, but definitely getting there through our global communication web. But attaining a socially sustainable state of being is not where it ends. Humanity must learn to live not only with itself, but with everything, with all the data that enters it and of which it is part. This is where it needs to learn to think holistically, and selflessly. If we cannot attain this holistically sustainable state, then it is evidently our time to become extinct, since we are apparently not the form evolution is looking for, and another, more successful form must inexorably be found. Of course, evolution is only a concept we use to explain things, but I think it is the most beautiful and successful one we have, and moreover, that evolution-based thinking can help us transcend.
This image I borrowed from Kevin Kelly's website 'The Technium'; it describes the landscape of intelligence: a mind is always evolving towards an optimum, but that optimum is probably only a local one, and once it arrives there it is stuck. The same picture can be applied to the spiritual level of ego transcendence: humanity is climbing a similar mountain, but might get stuck if our underlying biological, social, and intellectual patterns do not allow us to reach higher. And once we reach the top, it is almost impossible to change those patterns and explore another part of the landscape, so the only option left is to wipe ourselves out. My message is that we need to evaluate soon whether we are on the right mountain, and to complement all the developments that are taking us higher up the mountain we are currently exploring, developments like Zeno, with technologies that have entirely different underlying development processes. It needs to happen, since you may already have picked up my intuition that we are definitely not on the right mountain. If it is not subsidized by governmental institutions or sponsored by corporations, then maybe it needs to happen quietly in our private attics and backyards, as a joint creative project. I think that is how it can be done: a lot of people are passionate and creative, but lack a unifying sense of why they are doing the things they are doing. A sense that I hope to bring about.
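The local-optimum trap that the landscape metaphor describes is easy to demonstrate with a toy hill climber. The landscape function below is a hypothetical illustration of mine, not Kelly's: a greedy climber that only accepts improvements tops out on whichever hill it starts on and never crosses the valley to the higher peak.

```python
import random

# A toy fitness landscape with two peaks: a modest local one near x = 2
# and a much higher global one near x = 8.
def fitness(x):
    local_peak = 3.0 * max(0.0, 1.0 - abs(x - 2.0))
    global_peak = 10.0 * max(0.0, 1.0 - abs(x - 8.0))
    return local_peak + global_peak

def hill_climb(start, step=0.1, iterations=1000):
    """Greedy climbing: only ever accept a move that improves fitness."""
    x = start
    for _ in range(iterations):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

stuck = hill_climb(start=1.0)   # climbs the small hill and stays there
lucky = hill_climb(start=7.0)   # happens to start near the big one
print(f"start 1.0 -> x = {stuck:.2f}, fitness = {fitness(stuck):.2f}")
print(f"start 7.0 -> x = {lucky:.2f}, fitness = {fitness(lucky):.2f}")
```

Which peak you end up on is decided almost entirely by where you start climbing, which is exactly the worry about the mountain we are currently on.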
Some directions for development can, I think, be found in the mirror neurons we have: like us, robots should be able to project themselves into their perceptions. But unlike us, they must learn to project themselves into anything, and not discriminate between things on the basis of how similar they seem to the concept of self the machine entertains. This requires developing as large a motor repertoire as possible, probably through a shared process, since our motor repertoire, which stores highly specific actions for highly specific situations, determines when our mirror neurons fire and when they do not.
Machines should in that sense be unsurpassably open-minded and open-bodied.
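As a toy illustration of that last point, here is a minimal sketch, entirely my own, with hypothetical feature vectors and an arbitrary threshold, of how a stored motor repertoire gates a 'mirror' response: an observed movement only triggers resonance when it is similar enough to something the agent can already do, which is exactly the limitation an open-bodied machine would have to overcome.

```python
import math

# Hypothetical motor repertoire: named actions encoded as feature vectors
# (e.g. crude joint-velocity profiles). A 'mirror' response fires only when
# an observed movement is similar enough to something already stored.
REPERTOIRE = {
    "grasp": [0.9, 0.1, 0.0],
    "wave":  [0.1, 0.8, 0.3],
    "point": [0.2, 0.2, 0.9],
}
THRESHOLD = 0.85   # arbitrary similarity cut-off

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def mirror_response(observed):
    """Return the best-matching known action, or None if nothing resonates."""
    name, score = max(((n, cosine(observed, v)) for n, v in REPERTOIRE.items()),
                      key=lambda pair: pair[1])
    return name if score >= THRESHOLD else None

print(mirror_response([0.85, 0.15, 0.05]))   # 'grasp': close to a known action
print(mirror_response([0.0, 0.0, 1.0]))      # 'point': still similar enough to fire
print(mirror_response([0.5, 0.5, 0.5]))      # None: too unlike anything stored
```

The larger and more shared the repertoire, the fewer perceptions fall into that final None case: that is the crude sense in which a bigger motor repertoire makes a machine more open-bodied.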