Thursday, September 18, 2008
To Bee-Have Or Not To Bee-Have
It's always interesting to see how patterns can emerge in a group without any individual knowing about it. In the case of this bee swarm it's even more amazing, since the pattern is so evidently visible; its actual purpose, in fact, is purely visual. It appears that when a bee detects a specific anomaly in how another bee enters the hive, it can set off a wave-like pattern in the swarm that scares off a potential attacker. Watch it in action here:
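To get a feel for how such a wave can arise from purely local rules, here is a minimal toy sketch of my own (not a biological model, just an illustration of the principle): each bee startles one tick after a direct neighbour did, and joins the wave only once.

```python
# Toy sketch of a defensive wave in a row of bees (my own illustration,
# not a biological model): one bee detects an intruder and startles, and
# every bee startles one tick after a direct neighbour did, exactly once.

N_BEES = 30
start = N_BEES // 2               # the bee that detects the anomaly

flashed = [False] * N_BEES        # has this bee already joined the wave?
flashed[start] = True
active = {start}                  # bees startling on the current tick

while active:
    print(''.join('#' if i in active else ('o' if flashed[i] else '.')
                  for i in range(N_BEES)))
    nxt = set()
    for i in active:
        for j in (i - 1, i + 1):  # only direct neighbours are triggered
            if 0 <= j < N_BEES and not flashed[j]:
                flashed[j] = True
                nxt.add(j)
    active = nxt                  # the wave front moves outward
```

No bee knows about the wave as a whole; the global pattern falls out of the one-neighbour rule, which is exactly what makes swarm behaviour like this so striking.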
This simple mechanism is another clear example of the power of swarm intelligence, which is entering the picture more and more nowadays. The connection to interactive product design has also already been made. You will probably have seen Chris Woebken's video showing his vision of 'Sensual Interfaces', in which products dynamically emerge from small, intelligent particles.
Besides such projects, researchers in the field of swarm robotics are already working on realizing such physical, intelligent swarms. Watch, for example, this mind-bending video in which a physical model of a car is directly manipulable in three-dimensional space:
These developments are ground-breaking, since technology can now start to develop its behaviour adaptively, from the bottom up. They may feel right because they are promising and new, but to me it is evident that they call for a great deal of critical reflection and care. I want to make sure that we are not blinded by technological progressivism but learn to evaluate our creations on an ethical level. Without a vision of the future that holds a broader perspective and takes human values into account, nobody can predict what will happen. Of course we increasingly need to accept living in an unpredictable and uncontrollable world, but I want to prevent technologies from becoming so powerful that people start to fear or mistrust them. To ensure that, I feel there should always be a wide variety of human stakeholders in the development of a technology, so that it stays rooted within the complex system of human culture and society. Then, slowly but surely, a co-evolution can arise in which man and machine maintain a socially acceptable level of mutual understanding and the ability to meaningfully interact with each other.
I know it sounds very much like the apocalyptic scenario of some random science-fiction movie, but nevertheless I feel there is the potential for a catastrophe that would dwarf Katrina. When machines go wild without having been 'enlightened', as you might call it, the consequences are simply unimaginable. Of course nature is very inspiring, and I would promote further work on biomimicry in design, for example, but the underlying pattern of evolution as we know it, and of its creations, is not all that desirable, although it may seem so if you only glance at the surface and don't dig into the core of it. Obviously, evolution is blind, and it optimizes locally, having specific groups converge at the cost of another group's probability of survival. It is mainly based on an entity's immediate, direct perceptions, not on underlying models that help entities understand each other. Studying natural phenomena like bee swarms teaches us absolutely nothing about how things can meaningfully interact on a social and even intellectual level, while that is where the really beautiful but challenging complexity lies. For this there is no reference, since we humans are the first species to have developed a neopallium that allows us to create internal models and reason. And even within our own species, we often seem quite clueless as to how to align and integrate our reflective abilities with our instincts.
In other words, when we develop technological swarms, they will probably, over time, start acting in one way or another like those bees, developing behaviour that optimizes some local parameter. These swarms will fall back on old evolutionary patterns, while what we need is an entirely new pattern that we rationally construct ourselves and, through iterative creative processes, learn to embody in technologies.
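To make 'optimizing a local parameter' concrete, here is a minimal sketch of my own (all numbers invented for illustration): a blind, greedy climber on a landscape with two peaks settles on whichever hill it happens to start on, and never sees the better one.

```python
# Minimal sketch (my own illustration) of blind, local optimization:
# a greedy climber on a fitness landscape with two peaks gets stuck
# on whichever hill it starts on, never 'seeing' the higher one.

def fitness(x):
    # two peaks: a low one near x = 2 and a high one near x = 8
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 6 - (x - 8) ** 2)

def hill_climb(x, step=0.1, iterations=1000):
    for _ in range(iterations):
        # look only at immediate neighbours: no model, no foresight
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            break
        x = best
    return x

print(hill_climb(1.0))  # settles near 2, the low local peak, missing x = 8
print(hill_climb(7.0))  # settles near 8, the global peak, by sheer luck
```

That, in miniature, is the blindness I worry about: local improvement with no model of the wider landscape.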
Isaac Asimov laid a foundation for this rational construction with his three laws of robotics, which later turned into four. I find it worth the effort to recall them explicitly, so that we don't let them swirl in our minds as merely an inspirational bunch of words but look beyond the words for meaning too, so here goes:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
(from Asimov, I. (1976) 'The Bicentennial Man', as quoted in Anderson, S.L. (2008) 'Asimov's "three laws of robotics" and machine metaethics', AI & Society.)
The zeroth law, according to Wikipedia, is as follows:
0. A robot must not merely act in the interests of individual humans, but of all humanity.
Without going far into these specific laws, I can only say that to me it is evident that a law-based ethics just cannot work. The concept of harm is a rather ill-defined one, to start with. To me, harm, just like necessity, is a culturally constructed, relative concept: the threshold beyond which an action counts as harm can be set as high as we want. What the laws basically state is that we want to dominate machines. They correspond to old evolutionary patterns, although the zeroth law does seem to make an attempt at building a holistically desirable system. But that law says nothing about how a robot could possibly know what is good for humanity, or how it could even maintain a concept that condenses something of this proportion and complexity. What Asimov's laws merely make way for is a preprogrammed, cognitivist approach to technology design, while the future of technology is in emergent behaviour, where machines learn in interaction with their environment. This, by the way, is what Rolf Pfeifer advocates in his recent and recommendable book 'How the Body Shapes the Way We Think'.
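To make the objection concrete, here is a hypothetical sketch of what such a preprogrammed, law-based filter would have to look like; every name and number in it is my own invention, and the point is precisely where it breaks down: 'harm' has to be reduced to an arbitrary numeric threshold.

```python
# Hypothetical, naive encoding of Asimov-style laws as a preprogrammed
# filter (my own toy, not a real system). Note that it only works by
# pretending 'harm' is a measurable, culture-free number.

HARM_THRESHOLD = 0.1  # arbitrary: who decides where harm begins?

def action_permitted(ordered_by_human: bool,
                     estimated_harm_to_humans: float,
                     estimated_harm_to_self: float) -> bool:
    # First law: never permit harm to humans above the (arbitrary) threshold.
    if estimated_harm_to_humans > HARM_THRESHOLD:
        return False
    # Second law: obey human orders (the first law was already checked above).
    if ordered_by_human:
        return True
    # Third law: otherwise, protect the robot's own existence.
    return estimated_harm_to_self <= HARM_THRESHOLD
```

Every interesting question (what counts as harm, to whom, over what timescale, and how the machine would ever estimate it) has been pushed into the two input numbers, which is exactly why I don't believe a law-based ethics can carry the weight.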
Anyhow, I think there is a pressing need for theoretical work along the lines of Asimov's, but from the perspective of the new 'emergent' paradigm. Of course we can influence machines semantically, for instance with laws, but they must also learn, from bodily interaction, what the most satisfactory ways to behave in this world are. As indicated before, people already have the greatest trouble with this, even with admitting to their own weaknesses, and ideas about selflessness from, for example, Buddhism haven't been integrated into contemporary cultures beyond the level of superficial catchphrases. So a first pressing research question I would like to pose is:
How can one make a machine come to see that the ability to act in a selfless way is desirable?
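I have no answer, but as a thought experiment of my own (not an established method), one could at least give the question a first formal handle: let a learning machine's effective reward blend its private payoff with the group's, so that 'selflessness' becomes an explicit weight rather than a bolted-on law.

```python
# Thought-experiment sketch (my own framing, not an established method):
# an agent's effective reward blends its private payoff with the average
# payoff of the whole group. selflessness = 0.0 gives a purely selfish
# learner; selflessness = 1.0 optimizes only for the others.

def effective_reward(own_payoff, group_payoffs, selflessness=0.5):
    group_average = sum(group_payoffs) / len(group_payoffs)
    return (1 - selflessness) * own_payoff + selflessness * group_average

# A selfish agent prefers the action that pays it 1.0 and the group 0.1;
# a sufficiently selfless one prefers paying itself 0.2 and the group 0.8.
print(effective_reward(1.0, [0.1, 0.1, 0.1], selflessness=0.8))  # 0.28
print(effective_reward(0.2, [0.8, 0.8, 0.8], selflessness=0.8))  # 0.68
```

The real question, of course, is how a machine could come to choose such a weighting for itself rather than having it imposed, which is exactly where the hard part of the research question lies.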
Labels: asimov, bees, buddhism, enlightenment, ethics, interaction design, law, nanotechnology, robotics, selflessness, swarm intelligence, technology