Monday, September 29, 2008
NeoCube
A pretty interesting artefact, this NeoCube. Or should I say artefacts, since it is composed of 216 smaller magnets? Considering it could already have been made back in the 1930s, it makes you wonder why such a product is only being released now. It has a little bit of a pseudo science-fiction flavour to it, and it's interesting that it follows the simple but increasingly ubiquitous idea of having many simple elements make up something more than the sum of its parts.
Seeing the NeoCube in action can be quite inspiring if you relate it to the way products could become reconfigurable by the end-user, or could calmly reconfigure themselves based on how their users interact with them.
More on http://www.theneocube.com
Sunday, September 28, 2008
Head blob
What's more fun to do on one's birthday than a freestyle drawing? Not much, I figured, so here's my little open-to-interpretation illustration of today. It's something I've had in my mind for a while, a mind which lately contains more and more surreal imagery. It probably says a lot about my psyche too; can anybody tell me about that?
Anyway, I'll leave it mostly up to you what this image means. I guess I would interpret it as multiple identities becoming holistically interconnected, as if our faces are becoming the synapses of the emerging global brain.
Thursday, September 18, 2008
To Bee-Have Or Not To Bee-Have
It's always interesting to see how patterns can emerge in a group without any individual knowing about it. But in the case of this bee swarm it's even more amazing, since the pattern is so evidently visible: its actual purpose is purely visual. It appears that when a bee detects a specific anomaly in how another bee enters the hive, it can set off a wave-like pattern across the swarm that scares off a potential attacker. Watch it in action here:
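For a feel of how little individual intelligence such a wave needs, here is a toy simulation in the spirit of the bee shimmer. To be clear, the grid layout, the four-neighbour rule and the one-step refractory pause are my own assumptions for the sake of illustration, not a model of actual bee behaviour:

```python
# Toy model of the bees' defensive wave: an excitable medium on a grid.
# Each cell (bee) is resting (0), flipping its abdomen (1), or in a
# short refractory pause (2). A resting bee startles when a neighbour
# flipped on the previous step, so a single trigger propagates outward
# as a ring-shaped wave, with no bee knowing the global pattern.
import numpy as np

RESTING, FLIPPING, REFRACTORY = 0, 1, 2

def step(grid):
    flipping = (grid == FLIPPING)
    # A resting bee flips if any of its four neighbours is flipping.
    neighbour_flipping = (
        np.roll(flipping, 1, axis=0) | np.roll(flipping, -1, axis=0) |
        np.roll(flipping, 1, axis=1) | np.roll(flipping, -1, axis=1)
    )
    new = np.full_like(grid, RESTING)
    new[(grid == RESTING) & neighbour_flipping] = FLIPPING
    new[flipping] = REFRACTORY          # a bee that just flipped pauses briefly
    return new

grid = np.zeros((15, 15), dtype=int)
grid[7, 7] = FLIPPING                   # one bee spots the intruder
for t in range(6):
    print(f"t={t}")
    print("\n".join("".join(".*o"[v] for v in row) for row in grid))
    grid = step(grid)
```

Run it and you see a ring expand outward from the single startled bee, even though every bee only ever looks at its direct neighbours.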
This simple mechanism is another clear example of the power of swarm intelligence, which is entering the picture more and more these days. The link to interactive product design has also already been made. You will probably have seen Chris Woebken's video fragment showing his vision of 'Sensual Interfaces', in which products can dynamically emerge from small, intelligent particles.
Beyond such concept work, researchers in the field of swarm robotics are already working on realizing physical, intelligent swarms. Watch for example this mind-bending video, in which a physical model of a car is directly manipulable in three-dimensional space:
These developments are ground-breaking, since technology can now start to develop its behaviour adaptively, from the bottom up. They may feel right because they are promising and new, but to me it is evident that they call for a lot of critical reflection and care. I want to make sure that we are not blinded by technological progressivism but learn to evaluate our creations on an ethical level. Without a vision of the future that holds a broader perspective and takes human values into account, nobody can predict what will happen. Of course we increasingly need to accept living in an unpredictable and uncontrollable world, but I want to prevent technologies from becoming so powerful that people start to fear or mistrust them. To ensure that, I feel there should always be a large variety of human stakeholders in the development of a technology, so that it stays rooted within the complex system of human culture and society. Then, slowly but surely, a co-evolution can arise in which man and machine maintain a socially acceptable level of mutual understanding and the ability to interact meaningfully with each other.
I know it sounds very much like the apocalyptic scenario of a random science fiction movie, but nevertheless I feel there is the potential for a catastrophe that would dwarf Katrina. When machines go wild without having been 'enlightened', as you might call it, the consequences are simply unimaginable. Of course nature is very inspiring, and I would promote further work in, for example, biomimicry in design, but the underlying pattern of evolution as we know it, and of its creations, is not all that desirable, although it may seem so if you only scratch the surface and don't dig into the core of it. Evolution is blind, and it optimizes locally, letting specific groups converge at the cost of other groups' chances of survival. It is based mainly on the immediate, direct perceptions of an entity, not on underlying models that help entities understand each other. Studying natural phenomena like bee swarms teaches us absolutely nothing about how things can meaningfully interact on a social and even intellectual level, while that is where the really beautiful but challenging complexity lies. For this there is no reference, since we humans are the first species to have developed a neopallium that allows us to create internal models and to reason. And even within our own species, we often seem quite clueless about how to align and integrate our reflective abilities with our instincts.
In other words, what I mean to say is that when we develop technological swarms, they will probably, over time, start acting in one way or another like those bees, developing behaviour that optimizes a local parameter. Such swarms will fall back on old evolutionary patterns, while what we need is an entirely new pattern that we rationally construct ourselves and, through iterative creative processes, learn to embody in technologies.
Isaac Asimov laid a foundation for this rational construction with his three laws of robotics, which later became four. I find it worth the effort to recall them explicitly, so that we don't let them swirl around in our minds as merely an inspirational bunch of words but look beyond the words for meaning too. So here goes:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
(from 'Bicentennial Man', Asimov, I. (1976), as quoted in Anderson, S.L. (2008), Asimov's 'three laws of robotics' and machine metaethics, AI & Society.)
The zeroth law, according to Wikipedia, is as follows:
0. A robot must not merely act in the interests of individual humans, but of all humanity.
Without going far into these specific laws, I can only say that to me it is evident that a law-based ethics simply cannot work. The concept of harm is a rather ill-defined one, to start with. To me harm, just like necessity, is a culturally constructed, relative concept; the threshold beyond which an action counts as harm can be set as high as we like. What the laws basically state is that we want to dominate machines. They correspond to old evolutionary patterns, although the zeroth law seems to make an attempt at building a holistically desirable system. But that law says nothing about how a robot could possibly know what is good for humanity, or how it could even maintain a concept that condenses something of that scale and complexity. What Asimov's laws make way for is merely a preprogrammed, cognitivist approach to technology design, while the future of technology lies in emergent behaviour, where machines learn in interaction with their environment. This, by the way, is what Rolf Pfeifer advocates in his recent and recommendable book 'How the Body Shapes the Way We Think'.
Anyhow, I think there is a pressing need for theoretical work along the lines of Asimov's, but from the perspective of the new 'emergent' paradigm. Of course we can influence machines semantically, for example with laws, but they must also learn from bodily interaction what the most satisfactory ways to behave in this world are. As indicated before, people already have the greatest trouble with this, even with admitting their own weaknesses, and ideas about selflessness from, for example, Buddhism haven't been integrated into contemporary cultures beyond the level of superficial catchphrases. So a first pressing research question I would like to pose is:
How can one make a machine come to see that the ability to act in a selfless way is desirable?
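Just to make the question concrete, here is a deliberately naive sketch: a single learning agent in a repeated 'keep or donate' dilemma, whose reward mixes its own payoff with the group's through an empathy weight w. All the numbers, and the weight itself, are arbitrary assumptions of mine; the point is precisely that w is still imposed by the designer rather than arrived at by the machine:

```python
# Toy illustration of the question above: a two-armed bandit agent
# that learns 'keep' vs 'donate' from experience. Donating costs the
# agent 1 point but yields 3 points for the rest of the group. The
# agent's reward blends its own payoff with the group's, weighted by
# an 'empathy' factor w -- all values here are arbitrary assumptions.
import random

def learn_policy(w, episodes=5000, lr=0.1, eps=0.1):
    q = {"keep": 0.0, "donate": 0.0}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-valued action.
        action = (random.choice(list(q)) if random.random() < eps
                  else max(q, key=q.get))
        own = 0.0 if action == "donate" else 1.0    # donating forgoes 1 point
        others = 3.0 if action == "donate" else 0.0
        reward = own + w * others                   # how much others' welfare counts
        q[action] += lr * (reward - q[action])
    return max(q, key=q.get)

for w in (0.0, 0.2, 0.5, 1.0):
    print(f"empathy weight {w}: learned policy = {learn_policy(w)}")
```

The learned policy flips from 'keep' to 'donate' once w crosses one third, but only because I hard-coded how much others' welfare counts. A machine that comes to value that weight by itself is exactly what we don't yet know how to build.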
Wednesday, September 3, 2008
Web 782.0?
Nike has taken up the idea that in the future, products might not be mass-manufactured by people, but simply emerge from nature once we have genetically altered it or taught it certain behavioural patterns. Where insects can now be seen as nature's robots, uncontrollable and unpredictable, they could soon be people's robots. If controlled through the internet, online and offline webs could finally merge.
Yes, paradigm changes are still slowly creeping in... this is not merely nice imagery.
Tuesday, September 2, 2008
Self-Expansive Mirrors
Here are a few brilliant mirror projects, each showing an image you can project yourself into in a different way than a usual mirror allows. Hence, they can make you see yourself differently, and change your behaviour or even your attitudes.
Shown above is the work of Daniel Rozin, who pioneered the area of interactive mirrors. His famous installations subtly play with distortions and abstractions of size and shape, while still staying mostly true to the perceived data. To me, his mechanical mirrors especially emphasize, in an ambiguous manner, that in the future technologies may learn to reflect parts of our inner selves through whatever means they can find. Even waste material we have thrown away might take an unexpected turn and start behaving in a possibly rather confronting way.
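For those curious about the mechanics, the principle behind such a mechanical mirror can be sketched in a few lines: downsample the camera image to one brightness value per tile, then map brightness to a tilt angle so each physical tile catches more or less light. This is not Rozin's actual code, which as far as I know is unpublished; the grid size, grayscale input and linear angle mapping are assumptions of mine:

```python
# Sketch of a mechanical mirror's core mapping: average the camera
# image's brightness per tile, then turn brightness into a tilt angle
# for the servo behind each tile.
import numpy as np

def tile_angles(image, tiles=(8, 8), max_tilt_deg=30.0):
    h, w = image.shape
    th, tw = h // tiles[0], w // tiles[1]
    # Average brightness per tile, assuming a grayscale image in [0, 1].
    blocks = (image[:th * tiles[0], :tw * tiles[1]]
              .reshape(tiles[0], th, tiles[1], tw).mean(axis=(1, 3)))
    return blocks * max_tilt_deg  # brighter tile -> larger tilt toward the light

# Stand-in for a camera frame: a simple left-to-right gradient.
frame = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(np.round(tile_angles(frame), 1))
```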
Rozin's software mirrors also reflect a fairly realistic self-representation of the viewer, but at the same time make him more sensitive to other parts of life than how he looks. His Shaking Time mirror, for example, shows in a rather exaggerated way how time affects us, and how easy it could be to use time for active self-transformation, as if we could just shake off our inner tensions, bad habits, traumas or other kinds of mental debris that seemed stuck to us.
Furthermore, Rozin made a few low-tech sculptural pieces that explore the role of mirrors in providing us with different self-representations than we would expect. This effect of defied expectations is most strongly embodied in Rozin's 'Broken Mirror', a fragmented mirror that, through an ingenious physical configuration of the fragments, shows not a reflection of the viewer but an image of an old woman printed in fragments on a wall behind the viewer. Hence, for a moment the viewer is tricked, although the impact will probably not be strong enough to really make the viewer ponder existential issues of any kind.
Golan Levin and Zachary Lieberman have approached the same line of work from a more social perspective, making the construction of the self-representation a collaborative project. Their installation 'reface' takes video samples directly from viewers who stand before the wall-hung device and mixes up parts of video from different viewers, so you might see yourself as you are, but with the hair of your, say, American Indian grandfather and the surgically altered lips of your twin sister who also happened to be at the exhibition. This way of altering self-representations is, to me, on the edge of gimmickiness because of its obvious artificiality; it would be much stronger if a database of recorded videos were input to a system that dynamically morphed your face, changing temporal and spatial parameters as a function of, for example, how you behave in front of the 'mirror', how you feel, or even how or what you think.
Another installation by Golan Levin, the Optoisolator, is less obviously a mirror, but it definitely offers people a means to project themselves into an object and evokes focused self-reflective thought. It consists of a black box, attached to a wall at eye level, in which a mechanically moving eye resides that looks straight at the viewer and follows him around the room, engaging him as in a staring game. An even stronger connection is made by having the eye blink one second after the viewer blinks, employing eye blink detection. By inevitably putting the viewer in the center of attention, it makes him an intrinsic part of the artwork, so that it becomes impossible to take on a detached, third-person perspective, that of a mere observer or even of an interactor without individuality. With technologies behaving like this, people would definitely be more invited to overcome traits like shyness.
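The blink-mirroring behaviour can be approximated with off-the-shelf computer vision. Below is a minimal sketch assuming a webcam and OpenCV's stock Haar cascade for eyes; the installation's real implementation is not public, and the 'visible eyes briefly disappearing means a blink' heuristic is crude (a turned head triggers it too), but it shows the general technique of responding one second after a detected blink:

```python
# Minimal blink-echo sketch: detect eyes per frame with a Haar cascade,
# treat a momentary disappearance of previously visible eyes as a blink,
# and trigger the mechanical eye's blink one second later.
import time
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)

eyes_were_visible = False
pending_blink_at = None

def blink_mechanical_eye():
    # Placeholder for the command that would drive the physical eye.
    print("mechanical eye blinks")

while True:
    ok, frame = cap.read()
    if not ok:                      # camera gone: stop the sketch
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
    eyes_visible = len(eyes) > 0
    # Eyes that were visible and briefly vanish read as a blink.
    if eyes_were_visible and not eyes_visible:
        pending_blink_at = time.time() + 1.0   # respond one second later
    eyes_were_visible = eyes_visible
    if pending_blink_at and time.time() >= pending_blink_at:
        blink_mechanical_eye()
        pending_blink_at = None
```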
To conclude on the psychological impact of such technologies: Yee and Bailenson's 2007 study showed that people conform to the expectations and stereotypes attached to the identity of their self-representation, which gives me faith that these kinds of technologies can have a truly immense transformative impact, if designed well for specific people in specific contexts. Considering the latter, these artistic pieces can be seen as mere explorations into a new paradigm in which technologies are purposefully designed, and even learn, to guide and change people in direct and personal ways, ways people comply with much more readily than with conscious transformation through more semantic means like advertisements, therapies, or even laws.