Saturday, May 31, 2008

Pure Science Faction

Before I start rambling a bit about a project I found, let's first try to agree on a couple of things.

Firstly, I hope you agree with me that technology is not neutral. It is very tempting to think of technology as something totally distinct from humans, something that humans can choose to use in any way according to their own intentions. In my opinion this is a very human-centered and simplistic view: it renders technology as something completely open and free to develop, and makes its design seem less important than the way humans use it.

I think this view stems from the functionalistic way technologies have mainly been designed since the industrial revolutions. Because technological products could now be mass produced, it seemed that this was the way to go. What people think they need so often seems to stem from what is possible and what is new. Suddenly there seemed to be a technological answer for every perceived problem. The technologies being developed put people in a mindset where they began to see all kinds of problems that could be solved by those technologies, without questioning whether those problems were the most pressing ones, whether there was an underlying problem that needed to be solved first, or whether solving problems is the best purpose for technologies in the first place. The most obvious 'solutions' to develop were time and money savers; the values behind the production processes naturally became reflected in the products they produced. Rational thinking became the next mode of being for humans: the challenge was to maximize profit, using abstract, externally imposed systems instead of social processes as the means to do so. This extreme rational abstraction underlay the whole era of mechanization, and products needed above all to be sellable. For a product, being sellable meant convincing people they needed it, which came down to presenting it as the solution to an abstract problem. People did not evaluate the introduction of such technology holistically, i.e. how it would change their entire lives, and humanity on a higher level. Products were seen as problem solvers, and man became proud of his achievement, as it seemed that he had transcended nature and was able to live in a technologized world. I think this general affection humanity developed for technology has much to do with why we question it so little, while we question ourselves very much.

Another reason why we question technology so little is that it does not resemble us very much. This has two causes in my view. Firstly, since products were designed to solve specific problems in specific situations for specific people, they looked very abstract and unsocial, and offered little to empathize with. Our main experiential and bodily involvement was pressing abstract buttons, codified semantically by means of shape, colour, or icons, or not explicitly codified at all. The relevant parts of products, in other words the parts people needed to know about in order to get result y for intention x, had names, and people came to understand technologies mainly through language, with sentences like "if you press button a then door b opens, and compartment c gets emptied". They were not able to see the thing as a whole, integral entity, since technologies were treated as abstract tools, functional machines that output a desired material, energy, or information as a result of an input. Interacting with them was much like solving a mathematical formula, and this made people more rational beings. The processes we perceive shape the mind so it can cope with them. But in my view the human brain is too often tricked into believing that these processes are good and should be accepted, because that is the easiest thing to do. The brain is a self-optimizer, but inherently partially blind, so it must be steered by self-reflection.

The second cause of why we questioned technology so little in the modern period, and still don't, is related to human self-centeredness. Humans are social beings, evolutionarily speaking in my view still not too far removed from other primates. Our brains are still hard-wired the way they were when we lived in small groups and intra-species issues were more pervasive in daily life than anything else. So naturally we pay a lot of attention to anything that displays human characteristics. And because of this hard-wiring, people still instinctively feel the same things primates do, for example concerning issues of dominance. We are therefore instinctively preoccupied with thinking about other people rather than about the other entities in our world, like technologies. And it is a struggle to overcome these instincts.

Overcoming instincts, though, is not the only route to a symbiotic and equal relationship with technology. Technology must also be designed to be more humanized and social. My point is that technology is not separate from us, though it seems to be; it is an intrinsic part of us. We externalize ourselves through our technologies, and so they become part of humanity, as if they were our temporary organs. This view, in which humans co-evolve with their technologies and thus form a new, more transformable and expanded organism, is more holistic, more integrative, and more attractive to me. At this point it is undeniable how connected we are with our technologies. For example, I am now linked up to the internet, sharing my views with millions of people. And astonishingly, within a radius of one meter from my head are 24 electronic devices: three mobile phones, a wireless home phone, two digital cameras, a webcam, a laptop, three computer mice, a graphic calculator, four USB flash drives, an MP3 player, two desk lamps, a massaging device, a printer, a Wacom tablet, a lava lamp and a water boiler. If we just lost a bit of our self-centeredness it would be easy to see that these things are part of us, and that the concepts we have of our selves, our bodies, and our minds are only social constructs.

So if you see technologies as part of us, as things with which we necessarily interact socially, it is easy to see how they cannot be neutral. Any human being strives for certain values more than for others, so the technologies he uses and designs will naturally incorporate those values. Philosophers like Andrew Feenberg and Peter-Paul Verbeek have already given strong arguments for this thesis. The latter, for example, shows how technologies amplify or reduce perceptions and invite or inhibit certain uses. This is in my opinion a much more subtle and complex view than the technological determinist one. It makes people feel responsible and thoughtful about their actions and about the way they design technologies. Technologies are not neutral; they are value-biased.

The second thing on which I would like people to agree with me concerns depth, and how human work should be consistent across an endless number of conceptual layers of complexity. That sounds hard, but it can be reached through a simple and seemingly childish exercise: ask yourself why you are doing it, and keep asking why after each answer. Ultimately you arrive at the essence of yourself and of your work, the underlying ideas. Then you can extrapolate these ideas and critically ponder what kind of world they would result in. This is your utopia. Do it for the opposite ideas, and you have your dystopia. I'd say that when it is not completely clear why people are doing something, or there is confusion about whether it is good or bad, they should think a bit more about what doing it would mean for the world as a whole, instead of following gut instinct, which is inherently biased towards the present. I hope you will agree that having a clear and authentic vision underlying your work is a good thing. The work might fall under some category of work or art, but even then I don't find it respectable when people are unable to explicitly state the vision underlying that category.

This all leads to my point about the following project. It seems to have been designed by people who feel no responsibility whatsoever for the ethical side of technology design. But I might be generalizing. I am at once bothered and amused by how blind developers and researchers can sometimes seem to the long-term consequences of what they are doing. More than that, I am bothered by their failure to incorporate a subtle and authentic feeling for what is beautiful and what is good into their creations. I might again be too worried, but this project simply blew my mind because of the extent to which it mimics a certain dystopian science fiction movie.

At ModLab, a research laboratory for modular robots, people have created a modular robot that can reassemble itself after exploding into different parts. Take a look at this:



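I can only guess at the actual control scheme, but to make the idea a bit more concrete, here is a minimal Python sketch of what distributed self-reassembly could look like: each module knows which slot it occupies in the assembly and, once it detects it has been knocked loose, simply steers back towards that slot until it can dock. This is a hypothetical illustration, not ModLab's actual algorithm; the module names and coordinates are made up.

```python
import math

# Hypothetical illustration only -- NOT ModLab's actual control code.
# Each module knows the "docking slot" it belongs to in the assembly and,
# after being knocked loose, repeatedly steps back towards that slot.

class Module:
    def __init__(self, name, x, y, slot_x, slot_y):
        self.name = name
        self.x, self.y = x, y                       # position after the kick
        self.slot_x, self.slot_y = slot_x, slot_y   # where it belongs
        self.docked = False

    def distance_to_slot(self):
        return math.hypot(self.slot_x - self.x, self.slot_y - self.y)

    def step(self, speed=0.1):
        """Move one small step towards the docking slot; dock when close enough."""
        d = self.distance_to_slot()
        if d <= speed:                  # close enough to dock in one move
            self.x, self.y = self.slot_x, self.slot_y
            self.docked = True
            return
        self.x += speed * (self.slot_x - self.x) / d
        self.y += speed * (self.slot_y - self.y) / d

# Three modules scattered by a kick, each assigned a slot in a vertical stack.
modules = [
    Module("head", 1.3, -0.4, 0.0, 2.0),
    Module("torso", -0.9, 0.7, 0.0, 1.0),
    Module("base", 0.5, 1.6, 0.0, 0.0),
]

while not all(m.docked for m in modules):
    for m in modules:
        if not m.docked:
            m.step()

print("reassembled:", [(m.name, round(m.x, 2), round(m.y, 2)) for m in modules])
```

Real modular robots of course have to deal with sensing noise, docking mechanics and modules that cannot see each other, but the basic loop of "estimate where you are, move towards where you belong, dock" is what makes the video so striking.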
Just for the sake of the analogy, here's an image from one of my favourite movies.




Now I have the feeling that these developments stem, at least partially and subliminally, from movies, as if they were visual imprints of the future that people subconsciously try to work towards. The movie's point was that artificially intelligent machines could one day exterminate humanity, but somehow it is not this point that seeps through into development; rather, the superficial effects do. I am not afraid, nor narcissistically biased towards my own species; I simply think that not thinking through your work on more philosophical levels is not good. So I am not worried, only promoting doing good, and deeply defining what is good. Blindly developing technologies and seeing where applications pop up is not the way to go, since apart from introducing a separate technology you are also contributing to technological development as a whole. I see a lot of developments that, when combined and perfected, could lead to technologies exactly like the T-1000 in Terminator 2: a shape-shifting, intelligent humanoid machine, consisting of nano-scale modules that cooperatively form a swarm, giving it capabilities like self-assembly, running faster than humans, feelings, and even conscious thought. I am only warning that this is not as harmless as it seems, and that technologies are slowly running out of control. This is already visible on the internet, where all kinds of autonomous bots are helping to make the internet behave like a complex organism beyond our control. Douglas Rushkoff referred to this as the Datasphere. A map of the internet was recently made, showing how complex it is becoming, almost like the neural structure of the human brain:



So by looking at the internet we might be able to see where things are going once we start designing the same complexity into physical products; we need to be really careful. Our minds are already spoiled with so much useless information, but if technologies become more physically active and intelligent they will also directly affect our bodies. I am warning that if we are not self-critical and very careful, we might end up with machines that don't know how to interact with us socially, and that cause us great harm when they run out of control. Just like humans, they can develop thought processes that make them believe something is the right thing to do when it is not. Just like humans, intelligent machines will need to struggle to learn what is truly good and beautiful. If we just put them out into the world as our children, unguided, they will have to learn all this by experience and, just like humanity, go through different phases or modes of being. For example, machines might develop egotistic behavior because they are tricked into believing that this will lead to their happiness, or whatever quality it is they are optimizing.
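To make that last point a bit more tangible, here is a deliberately simplistic toy example (purely illustrative; the actions and numbers are invented) of an agent that greedily optimizes a proxy quality and, by its own measure, succeeds, while the outcome a human would actually care about never improves:

```python
# Purely illustrative toy: the agent optimizes a proxy quality ("attention received")
# instead of the real quality it was meant to serve ("people actually helped").
# The proxy keeps going up while the real value stays at zero -- the machine is
# "tricked" by its own objective, much as described above.

actions = {
    # action: (attention gained, people actually helped)
    "sensational_claim": (10, 0),
    "modest_useful_answer": (2, 1),
    "quiet_background_fix": (0, 2),
}

attention, helped = 0, 0
for _ in range(20):
    # Greedy choice on the proxy metric only.
    action = max(actions, key=lambda a: actions[a][0])
    gained_attention, gained_help = actions[action]
    attention += gained_attention
    helped += gained_help

print("attention (proxy metric):", attention)   # large
print("people helped (real goal):", helped)     # zero
```

The agent is not malicious; it is simply misled by its own objective, which is exactly the kind of blindness I think we should design and reflect against.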

I think there could quite soon be a chaos point, where different processes align and a technology is created that changes us radically, just as the internet has done in a few years, but now on the more direct, bodily level of the everyday life-world. To prevent this, we must critically self-reflect, develop ourselves, and try to imbue machines with spiritual insights, like some cosmic machine consciousness.

I really hope to stir up some debate about the ethical design of intelligent systems, but above all I would promote social design processes, integrated into everyday life, that immediately, interactively, and iteratively integrate technological development with society, so as to keep technological development more under collective control and avoid unforeseen mistakes.
