Saturday, May 31, 2008

Evolutionary Cities



It has always struck me how parts of cities resemble natural patterns. Have you ever looked at a harbour or an airport from above? To maximize the amount of docking space, the length of the border is maximized by adding extensions like piers. The resulting patterns can look strikingly similar to the lining of the bowel, or to the branches of a tree.
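To make that boundary-maximizing logic concrete, here is a toy calculation (all numbers invented purely for illustration) showing how piers multiply the docking edge available along a fixed stretch of coast:

```python
# Toy calculation: how piers multiply docking edge along a fixed stretch of coast.
# All numbers are made up for illustration; only the geometry matters.

def quay_edge(coast_m: float) -> float:
    """Usable docking edge of a plain, straight quay."""
    return coast_m

def pier_edge(coast_m: float, n_piers: int, pier_length_m: float, pier_width_m: float) -> float:
    """Docking edge after adding n rectangular piers to the same stretch of coast."""
    remaining_quay = coast_m - n_piers * pier_width_m  # coast still usable between piers
    per_pier = 2 * pier_length_m + pier_width_m        # two long sides plus the tip
    return remaining_quay + n_piers * per_pier

coast = 1000.0  # one kilometre of straight coastline
print(f"plain quay: {quay_edge(coast):.0f} m of docking edge")
print(f"with piers: {pier_edge(coast, n_piers=10, pier_length_m=200, pier_width_m=20):.0f} m")
# Ten 200 m piers turn 1 km of straight edge into 5 km -- the same trick a gut
# lining or a tree canopy uses to maximize exchange surface in a bounded footprint.
```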

Extend this concept, and what you get is Lilium Urbanus, an extremely beautiful piece of eye candy created by Anca Risca and Joji Tsuruga, graduates of SVA.

Check it out:



So what if our technologies, and our cities, grew autonomously along with us instead of being rationally designed according to some conceptual, predictive model? That is exactly what this blog is about: products that start as an open platform, and empathically and intimately guide the user in a cross-nurturing co-evolution.

Pure Science Faction

Before I start rambling about a project I found, let's first try to agree on a couple of things.

Firstly, I hope you agree with me that technology is not neutral. It is very tempting to think of technology as something totally distinct from humans, and to assume that humans can choose to use technologies in any way according to their own intentions. In my opinion this is a very human-centered and simplistic view: it renders technology as something completely open and free to develop, and its design as less important than the way humans use it.

I think this view stems from the functionalist way technologies have mainly been designed since the industrial revolution. Now that technological products could be mass-produced, this seemed to be the way to go. What people think they need so often seems to stem from what is possible and what is new. Suddenly there seemed to be a technological answer for every perceived problem. The technologies being developed put people in a mindset where they began to see all kinds of problems that those technologies could solve, without questioning whether a given problem was the most pressing one to solve, whether there was an underlying problem that needed solving first, or whether solving problems is the best purpose for technologies in the first place.

The most obvious 'solutions' to develop were time and money savers; the values behind the production processes naturally became reflected in the products they produced. Rational thinking became the next mode of being for humans: the challenge was to maximize profit, using abstract, externally imposed systems rather than social processes as a means to do so. This extreme rational abstraction underlay the whole era of mechanization, and products needed above all to be sellable. The criterion for a product being sellable was convincing people they needed it, which came down to presenting the product as the solution to an abstract problem. People did not evaluate the introduction of a technology holistically, i.e. how it would change their entire lives, and humanity on a higher level. Products were seen as problem solvers, and man became proud of his achievement, as it seemed he had transcended nature and could live in a technologized world. I think this general affection humanity developed for technology has much to do with why we question it so little, while we question ourselves so much.

Another reason why we question technology so little is that it does not resemble us very much. This has two causes in my view. Firstly, products, since they were designed to solve specific problems in specific situations for specific people, looked very abstract, not social, and offered little to empathize with. Our main experiential and bodily involvement was pressing abstract buttons, codified semantically by means of shape, colour and icons, or not explicitly codified at all. The relevant parts of products, in other words the parts people needed to know about in order to get result y for intention x, had names, and people communicated mainly through language to understand technologies, with sentences like "if you press button a, then door b opens and compartment c gets emptied". People could not see the thing as a whole, integral entity, since technologies were seen as abstract tools: functional machines that output a desired material, energy, or information as a result of an input. Interacting with them was much like solving a mathematical formula, and this made people more rational beings. The processes we perceive shape the mind so it can cope with them. But the human brain is, in my view, too often tricked into believing that these processes are good and should be accepted, as this is the easiest thing to do. The brain is a self-optimizer, but inherently partially blind, so it must be steered by self-reflection.

The second cause of why we questioned technology so little in the modern period, and still don't, is related to human self-centeredness. Humans are social beings, evolutionarily speaking still not too far from the other primates in my view. Our brain is still hard-wired the way it was when we lived in small groups and intra-species issues were more pervasive in daily life than anything else. So naturally we pay a lot of attention to anything that displays human characteristics. And because of this hard-wiring, people still instinctively feel the same things as primates do, for example concerning issues of dominance. We are therefore instinctively preoccupied with thinking about other people instead of thinking about other entities in our world, like technologies. Overcoming these instincts is a struggle.

Overcoming instincts, though, is not the only way towards a symbiotic and equal relationship with technology. Technology must also be designed to be more humanized and social. My point is that technology is not separate from us, though it seems to be, but an intrinsic part of us. We externalize ourselves through our technologies, and so they become part of humanity, as if they were our temporary organs. This view, in which humans co-evolve with their technologies and thus form a new, more transformable and expanded organism, is more holistic, more integrative, and more attractive to me. At this point it is undeniable how connected we are with our technologies. For example, I am now linked up to the internet, sharing my views with millions of people. And astonishingly, within a radius of one meter from my head are 24 electronic devices: three mobile phones, a wireless home phone, two digital cameras, a webcam, a laptop, three computer mice, a graphic calculator, four USB flash drives, an MP3 player, two desk lamps, a massaging device, a printer, a Wacom tablet, a lava lamp and a water boiler. If we just lost a bit of self-centeredness it would be easy to see that these things are part of us, and that the concepts we have of our selves, our bodies, and our minds are only social constructs.

So if you see technologies as part of us, as things with which we necessarily interact socially, it is easy to see how they cannot be neutral. Any human being strives for certain values more than for others, so the technologies he uses and designs will naturally incorporate those values. Philosophers like Andrew Feenberg and Peter-Paul Verbeek have already given strong arguments for this thesis. The latter, for example, shows how technologies amplify or reduce perceptions, and invite or inhibit certain uses. This is in my opinion a much more subtle and complex view than technological determinism. It makes people feel responsible and thoughtful about their actions and the way they design technologies. Technologies are not neutral; they are value-biased.

The second thing on which I would like people to agree with me is about depth, and how human work should be consistent across an infinite number of conceptual layers of complexity. That sounds hard, but it can be reached by a simple and seemingly childish exercise: ask yourself why you are doing it, and keep asking why after each answer. Ultimately you arrive at the essence of yourself and of your work: the underlying ideas. You can then extrapolate these ideas and critically ponder what kind of world they would result in. This is your utopia. Do the same for the opposite ideas, and you have your dystopia. I'd say that when it's not completely clear why people are doing things, or there is confusion about whether something is good or bad, people should think a bit more about what doing it would mean for the world as a whole, instead of following gut instinct, which is inherently present-biased. I hope you will agree that having a clear and authentic vision underlying your work is a good thing. The work might fall under some category of work or art, but even then I don't find it respectable when people are unable to explicitly state the vision underlying that category.

This all leads to my point about the following project. It seems to have been designed by people who feel no responsibility whatsoever for the ethical side of technology design. But I might be generalizing. I am both bothered and amused by how developers and researchers can sometimes seem blind to the long-term consequences of what they are doing. More than that, I am bothered by their failure to incorporate a subtle and authentic feeling for what is beautiful and what is good into their creations. I might again be too worried, but this project just blew my mind because of the extent to which it mimics a certain dystopian science fiction movie.

At ModLab, a research laboratory for modular robots, researchers have created a modular robot that can reassemble itself after exploding into separate parts. Take a look at this:



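ModLab's actual controller isn't described in this post, so purely as a hypothetical illustration of the general idea, here is a minimal sketch: scattered modules that each know their assigned slot in a target shape and independently walk toward it on a grid. Everything in it (the grid, the greedy walk, the target shape) is my own invention, not ModLab's method.

```python
# Hypothetical sketch of distributed self-reassembly on a 2D grid.
# NOT ModLab's controller -- just the general idea: each scattered module
# independently walks toward its assigned slot in the target shape.

import random

TARGET = [(0, 0), (1, 0), (2, 0), (1, 1)]   # desired shape: a small T of modules

def step_toward(pos, goal):
    """Move one cell along the axis with the largest remaining distance."""
    x, y = pos
    gx, gy = goal
    if abs(gx - x) >= abs(gy - y) and gx != x:
        return (x + (1 if gx > x else -1), y)
    if gy != y:
        return (x, y + (1 if gy > y else -1))
    return pos

# "Explosion": scatter one module per target slot across a 20x20 field.
field = [(x, y) for x in range(20) for y in range(20)]
modules = dict(zip(TARGET, random.sample(field, len(TARGET))))

for tick in range(200):
    occupied = set(modules.values())
    for slot, pos in modules.items():
        nxt = step_toward(pos, slot)
        # Crude collision avoidance: wait if the next cell is taken. A greedy
        # rule like this can stall; real systems negotiate with neighbours.
        if nxt != pos and nxt not in occupied:
            occupied.discard(pos)
            occupied.add(nxt)
            modules[slot] = nxt
    if all(pos == slot for slot, pos in modules.items()):
        print(f"reassembled after {tick + 1} ticks")
        break
```

Even this crude version captures what makes the demo unsettling: no central brain is needed, only pieces that each know where they belong.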
Just for the analogy, here's an image from one of my favourite movies.




Now I have the feeling that these developments stem, at least partially subliminally, from movies, as if they were visual imprints of the future that people subconsciously try to work towards. The movie's point was that artificially intelligent machines could one day exterminate humanity, but somehow this point is not what seeps through into development; rather, the superficial effects seep through. I am not afraid, nor narcissistically biased towards my own species; I simply think that not thinking your work through on more philosophical levels is not good. So I am not worried, only promoting doing good, and deeply defining what is good.

Blindly developing technologies and seeing where applications pop up is not the way to go, since apart from introducing a separate technology you are also contributing to technological development as a whole. I see a lot of developments that, when combined and perfected, could lead to technologies exactly like the T-1000 in Terminator 2: a shape-shifting, intelligent humanoid machine, consisting of nano-scale modules that cooperatively form a swarm, giving it capabilities like self-assembly, running faster than humans, feelings, and even conscious thought. I am only warning that it is not as harmless as it seems, and that technologies are slowly running out of control. This is already visible on the internet, where all kinds of autonomous bots are making the internet behave like a complex organism beyond our control. Douglas Rushkoff referred to this as the Datasphere. A map of the internet was recently made, showing how complex it is becoming, seemingly almost like the neural structure of the human brain:



So by looking at the internet we might be able to see where things are going once we start designing the same complexity into physical products; we need to be really careful. Our minds are already spoiled with so much useless information, but if technologies become more physically active and intelligent, they will also directly affect our bodies. I am warning that if we are not self-critical and very careful, we might end up with machines that don't know how to interact with us socially, and that cause great harm when they run out of control. Just like humans, they can develop thought processes that make them believe something is the right thing to do when it is not. Just like humans, intelligent machines will need to struggle to learn what is truly good and beautiful. If we just put them out there as our children, unguided, they will need to learn all this by experience and, just like humanity, go through different phases or modes of being. For example, machines might develop egotistic behaviour, because they are tricked into believing it will lead to their happiness, or whatever quality it is they are optimizing.

I think there could quite soon be a chaos point, where different processes align and a technology is created that changes us radically, just as the internet has done in a few years, but this time on the more direct, bodily level of the everyday lifeworld. To prevent this, we must critically self-reflect, develop ourselves, and try to imbue machines with spiritual insights, like some cosmic machine consciousness.

I really hope to stir up some debate on the ethical design of intelligent systems, but above all I would promote social design processes, integrated into everyday life, that immediately, interactively, and iteratively integrate technological development with society, so as to keep technological development under more collective control and avoid unforeseen mistakes.

Monday, May 26, 2008

Hylozoic Soil



Again, a post on physically transforming structures. And again it features an artist/architect who was shown at MoMA's exhibition 'Design and the Elastic Mind'. Philip Beesley makes organic installations that seem to make spaces come alive.

A recent work, Hylozoic Soil, is based on hylozoism, the philosophical view that all matter contains life. It is an interactive space that exhibits complex behaviour as you walk through it, as if it wants to absorb you. Nitinol-activated arms gently move in reaction to people's behaviour, and pillars seem to breathe, creating an intimate connection between man and machine and rendering the whole a seemingly hybrid organism.
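Beesley's actual control system isn't documented in this post, so purely as a thought experiment, here is a minimal simulation of the kind of decentralized sense-and-react behaviour described above: a row of imaginary "fronds", each with its own proximity sensor and nitinol-style actuator, where a visitor's presence excites one frond and the excitation ripples outward, decaying, to its neighbours.

```python
# Thought-experiment sketch (not Beesley's control code): a row of "fronds",
# each with a proximity sensor and a nitinol-style actuator. A visitor excites
# one frond and the excitation ripples outward, decaying as it spreads --
# roughly the breathing, wave-like behaviour described above.

N_FRONDS = 12
DECAY = 0.6        # how much of a frond's excitation survives each tick
SPREAD = 0.3       # fraction of excitation passed to each neighbour per tick

excitation = [0.0] * N_FRONDS

def sense(frond: int, tick: int) -> float:
    """Stand-in for a real proximity sensor: a visitor lingers at frond 5."""
    return 1.0 if frond == 5 and tick < 3 else 0.0

for tick in range(10):
    nxt = [0.0] * N_FRONDS
    for i in range(N_FRONDS):
        local = excitation[i] * DECAY + sense(i, tick)
        left = excitation[i - 1] * SPREAD if i > 0 else 0.0
        right = excitation[i + 1] * SPREAD if i < N_FRONDS - 1 else 0.0
        nxt[i] = min(1.0, local + left + right)
    excitation = nxt
    # An actuator would curl each frond in proportion to its excitation:
    print("".join("#" if e > 0.5 else "+" if e > 0.1 else "." for e in excitation))
```

Even this crude local rule, with no central controller at all, already produces the wave-like, breathing quality the installation is described as having.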

What I like about Philip Beesley is his many-sidedness: he integrates an artistic vision, high technology, and unique aesthetics, while also using analytical, rational processes in the design of his structures, inspired by, for example, Buckminster Fuller and Chuck Hoberman. What could be expanded upon a lot is the extent of this integration; the behaviour is still quite minimal and seems dictated by technical constraints, as does the visual appearance. The materials used in his structures still seem quite mechanical and hard, much like a frozen spider web. For me it would be a brilliant piece if he worked on a structure with a seamless soft skin and an entire decentralized sensor network embedded into it, making for a richly responsive system that could really give you the experience of being part of the digestive system of a technological organism. But maybe that's a little too direct for him. I can also see his structures growing autonomously, fed, metaphorically speaking, by the empathy the structure receives from the people interacting with it, so that the effect is a cross-nurturing of man and machine.

To make this happen, of course, many disciplines will need to join hands intensively. I'm thinking especially of genetic engineering for this stuff. Let the great convergence begin!

Make sure to check out his Youtube page if you want to see the installations in action:

http://www.youtube.com/user/pbeesley1234

Environmentally responsive textiles




Staying on the topic of structural work, I'd like to mention the work of Rachel Wingfield and Mathias Gmachl, both of whom I met last week in London, where I was doing a workshop with Philips Design. Their website is www.loop.ph, for the hyperlink buffs among you.

Rachel designs and builds lightweight woven structures inspired by natural growth patterns. Her explorations have led her, for example, to structures that can be packed flat and then expand again when you throw them out, which is quite stunning to see. I love her practical, explorative approach guided by nature; great discoveries aren't made by looking for them! And I'd love to one day see a physically adaptive structure, one that could adapt to people or to the weather conditions...

Here is a little movie about a recent project of hers for the Royal Botanical Garden in London:

http://www.artisancam.org.uk/pages/artists/loop.ph/timelapse.php?mnbtn=2&icn=1