When, in her 20s, Thecla Schiphorst quit a promising career in software design to dance, she hardly abandoned technology for art. She moonlighted by teaching computer science, then helped conceive a software tool that transformed choreography and was seized upon by dance designers the world over.
Among these was choreographer Merce Cunningham. Using LifeForms since 1989, the dance legend has created more than a dozen dances - including Ocean, staged in the round, comprising 19 sections and 15 dancers, and so complex as to be a choreographic near-impossibility. "Technology and the dance are now mated," Cunningham says of Schiphorst's work.
"Dance is far more technical than computers can ever be,"Schiphorst tells Evantheia Schibsted, who caught up with the Vancouver-based artist in Berkeley, California.
Wired: LifeForms has been adopted by animators, special effects artists and videogame designers. But doesn't it run counter to the usual notion of creating movement - by experimenting with the human body?
Schiphorst: LifeForms is an idea generator. It lets you see even very complex movements as many times as you want from different perspectives. It's hard to ask a dancer to do a strenuous leap 17 times just to get new ideas. LifeForms lets you see movement through 3D, Michelin Man-like figures on a computer screen.
Merce Cunningham has said LifeForms "is not revolutionising dance but expanding it, because you see movement in a way that was always there - but wasn't visible to the naked eye." Do you agree?
Merce is clarifying a point, especially to the media, which tends to promote what I call the credo of the "technologically correct". One CNN reporter introduced a clip on LifeForms by saying, "Finally technology is coming to the rescue of choreographers." I never imagined technology rescuing choreographers. It's really the opposite: the non-linguistic knowledge inherent in physical training is a richly technical world that can inform technological development. One reason LifeForms operates so well is that our bodies work so well.
How can the body inform interface design?
Our current interface centres on that Cartesian notion of the mind-body split, a centuries-old mythology of disembodiment that devalues the physical. We're intoxicated with the idea of being disconnected from our bodies. In discussions of VR, for example, there's much talk about disembodiment and fear of it - even though head-mounted displays and datagloves are dependent on a sophisticated fine-tuning to the body.
I'm interested in embodiment. The only way we can actually live in a virtual world is through our bodies. Dance originates from learning, experiencing just how the physical body operates. Understanding movement comes not from the abstract, but from working physically. Everything I do concerns the embodiment of our physical language within technology.
This is especially true, isn't it, with your latest project, an installation that depends on participants' physical interaction in order to operate?
It's called Body Maps: Artifacts of Mortality, and experiencing it is about sensing, the sensual, touch. In computer technology we're not engaged physically in this way: mouse interaction, for example, is literal and simplistic. So for Body Maps, I've covered a table with white velvet to create a sensual surface. When you stroke the table, it detects the location and the intensity of your touch. It also detects the proximity of your hand and its gestures to the table. Embedded beneath the table are two grids, one of force-sensitive resistors, the other of electromagnetic-field sensors. When you caress the table, images of my body - immersed beneath the water or caught in flames - are projected onto the velvet surface from the ceiling. Also, there's an audio score of breathing and water sounds. So, for example, when you swish your hand six inches above the table it may sound as if you're mixing water in a pool. These images are archetypal - about the body and its relationships to water and the earth.
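To make the mechanics of such a piece concrete, here is a minimal sketch, in Python, of how a touch-and-proximity sensing loop like the one Schiphorst describes might drive imagery and sound. The grid size, the threshold, and every sensor and media function below are illustrative placeholders, not the installation's actual software.

```python
# Hypothetical sketch of the sensing loop behind an installation like Body Maps:
# a grid of force-sensitive resistors (touch location/intensity) and a grid of
# electromagnetic-field sensors (hand proximity) drive projected imagery and sound.
# The sensor-reading and media functions here are placeholders, not the real code.

import time

GRID_ROWS, GRID_COLS = 8, 8          # assumed grid resolution
TOUCH_THRESHOLD = 0.15               # normalised force below which we ignore noise

def read_force_grid():
    """Placeholder: return a grid of normalised force readings (0.0-1.0)."""
    return [[0.0] * GRID_COLS for _ in range(GRID_ROWS)]

def read_proximity_grid():
    """Placeholder: return a grid of normalised hand-proximity readings."""
    return [[0.0] * GRID_COLS for _ in range(GRID_ROWS)]

def project_image(name, intensity):
    """Placeholder for projecting one of the body images onto the velvet."""
    print(f"project {name} at intensity {intensity:.2f}")

def play_sound(name, volume):
    """Placeholder for the breathing/water audio score."""
    print(f"play {name} at volume {volume:.2f}")

def strongest_cell(grid):
    """Return ((row, col), value) of the strongest reading in a grid."""
    best, where = 0.0, (0, 0)
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            if value > best:
                best, where = value, (r, c)
    return where, best

while True:
    touch_pos, touch_force = strongest_cell(read_force_grid())
    _, proximity = strongest_cell(read_proximity_grid())

    if touch_force > TOUCH_THRESHOLD:
        # Direct caress: map touch location to an image, force to its intensity.
        image = "body_in_water" if touch_pos[1] < GRID_COLS // 2 else "body_in_flames"
        project_image(image, touch_force)
        play_sound("breathing", touch_force)
    elif proximity > 0.0:
        # Hand hovering above the table: gesture stirs the water sounds.
        play_sound("water_swirl", proximity)

    time.sleep(0.05)                 # roughly 20 sensor sweeps per second
```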
The beauty of the body in motion seems to elicit tenderness from participants. But you have said some people react by getting upset.
They don't understand the table's rules, which enable the piece to operate. But experiencing it is not about discovering its operative systems. This piece is not a videogame where you win or lose. It's about connecting with it through one's body. By letting us listen to our physical experience of ourselves, this work moves from technology to art.
As you said, LifeForms is based on the rich and intelligent way the body functions. You can't make the program's figures do things that are physically impossible, such as turn a head completely around or bend a leg against the knee's natural range of movement. Now you're developing new software for motion capture. What does this interface offer choreographers that LifeForms doesn't?
LifeForms synthesises movement very intelligently, but synthesis is not sampling. When you synthesise, you create movement within the computer tool rather than from the physical world. Motion capture is "choreographic sampling". Choreographers can use movement samples much as composers use sound samples. Choreographers are able to record, or capture, their own movement directly into the software. Sampling opens up the range of applications and complexity available. A hand gesture can be mapped to the spine and legs.
The interface software I'm developing uses Ascension Technologies hardware called Flock of Birds. The six half-inch birds are radio transmitter/receiver sensors, which means they collect 3D XYZ coordinates. You can place them anywhere on your body. The software translates your movement from the birds into the computer system. The interface works with the human body model contained within the program - intelligence embedded within the algorithms.
Just six birds will capture the entire body.
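To illustrate the "choreographic sampling" idea, here is a minimal Python sketch of capturing a movement phrase from six body-worn sensors and retargeting one segment's gesture onto other parts of a body model. The sensor placements, the read_birds polling call, and the retargeting scheme are assumptions for illustration, not the actual Flock of Birds API or Schiphorst's software.

```python
# Hypothetical sketch of the sampling-and-retargeting idea Schiphorst describes:
# six sensors report 3D XYZ positions, each attached to a named body segment,
# and a captured gesture can be re-applied to a different part of the body model.
# The sensor-polling call is a placeholder; the real hardware interface differs.

from dataclasses import dataclass

# Where the six "birds" are assumed to be worn for this example.
SENSOR_PLACEMENT = ["head", "right_hand", "left_hand", "pelvis", "right_foot", "left_foot"]

@dataclass
class Sample:
    """One motion-capture frame: an XYZ position per sensor."""
    positions: dict  # segment name -> (x, y, z)

def read_birds():
    """Placeholder for polling the six sensors; returns one frame of XYZ triples."""
    return Sample(positions={name: (0.0, 0.0, 0.0) for name in SENSOR_PLACEMENT})

def capture(n_frames):
    """Record a short movement phrase as a list of frames ('choreographic sampling')."""
    return [read_birds() for _ in range(n_frames)]

def gesture_trajectory(frames, segment):
    """Extract one segment's path through space from a captured phrase."""
    return [frame.positions[segment] for frame in frames]

def retarget(trajectory, target_segments, scale=1.0):
    """
    Map a sampled trajectory onto other parts of the body model -
    e.g. a hand gesture driving the spine and legs - by reusing its
    displacements, optionally rescaled for limbs of different reach.
    """
    return {target: [(x * scale, y * scale, z * scale) for (x, y, z) in trajectory]
            for target in target_segments}

# Usage: sample a hand gesture, then let the spine and legs perform it.
phrase = capture(n_frames=120)                       # a few seconds of movement
hand_path = gesture_trajectory(phrase, "right_hand")
mapped = retarget(hand_path, ["spine", "right_leg", "left_leg"], scale=1.5)
```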
Evantheia Schibsted is a freelance journalist who lives, works and dances in the San Francisco Bay Area.