J.U. Lensing: HOEReographien

  • © J.U. Lensing

    Interactive performance


Artist Statement:

    HOEReographien’s starting point is questioning the dependence of classical dance on music. To what extent can movements and movement lines become audible in space? What happens when music arises from movement and when, in that context, musicians and dancers interact? And what if the dancer’s body is filmed on stage and converted in real time into a video sculpture that, in turn, interacts with the human bodies on stage, yielding a conglomerate of material and virtual dance?

    If music results from the movement of dance, so that the structure of the composition is no longer developed, adapted, and interpreted through musical composition, what is the role of the dancer? How will this affect dance?

    How do musical variations and developmental forms appear visually when movement gives rise to them, producing a sound that is at first amorphous but later takes on a comprehensible form and structure? And which forms of contemporary light and video art result from this interaction?

    And how can this “new” process be made understandable for a live audience?

    HOEReographien is a cycle of single pieces (soli, pas de deux, trios, a quartet) in which dance produces electronic music: dance that generates video sculptures, and dance built from live structured improvisations. In mixed forms, this constellation results in an overall visual composition, a form of “autonomous” dramatic art that supports the concept of “autonomous music.”

Technical Information:

    A black-and-white camera delivers 25 frames per second to a PC running the software EyeCon, which transforms the images into control data for electronic sounds and structures programmed in Max/MSP.
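To illustrate how such camera-to-control mapping can work, here is a minimal sketch of the general technique (frame differencing), not EyeCon’s actual algorithm or API; every name below is invented for illustration. The fraction of pixels that changed between two frames gives an “amount of movement,” which can be scaled into a controller value for the sound engine.

```python
import numpy as np

def motion_amount(prev: np.ndarray, curr: np.ndarray, threshold: int = 30) -> float:
    """Fraction of pixels whose brightness changed by more than `threshold`."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return float((diff > threshold).mean())

def to_control(value: float) -> int:
    """Scale a 0..1 motion value to a 0..127 controller range (MIDI-like)."""
    return min(127, int(value * 127))

# Two synthetic 8x8 grayscale "frames": in the second, a small region lights up.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:4, 2:4] = 255            # 4 of 64 pixels changed

m = motion_amount(prev, curr)   # 4/64 = 0.0625
print(to_control(m))            # -> 7
```

At 25 frames per second this yields a continuous stream of control values that a sound patch can map to parameters such as volume, density, or pitch.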

    Three mini-DV cameras each record a different view of the stage. In certain sets of the performance, one of these three cameras feeds its pictures to a Power Mac running Max/MSP/Jitter, which transforms the color-camera frames into live video art. In a few sets, Max/MSP/Jitter also receives control data from the Max music patch running on a PowerBook, so that even the dynamics of the changes in the video sculptures are controlled by the dancers’ movements.
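The idea that control data modulates the *dynamics* of the video sculptures can be sketched as follows (a schematic illustration, not the production Max/MSP/Jitter patch; the function and parameter names are invented): a one-pole smoother whose coefficient is the incoming motion value, so that strong movement on stage makes a video parameter change quickly, while stillness freezes it.

```python
def smooth_step(current: float, target: float, motion: float) -> float:
    """Move `current` toward `target`; higher motion -> faster change."""
    motion = max(0.0, min(1.0, motion))   # clamp control value to 0..1
    return current + motion * (target - current)

# A hypothetical video parameter (e.g. blur depth) drifts toward 1.0
# only as fast as the dancers move.
blur = 0.0
for motion in (0.5, 0.5, 0.5):            # steady movement over three frames
    blur = smooth_step(blur, 1.0, motion)
print(round(blur, 3))                     # -> 0.875
```

With motion = 0 the parameter holds its value, so a motionless stage leaves the video sculpture static; this mirrors how the piece ties the behavior of the projected imagery directly to the dancers.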