Gridjam, an international performance of specially composed music played by six physically separated musicians and visualized in real time in the Virtual Color Organ, evolved in both its collaborations and its technology over a fifteen-year development process.
Even though Gridjam reached the point where all of the participants and venues were committed, and I presented the project at international conferences and symposia, we ultimately could not produce it because funding did not come through. I describe Gridjam in the present tense because we may yet realize this performance in the future.
When the performance begins, the viewer-listeners are in a world of hand-drawn landscapes modeled in 3-D. All of the landscapes are black and white beneath a black sky. As the music plays, a three-dimensional, colored, image-embedded geometric structure takes shape in the space above the landscape, constructed from two-dimensional pictures of the landscapes that are linked metonymically to the instrument families. Each of these colored polygons has a particular transparent hue; I based the color on a timbre analysis of which instrument is sounding and what playing technique it is using at that moment. The saturation of the color reflects changing dynamics (loud, soft, and the steps between them). These flat strips of landscape imagery are placed along a vertical axis according to their pitch values: a higher pitch appears higher in the space than a lower one. The volume of the attack controls the width of the image-embedded polygons. After the music has played, a complete sculpture remains that can be explored interactively: the viewer can move at will through the space and touch elements of the sculpture to hear the sound that originally produced it.
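The mapping just described, timbre to hue, dynamics to saturation, pitch to vertical placement, and attack volume to width, can be sketched as a simple lookup-and-scale function. This is a minimal illustration, not the project's actual implementation; the class, field names, and the hue table are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """One analyzed sound event (hypothetical schema, for illustration)."""
    instrument: str   # e.g. "violin"
    technique: str    # e.g. "pizzicato"
    pitch: int        # MIDI note number, 0-127
    dynamic: float    # normalized loudness, 0.0-1.0
    attack: float     # normalized attack volume, 0.0-1.0

# Assumed lookup table: (instrument, playing technique) -> hue in degrees.
# The actual colors came from the author's timbre analysis.
TIMBRE_HUES = {
    ("violin", "arco"): 30.0,
    ("violin", "pizzicato"): 50.0,
    ("clarinet", "legato"): 200.0,
}

def visual_parameters(note: NoteEvent) -> dict:
    """Map one analyzed note to a polygon's visual parameters."""
    hue = TIMBRE_HUES.get((note.instrument, note.technique), 0.0)
    return {
        "hue": hue,                    # timbre chooses the transparent hue
        "saturation": note.dynamic,    # louder playing -> more saturated color
        "height": note.pitch / 127.0,  # higher pitch -> higher placement
        "width": note.attack,          # stronger attack -> wider image strip
    }
```

The point of the sketch is that each visual dimension is driven by exactly one musical dimension, so the finished sculpture remains legible as a trace of the performance.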
I created a process for visualizing sound files played by Alvin Curran, Gridjam's composer, on the Disklavier piano. These files could run from several seconds to over a minute and included events such as coins being thrown, an elephant trumpeting, or Maria Callas singing a single note. I modeled over two hundred of these sounds into 3-D shapes, encoding pitch changes on the front and dynamic changes (loud and soft) on the top of each unit. When the player depresses a key, the corresponding model begins to emerge and continues until the key is released; the length of time the key is held determines how much of the model, or how many models, appear in the virtual space.
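The key-hold behavior might be sketched as follows. The function name is mine, and the assumption that time held beyond one full model spills over into additional copies is my reading of the description, not a documented detail of the system.

```python
def models_revealed(hold_seconds: float, model_seconds: float) -> tuple[int, float]:
    """Given how long a key was held and how long one sound model takes
    to fully emerge, return (number of complete models shown,
    fraction of the next model revealed)."""
    if model_seconds <= 0:
        raise ValueError("model_seconds must be positive")
    complete, remainder = divmod(hold_seconds, model_seconds)
    return int(complete), remainder / model_seconds
```

For example, holding a key for 2.5 seconds when each model takes 1 second to emerge would show two complete models and half of a third.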