by Ramin Akhavijou
The artist’s statement:
In this project, I set out to create virtual instruments that challenge the limitations of conventional music-making through technology. To accomplish this, I delved into the realm of ultrasonic sensors, electronic devices that measure the distance between themselves and a target object. These sensors achieve this by emitting ultrasonic sound waves, which bounce off objects and return to the sensor. By analyzing the time it takes for the sound waves to travel out and back, the sensor calculates the distance.
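The time-of-flight calculation the sensor performs can be sketched in a few lines of Python. This is a simplified model, not the actual firmware; the speed of sound at room temperature (roughly 343 m/s) is the one physical assumption:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Convert a round-trip echo time (microseconds) to distance in cm.

    The pulse travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

# An echo returning after ~1166 microseconds corresponds to ~20 cm.
print(round(echo_to_distance_cm(1166), 1))
```

On an Arduino, the echo duration would come from timing the sensor's echo pin; here it is simply a function argument.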
Armed with this sensor technology, I set out to integrate it into my musical endeavor. I began by designing a system that converted the sensor's output, namely the reflected sound, into an electrical signal using Arduino, an open-source electronics platform that consists of both hardware and software components. This signal, with its unique characteristics, became the foundation for generating musical sounds in my digital realm.
To bring my vision to life, I turned to Max/MSP, a software environment renowned for its ability to manipulate and process audio and visual data in real time. Within this creative playground, I embarked on the intricate task of mapping the sensor data to musical notes and sounds, carefully assigning each one to a specific distance or range of distances captured by the ultrasonic sensors. This mapping allowed me to construct a virtual instrument that responded dynamically to changes in the target object's proximity to the sensor. By adjusting the distance of my hand from the sensor, I virtually “played” notes on this instrument. It was a truly immersive experience, as my movements directly influenced the music that resonated from the digital realm.
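The mapping step can be illustrated with a small Python sketch. The distance bands and MIDI note numbers below are hypothetical stand-ins; in the actual piece this mapping lived inside the Max/MSP patch:

```python
# Hypothetical mapping: each 5 cm band of hand distance selects
# one note of a C major scale (MIDI note numbers 60-72).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]
BAND_CM = 5

def distance_to_note(distance_cm: float) -> int:
    """Quantize a sensor distance into one note of the scale."""
    index = int(distance_cm // BAND_CM)
    index = max(0, min(index, len(C_MAJOR) - 1))  # clamp to scale range
    return C_MAJOR[index]

print(distance_to_note(12))  # 12 cm falls in the third band -> 64 (E4)
```

Moving a hand smoothly toward the sensor would step down through the scale, which is the basic playing gesture the statement describes.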
The result was a fusion of technology and art, where the virtual instrument became an extension of my creative expression. I found myself exploring new sonic territories, as the interplay between physical movement and musical output opened up endless possibilities. Each gesture and adjustment of my hand created a unique auditory response, enabling me to sculpt sounds in real time, all through the medium of distance manipulation. My journey into the world of virtual instruments, propelled by ultrasonic sensors, Arduino, and Max/MSP, allowed me to venture beyond my conventional realm of music-making.
The artist’s statement:
Soundstallation 2 represents a fusion of water, touch, technology, and audiovisual elements. The Water Detection sensor’s ability to respond to water droplets allowed for dynamic control over the sound and visual aspects of the installation.
In Soundstallation 2, I incorporated a Water sensor to manipulate the speed and frequency of the sound in my solo clarinet piece (performed and recorded by Thomas Piercy in 2019). This Water sensor also played a significant role in displacing the projected video and altering its color on the wall.
To commence the installation, I captured my painting on the wall and projected it back onto the same surface, resulting in a visually superimposed effect that added depth to the overall presentation. The Water sensor became an important component of the installation, reacting to the impact of water droplets. Whenever drops of water landed on the sensor, it transmitted a signal to Arduino, which then analyzed the quantity of water captured by the sensor. Utilizing this data in Max/MSP, I employed it to control both the sound and video elements.
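The control chain from droplet to audiovisual change can be sketched as a normalization step. The 0-1023 input range matches a typical Arduino analog reading; the output parameter ranges are illustrative, not the values of the actual Max/MSP patch:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from one range to another, clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def water_to_controls(raw_reading: int) -> dict:
    """Map a raw water-sensor reading (0-1023) to illustrative
    playback-rate, hue-shift, and video-displacement values."""
    return {
        "playback_rate": scale(raw_reading, 0, 1023, 0.5, 2.0),
        "hue_shift_deg": scale(raw_reading, 0, 1023, 0, 180),
        "displacement_px": scale(raw_reading, 0, 1023, 0, 40),
    }

print(water_to_controls(512))
```

More water on the sensor pushes all three parameters upward at once, which is how a single droplet can simultaneously shift the video's position, alter its color, and change the music.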
As a consequence, the projected video experienced displacement, subtly shifting its position in response to the water droplets. Moreover, the colors within the video underwent shifts, engendering a visual transformation. In tandem with the visual effects, the Water sensor also exerted influence over the sound component of the installation. When the droplets made contact with the sensor, they triggered changes in both the speed and frequency of the music.
Running Gertrude Stein’s poem “If I Told Him, A Completed Portrait of Picasso” was a project that involved the integration of three computers, four displays, and two microcontrollers. These components worked in unison to control and monitor two ultrasonic sensors. The primary objective was to gather data from the sensors and send it to Max/MSP to dynamically modify the speed of the accompanying audio file. This approach created an auditory experience in which the poem’s tempo varied depending on the sensors’ data: as the distance between the sensors and my hand decreased, the audio playback slowed down, and vice versa.
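The relationship between hand distance and playback speed can be sketched as a clamped linear map. The distance and speed ranges here are hypothetical; the real control ran through Max/MSP:

```python
def distance_to_speed(distance_cm, min_cm=5, max_cm=60,
                      min_speed=0.5, max_speed=1.5):
    """Closer hand -> slower playback; farther hand -> faster.

    Distance is clamped to [min_cm, max_cm] and mapped linearly
    onto [min_speed, max_speed].
    """
    d = max(min_cm, min(distance_cm, max_cm))
    t = (d - min_cm) / (max_cm - min_cm)
    return min_speed + t * (max_speed - min_speed)

print(distance_to_speed(5))   # hand at its closest -> slowest rate
print(distance_to_speed(60))  # hand at its farthest -> fastest rate
```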
Moreover, the sensor data played a role in influencing the visual aspect of the performance as well. By harnessing the real-time information, the color scheme of the accompanying video was dynamically controlled. As the distance measured by the sensors increased, the movie unfolded in a vibrant burst of colors, gradually transforming into a kaleidoscope of hues. To enhance the overall experience, the program generated a series of random words extracted from Stein’s poem, which were then displayed across four distinct screens. This display of text added an additional layer of depth to the performance, intertwining the words of the poem with the fluctuations in sensor data.
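The random-word display across the four screens can be sketched as below. The word list is a small excerpt of the poem's vocabulary chosen here for illustration, not the program's actual word pool:

```python
import random

# A few words drawn from Stein's "If I Told Him" (illustrative subset).
STEIN_WORDS = ("if", "I", "told", "him", "would", "he", "like",
               "it", "Napoleon", "exactly", "resemblance", "shutters")

def words_for_screens(n_screens=4, seed=None):
    """Pick one random word from the poem's vocabulary per screen."""
    rng = random.Random(seed)
    return [rng.choice(STEIN_WORDS) for _ in range(n_screens)]

print(words_for_screens(seed=7))
```

Re-running the selection on each sensor update would keep the four screens churning through the poem's vocabulary alongside the audio and color changes.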
In Soundstallation 4, I used a circuit I designed, which involved applying conductive tape onto both a paper board and a table surface. This setup facilitated connections to an electricity source on one side and 3V DC vibration motors on the other, with the flow of electrical current controlled by a conductive glove acting as an on/off switch. I selected a diverse range of objects with varying sound qualities and materials and attached the motors to them. The resulting auditory experience encompassed a diverse range of buzzing sounds generated either by the vibrating objects or by the motors themselves. Two distinct types of vibration motors were used: coin motors, which are flat and compact, and larger cylinder motors that produced more pronounced vibrations. By adjusting the voltage and frequency of the electrical signal, the intensity and patterns of the vibrations could be manipulated, adding a dynamic element to the setup.

While the range of dynamic manipulation was limited, it still offered versatility, transforming the setup into an instrument that provided an engaging experience. Additionally, this setup allowed for the spatial arrangement of objects, enabling the performer to place them in various positions and thus alter the perception of the performance in a compelling manner. Moreover, I aimed to enhance the sonic and visual aspects of the performance through meticulous video recording. By capturing the performances in close-up detail, I sought to accentuate not only the auditory elements but also to provide a visual experience that complemented the intensity of the sounds, resulting in a multisensory presentation.
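The voltage adjustment that shapes the motors' intensity can be sketched as a PWM duty-cycle calculation, a common way to vary the effective voltage a DC motor receives. The 3 V supply matches the motors described above; everything else is a simplified model, not the actual circuit:

```python
def duty_cycle_for_voltage(target_v: float, supply_v: float = 3.0) -> float:
    """Approximate the PWM duty cycle (0-1) needed to reach an
    average target voltage from a fixed DC supply.

    A higher duty cycle means the motor is powered for a larger
    fraction of each cycle, producing stronger vibration.
    """
    if not 0 <= target_v <= supply_v:
        raise ValueError("target voltage must be within supply range")
    return target_v / supply_v

# Driving a 3 V motor at roughly half intensity:
print(duty_cycle_for_voltage(1.5))  # -> 0.5
```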