That’s the theory. But as she engaged more deeply with the topic of AI in exchanges with scientists from the Schaufler Lab and the TU, the fellow quickly realized that this project could not be implemented within her six-month residency: first she would have had to collect vast amounts of data from various choirs. So she adjusted her project and now asks herself: Where is all our data actually located?
That question ultimately led her to a server farm that is normally accessible only to its staff. The halls were extremely loud, and everywhere there was a smell of burnt plastic. To her great astonishment, the sound artist, with a musician’s fine hearing, perceived an overwhelming variety of sounds in all this din: sounds generated not primarily by the data itself, but by the components around it, the racks, the water cooling and the air blowers.
Ruiz positioned microphones in front of the servers and recorded the sounds. “I can isolate the microtones in my head,” she says. “When we recorded all these different tones, we found over 25. If you see what frequency they have, they mostly don’t belong to our western music scale, they’re always between the notes.”
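Her observation that the recorded frequencies fall “between the notes” can be made precise: a tone sits between notes when its distance from the nearest pitch of the western twelve-tone equal-tempered scale, measured in cents, is far from zero. The following sketch illustrates that calculation; the function name, the 440 Hz reference and the example frequency are illustrative assumptions, not part of Ruiz’s actual method.

```python
import math

A4 = 440.0  # assumed reference pitch in Hz
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    """Return the nearest equal-tempered note and the deviation in cents.

    A deviation far from 0 cents means the tone falls 'between the notes'
    of the western scale. (Illustrative helper, not the artist's method.)
    """
    # Distance from A4 in equal-tempered semitones
    semitones = 12 * math.log2(freq_hz / A4)
    nearest = round(semitones)
    cents_off = 100 * (semitones - nearest)  # 100 cents = 1 semitone
    midi = 69 + nearest  # MIDI number of A4 is 69
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, cents_off

# A hypothetical server hum at 450 Hz lies well above A4 but below A#4:
print(nearest_note(450.0))
```

A microtone measured this way would show a deviation of tens of cents, too far from any scale step to be notated conventionally.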
She later had the notes sung by an opera singer. She tried it herself as well, but found it difficult to hold the tones, she reports. She is all the more eager to keep working with this machine music and to compose something from it that does not correspond to our western ideas of music, but follows a more universal idea.