Year: 2024
Programs: VCV Rack, Arduino, Max/MSP, TouchDesigner
For my thesis project, I built a customizable instrument that translates any kind of imagery into sound. The inspiration for the project comes from a realization: I had wanted to make music for a long time, but had no idea where to start. First, I have no prior knowledge or training in music. Secondly, I am used to making art, and thinking about art, through visual form. When I see an image I enjoy, I can remember it for a long time, even forever; but if I hear a song I enjoy, I might completely forget its rhythm twenty minutes later, and even when I do remember it, it is probably because I also like the album cover or the music video.
This is also the case for many people. We take in far more information through seeing than through hearing, which makes us much more accepting of the quality of visual art than of music. Anyone who has never drawn before can pick up a pen, start right away, and still end up with something presentable; but to make music, the common assumption is that you need at least some basic skill with an instrument before you can begin. This is where I started to wonder: what if I combined the process of making visual art with the process of making music, to create an instrument that anyone can play without any musical knowledge? My answer is an instrument that generates sound from imagery, with which users can compose their own music by creating their own images and customizing how the instrument sounds.
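To give a rough sense of what "generating sound from imagery" can mean in practice, here is a minimal Python sketch of one possible mapping: scan an image left to right, treat each row as a frequency band, and let pixel brightness drive the loudness of a sine tone in that band. This is an illustrative assumption only, not the mapping the actual instrument uses (the project itself was built with VCV Rack, Arduino, Max/MSP, and TouchDesigner); the libraries (Pillow, NumPy), the parameter values, and the file names are all hypothetical.

```python
# Illustrative sketch only: one possible image-to-sound mapping, not the
# instrument's actual implementation. Assumptions: columns -> time,
# rows -> frequency bands, pixel brightness -> partial amplitude.
import numpy as np
from PIL import Image
import wave

SAMPLE_RATE = 44100           # samples per second
COLUMN_DURATION = 0.05        # seconds of audio per image column
NUM_BANDS = 64                # vertical resolution: one sine partial per band
F_MIN, F_MAX = 110.0, 3520.0  # frequency range of the partials (Hz)

def image_to_audio(path: str) -> np.ndarray:
    """Scan the image left to right; the brightness of each row band drives
    the amplitude of one sine partial during that column's time slice."""
    img = Image.open(path).convert("L")                  # grayscale
    img = img.resize((img.width, NUM_BANDS))             # one row per band
    pixels = np.asarray(img, dtype=np.float64) / 255.0   # brightness in 0..1

    freqs = np.geomspace(F_MAX, F_MIN, NUM_BANDS)        # top of image = high pitch
    samples_per_col = int(SAMPLE_RATE * COLUMN_DURATION)
    t = np.arange(samples_per_col) / SAMPLE_RATE

    chunks = []
    for col in range(pixels.shape[1]):
        amps = pixels[:, col]                            # brightness of this column
        # Sum one sine per band, weighted by that band's brightness.
        chunk = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chunks.append(chunk)

    audio = np.concatenate(chunks)
    audio /= max(np.max(np.abs(audio)), 1e-9)            # normalize to -1..1
    return audio

def write_wav(audio: np.ndarray, path: str) -> None:
    """Write the signal as a mono 16-bit PCM WAV file."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes((audio * 32767).astype(np.int16).tobytes())

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    write_wav(image_to_audio("drawing.png"), "drawing.wav")
```

Under this kind of mapping, a bright stroke near the top of the image becomes a high sustained tone, a dark region becomes silence, and drawing from left to right lays out the piece in time, which is the basic intuition behind composing music by making images.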