Statement [June 2023]

Consider how computers interpret music: decomposing rhythm and sound into quantized sine waves, then layering them back into cohesive amalgamations of sonic achievement. Through this process, what do we learn about the music, and perhaps even about ourselves?
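If that framing feels abstract, a small sketch can make it concrete. What follows is a minimal illustration, not any particular system's pipeline: NumPy's FFT pulls a toy chord apart into its sinusoidal components, then layers them back into a waveform. The sample rate, note frequencies, and amplitudes are all illustrative choices.

```python
import numpy as np

# Illustrative parameters: one second of audio at the CD sample rate.
SAMPLE_RATE = 44_100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

# A toy "chord": three sine waves layered together (C4, E4, G4).
signal = (
    0.5 * np.sin(2 * np.pi * 261.63 * t)
    + 0.3 * np.sin(2 * np.pi * 329.63 * t)
    + 0.2 * np.sin(2 * np.pi * 392.00 * t)
)

# Abstract: decompose the sound into its sinusoidal components.
spectrum = np.fft.rfft(signal)

# Layer back: resynthesize the waveform from those components.
reconstructed = np.fft.irfft(spectrum, n=len(signal))

# For a clean, band-limited signal the round trip is nearly lossless;
# only floating-point rounding remains (on the order of 1e-16).
print(np.max(np.abs(signal - reconstructed)))
```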

How does the way we frame music define how we engage with it?

Computers that process music conceptualize songs in binary, interpreting harmonies and rhythm as bytes. In that sense, the reconstruction is imperfect: quantizing sound rounds every amplitude to the nearest of a fixed set of levels (a 16-bit recording allows only 65,536 per sample), and any intricacy finer than that step, detail that might serve as a salient element of the music, is simply discarded.
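A sketch of that loss, assuming only a generic uniform quantizer rather than any specific audio codec: rounding each sample to a fixed grid of levels discards everything finer than one step, and the coarser the grid, the more fine detail disappears.

```python
import numpy as np

def quantize(signal: np.ndarray, bits: int) -> np.ndarray:
    """Round each sample to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / levels                   # signal assumed to lie in [-1.0, 1.0]
    return np.round(signal / step) * step

t = np.arange(44_100) / 44_100
tone = np.sin(2 * np.pi * 440.0 * t)      # a pure A4

# Coarser quantization -> larger rounding error -> more detail lost.
for bits in (16, 8, 4):
    error = tone - quantize(tone, bits)
    print(f"{bits:>2}-bit: max quantization error = {np.max(np.abs(error)):.6f}")
```

Each bit removed from the depth doubles the rounding error; the intricacies at stake are precisely whatever falls below that step.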

And this, perhaps, implies something deeper: modern computation is “soullessly” programmed to absorb, process, and create sound. Bits controlling bits, without an “ear” for the music they work with. So then, our reception of the music rests entirely in the hands of semiconductors and their electrical signals; in other words, we give something uniquely human to something uniquely inhuman.

Modern music is only as good as the transistors that process it. So in the end, live music will always be better than its recorded versions.

...right?