The software programs do impose restrictions of their own, but those
restrictions are disconnected from the cultural context, since the
software and the culture didn't evolve together. Popular electronic
music evolved along with the restrictions of keyboard synthesizers and
hardware sequencers from the 70s, and now they co-evolve to support
each other, as established traditions do. The synthesis languages
lived mostly in academia, had fewer restrictions and thus a harder
job, and have never really developed anything like that. This is
probably also influenced by the academic idea that you create your own
uncompromising aesthetic with little reference to historical practice.
They might even see it as an advantage that you can't easily express
the same old tunes!
So I would say that thinking of software as instruments is the wrong
level: you use the software to build instruments, in the same way that
you use instruments to build music, by applying higher and higher
levels of rules and conventions. The promise of software is that all
of those levels exist in the same system, rather than being
distributed across instrument, composer, notation, and performance.
Csound's instruments, once you design them, are instruments in the
restrictive sense, and in fact they come with a very limited score
language. Too limited: to use them according to "the rules", you'd
need layers of libraries and abstractions above it to express notes,
phrases, ornaments, and melodies linguistically. Or you could
short-circuit all of that by recording data from a physical
instrument. I'm sort of working on the first approach, but I haven't
seen examples of anyone else trying it. Without either the first or
the second you're stuck at the "assembly language" level, and the only
thing easily expressible is sound-scapey, generative, or otherwise
randomized or repetitive music. Not that that's bad, I like
sound-scapes too, but since they follow fewer rules I contend that
their enjoyment is also less nuanced.
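To make the "assembly language" point concrete, here is a minimal
sketch of a Csound instrument and its score (the opcodes and values
are just an illustrative choice, nothing canonical):

  <CsoundSynthesizer>
  <CsInstruments>
  sr = 44100
  ksmps = 32
  nchnls = 2
  0dbfs = 1

  giSine ftgen 0, 0, 8192, 10, 1   ; sine wavetable

  instr 1
    ; p4 = amplitude (0-1), p5 = frequency in Hz
    kenv linsegr 0, 0.02, p4, 0.1, 0   ; attack to p4, 0.1s release
    asig oscili kenv, p5, giSine       ; table-lookup oscillator
    outs asig, asig
  endin
  </CsInstruments>
  <CsScore>
  ; each "i" statement is one event: instr, start, dur, amp, freq
  i 1 0   1 0.3 440
  i 1 1.5 1 0.3 660
  e
  </CsScore>
  </CsoundSynthesizer>

The score is nothing but a flat list of events with numeric p-fields;
there is no vocabulary at this level for a phrase, an ornament, or a
crescendo, which is exactly what the layers of libraries above it
would have to supply.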