Then it has a visual boxes-and-lines language, which connects signals
to inputs, as those boxes-and-lines audio languages all do. Then it
has "maquettes", which seem to let you place these boxes of control
signals in time, though they can also depend on each other in some
way. So I guess that amounts to notes on a timeline.
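
To pin down what I think a maquette is (in Haskell, since this is the
haskell-art list; all the names and types below are mine, not
OpenMusic's), it seems to amount to something like:

type Time = Double

-- My reading of "maquettes": boxes of control data, each placed at a
-- time offset on a shared timeline. All names here are my own guesses.
data Box = Box
  { boxStart   :: Time            -- where the box sits on the timeline
  , boxDur     :: Time
  , boxControl :: Time -> Double  -- control signal: local time -> value
  }

newtype Maquette = Maquette [Box]

-- Sample the layout at a global time, summing overlapping boxes.
sampleMaquette :: Maquette -> Time -> Double
sampleMaquette (Maquette bs) t =
  sum [ boxControl b (t - boxStart b)
      | b <- bs, t >= boxStart b, t < boxStart b + boxDur b ]

Summing overlaps is my own arbitrary choice; the paper doesn't say how
(or whether) overlapping boxes combine, or how the dependencies between
boxes actually work.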

Then there's a "chroma model", which is... well it seems intriguing if
it's "meant to bridge the gap between a symbolic melody- or
harmony-oriented approach to composition and numeric, spectral
materials" but I can't figure out what it is. Apparently you can load
your basic "bunch of sine waves" spectral data, modify it in the usual
ways, and then export it as controls for the instruments.
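
If I'm reading the "bunch of sine waves" part right, the data model is
roughly this (a guess, with my own names; presumably the real thing
feeds a Csound instrument rather than a list of pairs):

-- Spectral material as a list of partials, transformed symbolically,
-- then flattened into instrument controls. All my own guesswork.
data Partial = Partial
  { freq :: Double   -- Hz
  , amp  :: Double   -- linear amplitude
  } deriving Show

-- A symbolic-ish operation on spectral material: transpose by a ratio.
transposePartials :: Double -> [Partial] -> [Partial]
transposePartials ratio = map (\p -> p { freq = freq p * ratio })

-- "Export as controls": flatten to (freq, amp) pairs that a synth
-- could read as parameter fields.
toControls :: [Partial] -> [(Double, Double)]
toControls ps = [ (freq p, amp p) | p <- ps ]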

So I have trouble telling what this thing actually is. Apparently it's
a way to map spectral data to synthesizer parameters, plus a
lines-and-boxes visual language, built on Common Lisp (so presumably
it's extensible; they hint at that by mentioning that you can subclass
things). So you could use it to extract "interesting" data from an FFT
and turn that into your control data... but it doesn't address what you
might do with that, or why you might want to do that.
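
By "extract interesting data" I mean something even as dumb as this toy
peak-picker, which is entirely my own stand-in for whatever OM-Chroma
actually does:

-- Toy "interesting data" extraction: local maxima of an FFT magnitude
-- frame, returned as (bin, magnitude) pairs above a threshold.
peaks :: Double -> [Double] -> [(Int, Double)]
peaks threshold mags =
  [ (i, m)
  | (i, (prev, m, next)) <- zip [1..] (zip3 mags (drop 1 mags) (drop 2 mags))
  , m > prev, m > next, m > threshold ]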

It's intriguing how it talks about bridging the gap between an
"instrumental approach and a synthesis-oriented approach", which is not
too clear but I guess means a synthesis language that can be used for
writing notes too. But to me the key to the whole thing is how you
actually generate the control data. Without that, it's just arbitrary
"connect this input to that thing", which we could all do already. Put
differently: if you say "you can connect FFT output to control input",
that's not very interesting. If you say "connecting FFT pitches to
pitch control and levels to grain density makes a nice
expressive-sounding note", that's interesting but still not useful.
But if you say "we recorded a library of FFT results, mapped them to
different kinds of expressive notes with such-and-such names, worked
out such-and-such techniques for smoothly splicing them into each
other given a score of names, and it sounds really amazing", now we're
getting somewhere.
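
In code, the "not very interesting" level of that is just a per-frame
mapping like the sketch below (Frame, GrainParams, and the scaling are
all made up by me); the interesting part, the named library of gestures
and the splicing, is exactly what's left unspecified:

-- A per-frame mapping from analysis data to granular-synth controls.
-- Both types and the scaling factor are my own invention.
data Frame = Frame
  { framePitch :: Double  -- estimated pitch, Hz
  , frameLevel :: Double  -- level, 0..1
  }

data GrainParams = GrainParams
  { grainPitch   :: Double  -- Hz
  , grainDensity :: Double  -- grains per second
  }

frameToGrain :: Frame -> GrainParams
frameToGrain f = GrainParams
  { grainPitch   = framePitch f
  , grainDensity = 5 + 95 * frameLevel f  -- arbitrary scaling
  }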

I didn't notice where the bit about "different timing resolutions"
was... there was a paragraph where they talked about "smooth" vs.
"striated" time, but as far as I can tell this is just philosophical
musing, though in the conclusion they say they want to support a
"phrase-based" conception of time, whatever that may mean.

It looks like their site is at
http://forumnet.ircam.fr/product/openmusic/, I'm sure if I download it
and play with it things will be much clearer.

The Bol Processor stuff is also interesting (it seems to be
maintained, and the source is available). I have yet to understand the
documentation, but the idea of using a grammar to specify which bits of
notation can fit where is interesting. Notes have a syntax: there are
ornaments or articulations only valid at the attack, ones that apply
to the sustain, and ones that serve as transitions to the next note.
Also, the shapes of ornaments vary based on the note (or absence of a
note) preceding and following, in addition to the speed. It makes me
think of cursive Arabic, where letters change shape and placement
based on the previous letter, along with rules about which letters go
where in the word (I'm sure linguists have a name for this; e.g.
English has "ng" but won't start a word with it).

I've noticed there's a tension between specifying exact times via a
timeline or whatever, and the kind of higher-level flexibility implied
by a syntactic approach. E.g. if you say "attack X, sustain Y, end
with Z", you are not saying exactly when X, Y, and Z start and end,
and they are free to arrange themselves according to context. But you
do need a certain amount of precise control over times, at least in
some cases. Maybe this is what the IRCAM guys mean by "phrase-based"
time.
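
The "notes have a syntax" idea might look something like this in types
(my own made-up constructors, not actual Bol Processor notation); the
point is that each ornament can only occur in its slot, and no exact
times are written down:

-- Each kind of ornament is only valid in one position of the note.
data AttackOrnament     = PlainAttack | GraceNote | Hammer
data SustainOrnament    = Straight | Vibrato | Slide
data TransitionOrnament = Detached | Glide | Elide

-- A note is attack, sustain, and transition-to-next, with no times:
-- how long each part lasts is decided later, from context.
data Note = Note AttackOrnament SustainOrnament TransitionOrnament

-- Realizing a phrase into actual start and end times (given tempo,
-- neighbours, and so on) would be a separate step.
type Phrase = [Note]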

On Tue, Dec 31, 2013 at 8:01 AM, Stephen Tetley wrote:
> Hi Evan
>
> Maybe there is something to be mined from the "OM-Chroma" research at IRCAM?
>
> http://repmus.ircam.fr/cao/omchroma
>
> It looks like they are using different timing resolutions to get
> expressive control of "synthesizers" - in this case via Csound.
>
> Best wishes and Happy New Year to everyone on the list.
>
> Stephen
>
>> Do you know of any existing work along these lines, especially on the
>> practical side? It seems like an obvious direction to go in once you
>> start thinking about how the structure of music works along with
>> notation and performance, so surely lots of people have thought of it.