Hi Evan. I agree that hearing the results of a particular theory is important to assessing its effectiveness, at least for generative theories. But there's a ton of music theory out there that only addresses analysis -- i.e. gives a theoretical interpretation of existing music. My impression is that that is the primary objective of most music theorists.
Your point about non-local effects is well taken. It's fairly easy to define a trill function that is parameterized with the rate, trill interval, and so on (and I do so in my book HSoM), but if you want that behavior to have a non-local effect, it's much more difficult, as you point out. Players in Euterpea cannot do this. I did write a (not so great) paper quite a while ago (see http://haskell.cs.yale.edu/?post_type=publication&p=255) that treats the issue as a set of mutually recursive processes, but it just scratches the surface and I never followed up on it. I think it's a cool problem waiting for an elegant solution.
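Schematically, such a parameterized trill might look like the following. This is a standalone toy sketch (made-up Pitch/Dur/Music types), not the actual definitions from HSoM or Euterpea:

```haskell
-- Standalone sketch of a parameterized trill; toy types, not the
-- actual HSoM/Euterpea definitions.
type Pitch = Int       -- MIDI note number
type Dur   = Rational  -- duration in whole notes

data Music = Note Dur Pitch | Line [Music] deriving (Show, Eq)

-- rate: sub-notes per whole note; interval: trill interval in
-- semitones; total: overall duration; p: principal pitch.
trill :: Dur -> Int -> Dur -> Pitch -> Music
trill rate interval total p =
  let d  = 1 / rate                  -- duration of each sub-note
      n  = floor (total / d) :: Int  -- how many sub-notes fit
      ps = cycle [p, p + interval]   -- alternate principal/neighbor
  in Line [Note d q | q <- take n ps]
```

The point is that rate, interval, and duration are all local parameters; nothing here lets the trill's ending push on the notes around it.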
Best, -Paul
-----Original Message-----
From: Evan Laforge [mailto:***@gmail.com]
Sent: Saturday, December 21, 2013 4:30 AM
To: haskell-***@lurk.org
Subject: Re: [haskell-art] abstract music from csound et.al. (Was: ANN - csound-expression-3.1.0)
Post by Hudak, Paul
http://link.springer.com/chapter/10.1007%2F978-3-642-39357-0_2
Warning: it's fairly abstract!
Indeed, it looks over my head, but even if it weren't, it might be too
general to be directly applicable. Also, these papers all cost between
$30 and $150, which is pretty discouraging, especially since I'm not
even sure I'd understand one in the first place, or find it useful
even if I did.
It's frustrating that I almost never find musical examples to listen
to. Isn't that the real measure of success or failure of the whole
experiment? How are you supposed to judge a new system or theory
without hearing some music that would not have been practically
possible without it?
One thing I've been thinking about is that many musical ornaments have
non-local effects. For instance, I was implementing a certain kind of
vocal trill that's common in Carnatic music. Sometimes they end on
the unison note, and sometimes on the neighbor note. You can
accomplish that by changing the number of cycles (i.e. drop cycles
from the end until you're ending on the neighbor), or you could tweak
the speed to get it to end where you want. Or, you could lengthen the
note, pushing back the next note if necessary, or possibly even
slowing the tempo a little. In effect, an ornament causes one
instrument to hold a note longer, and the other instruments wait a
bit. Whether or not a note is willing to move its attack depends on
how important it thinks it is, which depends on where in the beat it
falls, and if it has an accent.
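The cycle-count version can be sketched like this (a hypothetical standalone illustration, not code from my program): an alternation base, neighbor, base, ... ends on the base note when the sub-note count is odd, and on the neighbor when it is even, so "drop cycles from the end" is just rounding the count to the right parity.

```haskell
-- Hypothetical sketch of adjusting a trill's cycle count so it
-- ends on the desired note; not code from my actual program.
data End = Unison | Neighbor deriving (Show, Eq)

-- Pitches of a trill of n sub-notes, starting on the base pitch.
trillPitches :: Int -> Int -> Int -> [Int]
trillPitches n base interval = take n (cycle [base, base + interval])

-- Drop at most one sub-note to land on the requested ending:
-- odd counts end on the unison, even counts on the neighbor.
adjustCount :: End -> Int -> Int
adjustCount Unison   n = if odd n  then n else n - 1
adjustCount Neighbor n = if even n then n else n - 1
```

The speed-tweak alternative would instead keep the count fixed and recompute the rate from the note's duration; either way the decision is still local to the one note, which is exactly the limitation I'm complaining about.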
Real ensembles do this kind of thing all the time, so it's probably
not a curiosity, but an essential feature. However, my program
doesn't handle it very well. You'd have to do an initial pass to
collect non-local effects, reconcile them, and then perform for real.
It's easy to get the idea that the actual output of a score should be
a set of requirements and tendencies that then have to be reconciled
with each other. The actual notes then fall out of that. They may
even conflict, e.g. a falling grace note emphasizing A (B, A) would
sound weak if the previous pitch is B, so if the preceding trill (B,
C) doesn't mind, it should end on the upper note. But if the trill
has been specified to end on the lower note, the grace note may have
to adjust to become (C, A) (assuming it's constrained to remain
diatonic and you're in C). This kind of thing happens all the time,
and the more I think of it the more examples come up. Even a note's
pitch is just a strong desire to be at a certain frequency, which can
be weakened if an instrument has a constraint on accuracy vs. speed
and there are other stronger constraints to play quickly.
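I have no real design here, but the vaguest possible sketch of what I mean might be something like the following, where every name is made up: events emit weighted preferences rather than fixed values, and a reconciliation pass resolves each slot.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Very rough sketch of "requirements and tendencies": each event
-- emits weighted preferences instead of fixed values, and a
-- reconciliation pass picks the strongest one per slot.  All
-- names here are hypothetical.
data Pref a = Pref { strength :: Double, value :: a }
  deriving (Show, Eq)

-- Naive reconciliation: take the strongest preference, if any.
reconcile :: [Pref a] -> Maybe a
reconcile [] = Nothing
reconcile ps = Just (value (maximumBy (comparing strength) ps))
```

Of course a per-slot maximum is far too naive for the grace-note example, where one choice changes the preferences of its neighbors; a real version would need some kind of constraint propagation or backtracking across slots.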
It makes me worry that my whole approach, where generally each note
produces its own output and doesn't affect its siblings, is
fundamentally too inflexible. But I have little idea of what such a
constraint framework would look like, and how it would be composed and
controlled.
Can Euterpea express things like that? Or perhaps are there existing
systems that work like that? Surely there must be.
Post by Hudak, Paul
On a more practical level, I wanted to mention that in Euterpea a user can define a notion of a "player" that interprets note and phrase attributes in a "player dependent" manner. For example, one could define a piano player and a violin player that each interpret legato, crescendo, trills, and so on, in different ways. One of the coolest uses of this idea is the definition of a "jazz player" that interprets a piece of music in a "swing" style (e.g. interpreting a pair of eighth notes as a triplet of sixteenth notes, etc.). You can have as many players in a composition as you like.
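Just to check that I understand the swing idea: is it essentially a local retiming like this? (Made-up types, durations as fractions of a whole note; surely not how Euterpea's players actually do it.)

```haskell
-- Toy sketch of swing: a pair of straight eighth notes
-- (1/8 + 1/8 of a whole note) is re-timed to the triplet feel,
-- 2/3 and 1/3 of the quarter, i.e. 1/6 + 1/12.  Types are made
-- up; this is not Euterpea's player API.
type Dur = Rational

swing :: [Dur] -> [Dur]
swing (a : b : rest)
  | a == 1/8 && b == 1/8 = 1/6 : 1/12 : swing rest
  | otherwise            = a : swing (b : rest)
swing ds = ds
```

If so, that's still a purely local rewrite of each pair, which is why I'm curious how players could handle the cross-instrument cases below.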
I do something like this, though implemented pretty differently. At
least, I have many ways to play legato, or a trill, and they can be
overridden based on instrument or section. But I haven't thought much
about how to do the non-local thing. Can Euterpea's players
coordinate with each other? Or could there be a higher-level ensemble
player that collects the various tempo effects and reconciles them? I
don't know much about jazz, but I imagine there's a lot more to the
rhythm than just warping the beats a bit! Or, within a single
instrument, how would you handle the trill example?