[haskell-art] ANN - csound-expression-3.1.0 - now with GUI elements
Anton Kholomiov
2013-12-12 18:54:27 UTC
I'm glad to announce the new version of the csound-expression [1] package.
It's a library for electronic music and sound design. Here is a full
description [2].
The new version supports GUI widgets. You can not only create sound
instruments but also update their parameters online with sliders,
knobs, and buttons. There are many other widgets.

You can find out the details in the quick start guide [3] and the examples
[4].
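For a taste, here is the first example from the quick start guide;
`dac' sends a signal to the default audio output and `osc' is a pure
sine oscillator, both exported by Csound.Base:

    import Csound.Base

    -- render and play a 440 Hz sine wave
    main :: IO ()
    main = dac $ osc 440

GUI widgets wire in the same way: a knob or slider produces a signal
that you can patch into an instrument's parameters; see the quick
start guide [3] for the widget functions.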

Anton

[1] http://hackage.haskell.org/package/csound-expression
[2] https://github.com/anton-k/csound-expression
[3]
https://github.com/anton-k/csound-expression/blob/master/tutorial/QuickStart.markdown
[4] https://github.com/anton-k/csound-expression/tree/master/examples/Gui
Evan Laforge
2013-12-12 19:29:28 UTC
Are there any examples of music created with this?

Evan Laforge
2013-12-12 19:32:01 UTC
Post by Evan Laforge
Are there any examples of music created with this?
Well, there is the examples directory, but I was wondering about maybe
something more substantial than an example. Though now that I look,
some of those examples seem fairly elaborate...
Anton Kholomiov
2013-12-12 20:23:25 UTC
A little piece:

https://github.com/anton-k/csound-expression/blob/master/examples/Heartbeat.hs

And one thing I used as background ambience to play along with flute:

https://github.com/anton-k/csound-expression/blob/master/examples/Tibetan.hs

They require another package: temporal-csound

http://hackage.haskell.org/package/temporal-csound
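To try them, something like the usual cabal workflow should do; note
that you also need the csound binary itself on your PATH, since the
library renders and runs Csound code:

    cabal update
    cabal install csound-expression temporal-csound
    runhaskell Heartbeat.hs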
Evan Laforge
2013-12-12 21:47:24 UTC
Post by Anton Kholomiov
https://github.com/anton-k/csound-expression/blob/master/examples/Heartbeat.hs
I was sort of hoping for an MP3 - yeah, I know I'm lazy :) OK, I'll
go download it now.

csound used to be a hassle to compile back in the day, but it's
probably gotten a lot better since the days of hand-editing makefiles,
when linux was an obscure, poorly supported 386 unix clone.
Post by Anton Kholomiov
https://github.com/anton-k/csound-expression/blob/master/examples/Tibetan.hs
Oh, is this the background for that live performance you posted a
while back? I remember wishing I could hear the background more
clearly.
Henning Thielemann
2013-12-12 21:59:20 UTC
Post by Evan Laforge
I was sort of hoping for an MP3, yeah I know I'm lazy :)
I am lazy, too, in this respect. Thus my vote for a rendered audio file.
Tristan Strange
2013-12-13 00:20:24 UTC
Quite excited to see this. Thanks for putting it together.

But the first example you've given in the tutorial isn't working for me...

dac $ osc 440

results in the following error message:

<interactive>:11:1:
    Not in scope: `dac'
    Perhaps you meant `dam' (imported from Csound.Base)

Is this likely to be something wrong with my config? I installed just as
instructed.

Cheers,
Tris
Tristan Strange
2013-12-13 00:26:56 UTC
Aaaaah... I just did a cabal update, reinstalled, and things are fixed!

Haskell noob here... please excuse the noise (something I'll probably
be saying to my housemates too in a minute).
Anton Kholomiov
2013-12-13 07:34:26 UTC
Rendered MP3s:

http://ge.tt/6PdK8P91

@Evan: Yes, that was the one I used at the gig.
Anton Kholomiov
2013-12-13 07:42:18 UTC
I have to admit that writing music in text mode is far less
productive than with interactive DAWs like Cubase or Reaper,
and you have no presets.

But it can be used to explore the basics of electronic music.
You can go through 'Sound on Sound' tutorials with it.
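For instance, the classic subtractive patch - a bright sawtooth tamed
by a low-pass filter - is a one-liner. The filter name below (`mlp',
a moog-style low-pass taking cutoff and resonance) may differ between
versions, so check the guide for the exact vocabulary:

    import Csound.Base

    -- a 110 Hz saw wave through a low-pass at 1500 Hz, resonance 0.1
    main :: IO ()
    main = dac $ mlp 1500 0.1 (saw 110)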

It can be used to write generative music with lots of randomization.
Another plus is that Csound's demands on resources are very low:
I could run the examples on a cheap laptop with an Atom CPU.

Anton
Anton Kholomiov
2013-12-13 07:43:55 UTC
And Csound is open to other programs through the JACK interface.
Evan Laforge
2013-12-13 21:54:40 UTC
Nice, I like "tibetan" - it's a good exploration of the harmonic
series. It doesn't sound remotely "tibetan", but it does sound
interesting.

Post by Anton Kholomiov
I have to admit that writing music in text mode is far less
productive than with interactive DAWs like Cubase or Reaper,
and you have no presets.
This is my experience too (though I'm a notation guy, I tried hard
with DAWs but still found them slow and awkward). And I've never
heard any music out of csound or other text languages that isn't more
or less abstract and sound-designy. Maybe there is someone out there
who manages to do it, but I haven't heard them.

Music, as always, is largely determined by the tools used to create it.
Henning Thielemann
2013-12-13 22:07:09 UTC
Post by Evan Laforge
Music, as always, is largely determined by the tools used to create it.
At the Linux Audio Conference 2013 in Graz, someone recommended in his
talk not to think of audio programs as software but as instruments. For
programs, users request more and more features, whereas for instruments
the restriction to certain producible sounds is a feature.
Evan Laforge
2013-12-14 04:12:55 UTC
Post by Henning Thielemann
At the Linux Audio Conference 2013 in Graz, someone recommended in his
talk not to think of audio programs as software but as instruments. For
programs, users request more and more features, whereas for instruments
the restriction to certain producible sounds is a feature.
Csound's instruments, once you design them, are instruments in the
restrictive sense, and in fact they come with a very limited score
language. Too limited---to use them according to "the rules", you'd
need layers of libraries and abstractions above to express notes,
phrases, ornaments, and melodies linguistically. Or you could
short-circuit all that by recording data from a physical instrument.
I'm sort of working on the first approach, but I haven't seen examples
of someone else trying that. ...
Stephen Tetley
2013-12-14 07:44:45 UTC
Hi Evan

Ha, though miles away from being ready for public consumption my own
"tower of DSLs" built over Csound gained symbolic notes, chords,
trills and "Solo Parts" this week. Hopefully arpeggios, tremolos and
more should follow soon.

More concretely Roger Dannenberg (Score), Stephen Travis Pope (Smoke)
and Paul Hudak, of course, have made score languages with tangible
musical objects like chords, clusters, drum rolls etc.

Regarding your comment in the other thread Evan, David Seidel (aka
Mystery Bear) has made music with Csound that crossed over well enough
from "computer music" to feature on Kyle Gann's Postclassic radio when
it was running.

Best wishes

Stephen
Evan Laforge
2013-12-19 03:52:00 UTC
Post by Stephen Tetley
Hi Evan
Ha, though miles away from being ready for public consumption my own
"tower of DSLs" built over Csound gained symbolic notes, chords,
trills and "Solo Parts" this week. Hopefully arpeggios, tremolos and
more should follow soon.
I assume this is the "Lirio" program you mentioned a while back? At
the time I recall you were mostly programmatically generating lilypond
scores, but I suppose even so you still need a way to describe the
higher level structures.

I have found that even with relatively simple things like trills,
tremolo, grace notes, and vibrato, there are many variants and
controls, not just to do with speed and articulation but also with how
they interact with legato, which is itself a nontrivial,
instrument-dependent topic. This all goes away in staff notation
because you can rely on the performer to supply it. But the
parameters are very interesting to play with; e.g., by tweaking the
interpretation of legato slurs you change the entire feel of a piece,
in a subtle but very recognizable way.
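To make "many variants and controls" concrete, here is a generic
sketch of the knobs even a basic trill needs; the names are mine, not
from any particular system:

    -- which neighbor, how fast, how it ends, how it joins its neighbors
    data TrillEnd = EndOnUnison | EndOnNeighbor

    data Trill = Trill
        { interval     :: Int      -- scale steps to the neighbor note
        , cyclesPerSec :: Double   -- speed, possibly changing over time
        , trillEnd     :: TrillEnd -- drop cycles or tweak speed to land here
        , attackLegato :: Double   -- overlap with the preceding note
        }

Each field interacts with the instrument and with the surrounding
notes, which is where the trouble starts.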
Post by Stephen Tetley
where chord transformations can be easily encoded[1] and a nice model
of gamelan melodies in Michael Tenzer's book "Gamelan Gong Kebyar".
I was curious about that, because I too have that book. I remember
getting a little lost in that section, but also wondering what the
practical purpose of all this reduction to "normal forms" was.
Perhaps you have come up with such a practical purpose, though I still
don't fully understand it :) I've started to implement a few concepts
from Balinese music, but mostly restricted to concrete things like
performance techniques, idiomatic derivation (e.g. reyong kilitan),
and "arrival" oriented rhythm, where notes are written at the end of
their duration rather than the beginning.
Post by Stephen Tetley
More concretely Roger Dannenberg (Score), Stephen Travis Pope (Smoke)
and Paul Hudak, of course, have made score languages with tangible
musical objects like chords, clusters, drum rolls etc.
I'm familiar with Dannenberg's work on nyquist, but was unable to find
any references to Score, and it's a generic name and hard to search
for. I found a single short paper on Smoke, which I'm reading, but no
musical examples. And haskore of course I'm familiar with.
Post by Stephen Tetley
Regarding your comment in the other thread Evan, David Seidel (aka
Mystery Bear) has made music with Csound that crossed over well enough
from "computer music" to feature on Kyle Gann's Postclassic radio when
it was running.
Do you have any recommendations? I found some on his site,
http://mysterybear.net/, but it was all very much in the "abstract
sound design" genre, at least by my judgement.
Stephen Tetley
2013-12-21 21:21:54 UTC
Hi Evan


[Comments inline... ]
Post by Evan Laforge
I assume this is the "Lirio" program you mentioned a while back? At
the time I recall you were mostly programmatically generating lilypond
scores, but I suppose even so you still need a way to describe the
higher level structures.
The new score language is called Majalan after Michael Tenzer's book
and targets just Csound sco files - Lirio is out to pasture at the
moment.

Grace notes didn't work well in my new score language - so I'm back to
the drawing board.
Post by Evan Laforge
Post by Stephen Tetley
where chord transformations can be easily encoded[1] and a nice model
of gamelan melodies in Michael Tenzer's book "Gamelan Gong Kebyar".
I was curious about that, because I too have that book. I remember
getting a little lost in that section, but also wondering what the
practical purpose of all this reduction to "normal forms" was.
Perhaps you have come up with such a practical purpose, though I still
don't fully understand it :) I've started to implement a few concepts
from Balinese music, but mostly restricted to concrete things like
performance techniques, idiomatic derivation (e.g. reyong kilitan),
and "arrival" oriented rhythm, where notes are written at the end of
their duration rather than the beginning.
I like the simplicity of Michael Tenzer's model - it helps that he
doesn't have to account for differing note durations, since gamelan
orchestras are largely made up of percussion instruments.

I never got further than a back-of-the-envelope sketch storing
intervals in lists. For me, high-level stuff always hinges on a
workable low-level representation - which I seem to have perennial
problems with.
Post by Evan Laforge
Post by Stephen Tetley
More concretely Roger Dannenberg (Score), Stephen Travis Pope (Smoke)
and Paul Hudak, of course, have made score languages with tangible
musical objects like chords, clusters, drum rolls etc.
I'm familiar with Dannenberg's work on nyquist, but was unable to find
any references to Score, and it's a generic name and hard to search
for. I found a single short paper on Smoke, which I'm reading, but no
musical examples. And haskore of course I'm familiar with.
"Canon" is Roger Dannenberg's score language - I don't know why I was
thinking it was called "Score" (apparently there is a LilyPond-like
program of that name).
Post by Evan Laforge
Post by Stephen Tetley
Regarding your comment in the other thread Evan, David Seidel (aka
Mystery Bear) has made music with Csound that crossed over well enough
from "computer music" to feature on Kyle Gann's Postclassic radio when
it was running.
Do you have any recommendations? I found some on his site,
http://mysterybear.net/, but it was all very much in the "abstract
sound design" genre, at least by my judgement.
David Seidel's "Elementals" album was made with Csound (and
Supercollider for one track). I wouldn't argue that it would sound
like "abstract sound design" to many people, but it is attractive
enough to catch a wider audience (vis it being featured on both Kyle
Gann's Postclassic radio and his Artstjournal blog):

http://www.stasisfield.com/releases/year07/sf-7004.html

http://www.artsjournal.com/postclassic/2009/08/im_a_little_slow.html

My personal favourite of the Csound pieces is:

http://mysterybear.net/article/43/elegy-for-jon

Best wishes

Stephen
Evan Laforge
2013-12-27 21:24:39 UTC
Post by Stephen Tetley
I like the simplicity of Michael Tenzer's model - it helps that he
doesn't have to account for differing note durations, since gamelan
orchestras are largely made up of percussion instruments.
I disagree: notes have durations via damping, and if you get them wrong,
anyone familiar with the music will quickly notice and point it out.
But it's true that if you take just an abstract version of the core
melody, you could reduce everything to notes of the same duration,
representing longer durations as repeated notes. But now that I think
of it that way, that's trivially true of any music. The core melody
is often very rhythmically regular, though.
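In code, that reduction is a one-liner (a generic sketch, nothing to
do with Tenzer's notation):

    -- flatten (pitch, duration-in-units) pairs into equal unit notes
    expand :: [(pitch, Int)] -> [pitch]
    expand = concatMap (\(p, units) -> replicate units p)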

The question is, what then? If you are working only at that level of
abstraction, anything recognizable about the original music is mostly
gone, and you have only a model. I would think that such a model,
disconnected from any cultural tradition, has no criteria to judge
whether it's better or worse than any other arbitrary model you could
invent on your own.
Post by Stephen Tetley
"Canon" is Roger Dannenberg's score language - I don't know why I was
thinking it was called "Score" (apparently there is a LilyPond-like
program of that name).
Indeed there was a "Score", long ago. I believe it was a DOS program
that apparently produced nice scores, but was a commercial product
with one author and has presumably faded away as those tend to do.

Canon was superseded by nyquist. I'm familiar with nyquist because my
own program was originally inspired by many of its ideas, but as far
as I know it's also basically pedagogical, and no one has attempted
writing "expressive" music with it. My original rationale for not
just adopting nyquist was that it would need a giant library of
abstractions built on top, dwarfing the size of nyquist itself, and
I'd rather write those in Haskell than in an obscure lisp variant.
Post by Stephen Tetley
David Seidel's "Elementals" album was made with Csound (and
Supercollider for one track). I wouldn't argue that it would sound
like "abstract sound design" to many people, but it is attractive
enough to catch a wider audience (vis it being featured on both Kyle
http://www.stasisfield.com/releases/year07/sf-7004.html
http://www.artsjournal.com/postclassic/2009/08/im_a_little_slow.html
Thanks for the links. I agree it's very attractive, but that's
"abstract sound design" to me. Now that I think about it, sitting on
a bunch of just-tuned chords might be a form of music that relies very
little on cultural background, i.e. it can sound pleasant without
needing training in that kind of music. Still, I think the kind of
listening for this kind of music is different from the kind engaged
when listening to music from a developed tradition. It's lower level
and hence needs fewer rules, almost as if you listened to a poem
solely for rhythm and euphony, without needing to understand the
language and meaning.

I was curious not about "abstract but accessible", but about
"expressive" in the traditional with-notes way that a string quartet
(or gamelan, or whatever other ensemble) is - i.e. lots of notes with
tempo, dynamics, intonation, and timbral control according to the set
of rules that someone familiar with acoustic instruments would
recognize as "expressive", or even performance quality.

The examples I can find are mostly demos of sample libraries, which
can do pretty well at imitating certain kinds of music. Many of the
expressive details are built into the samples, but as the number of
articulations increases, controlling them eventually becomes a problem
akin to controlling parameters on a synthesizer, so that's interesting
too. No one can avoid the problem of having to generate lots of
complicated data, whether it be control signals or keyswitches, but
everyone seems to be doing it manually, or recording in real time.
Also, the sample-based stuff only really sounds good when it's a large
ensemble, which lets you get away with a lot of simplification in the
individual parts. A single solo instrument would be a real challenge.
Post by Stephen Tetley
http://mysterybear.net/article/43/elegy-for-jon
Unfortunately the links on this one are dead. I searched around a
bit, but it all links back to that page with the dead links. Perhaps
I should email him and let him know...
Rohan Drape
2013-12-19 04:52:53 UTC
[...]
I'm sort of working on the first approach, but I haven't seen examples
of someone else trying that.
[...]

i'm not sure i follow precisely, & i know nothing about this area, but
i'm pretty sure people are still working on it.

perhaps see:

Kirke, A. and Miranda, E. (Eds.) "Guide to Computing for Expressive
Music Performance", Springer, 2013

http://www.springer.com/computer/information+systems+and+applications/book/978-1-4471-4122-8

best,
rd
Evan Laforge
2013-12-20 00:51:27 UTC
Post by Rohan Drape
Kirke, A. and Miranda, E. (Eds.) "Guide to Computing for Expressive
Music Performance", Springer, 2013
Indeed, more than I thought, and there's a whole textbook on it! Even
with just google books snippets there are references to various
programs I've never heard of, e.g. www.rubato.org. I still haven't
figured out exactly what it does, and of course there's no actual
music, but the "documentation" (thesis) starts right off with category
theory, so it must be great :)

Thanks for the reference!
Hudak, Paul
2013-12-20 23:15:28 UTC
Guerino Mazzola has done a lot of work in this area (using category theory to describe music and performance). He has a couple of textbooks, many papers, and most recently this article:

http://link.springer.com/chapter/10.1007%2F978-3-642-39357-0_2

Warning: it's fairly abstract!

On a more practical level, I wanted to mention that in Euterpea a user can define a notion of a "player" that interprets note and phrase attributes in a "player dependent" manner. For example, one could define a piano player and a violin player that each interpret legato, crescendo, trills, and so on, in different ways. One of the coolest uses of this idea is the definition of a "jazz player" that interprets a piece of music in a "swing" style (e.g. interpreting a pair of eighth notes as a triplet of sixteenth notes, etc.). You can have as many players in a composition as you like.

All of this is described in Chapter 8 of http://haskell.cs.yale.edu/?post_type=publication&p=112.
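Schematically, selecting a player for a phrase looks roughly like this
(a sketch, not verbatim from the book; see Chapter 8 for the exact
types and real player definitions):

    import Euterpea

    -- ask a user-defined "JazzMan" player to interpret this line
    swung :: Music Pitch
    swung = Modify (Player "JazzMan") (line [c 4 en, d 4 en, e 4 qn])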

Best, -Paul

Paul Hudak
Professor of Computer Science
Yale University, PO Box 208285
New Haven, CT 06520-8285, 203-432-1235 

Evan Laforge
2013-12-21 09:30:02 UTC
Post by Hudak, Paul
http://link.springer.com/chapter/10.1007%2F978-3-642-39357-0_2
Warning: it's fairly abstract!
Indeed, it looks over my head, but even if it weren't, it might be too
general to be directly applicable. Also, all these papers range from
$30 to $150, which is pretty discouraging, especially if I'm not even
sure I'll understand it in the first place, or find it useful even if
I did.

It's frustrating that I almost never find musical examples to listen
to. Doesn't that indicate the success or failure of the whole experiment?
How are you supposed to judge some new system or theory without
presenting some music that would not have been practically possible
without it?


One thing I've been thinking about is that many musical effects are
non-local. For instance, I was implementing a certain kind of
vocal trill that's common in Carnatic music. Sometimes they end on
the unison note, and sometimes on the neighbor note. You can
accomplish that by changing the number of cycles (i.e. drop cycles
from the end until you're ending on the neighbor), or you could tweak
the speed to get it to end where you want. Or, you could lengthen the
note and cause it to push back the next note if necessary, or it could
possibly slow the tempo a little. In effect, an ornament causes one
instrument to hold a note longer, and the other instruments wait a
bit. Whether or not a note is willing to move its attack depends on
how important it thinks it is, which depends on where in the beat it
falls and whether it has an accent.

Real ensembles do this kind of thing all the time, so it's probably
not a curiosity, but an essential feature. However, my program
doesn't handle it very well. You'd have to do an initial pass to
collect non-local effects, reconcile them, and then perform for real.
It's easy to get the idea that the actual output of a score should be
a set of requirements and tendencies that then have to be reconciled
with each other. The actual notes then fall out of that. They may
even conflict, e.g. a falling grace note emphasizing A (B, A) would
sound weak if the previous pitch is B, so if the preceding trill (B,
C) doesn't mind, it should end on the upper note. But if the trill
has been specified to end on the lower note, the grace note may have
to adjust to become (C, A) (assuming it's constrained to remain
diatonic and you're in C). This kind of thing happens all the time,
and the more I think of it the more examples come up. Even a note's
pitch is just a strong desire to be at a certain frequency, which can
be weakened if an instrument has a constraint on accuracy vs. speed
and there are other stronger constraints to play quickly.

It makes me worry that my whole approach, where generally each note
produces its own output and doesn't affect its siblings, is
fundamentally too inflexible. But I have little idea of what such a
constraint framework would look like, and how it would be composed and
controlled.
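As a first stab at writing the idea down (purely a sketch of my own,
not any existing system), each note attribute could carry a weight for
a later reconciliation pass:

    -- a desired value plus how strongly the note insists on it
    data Tendency a = Tendency { desired :: a, weight :: Double }

    data NoteReq = NoteReq
        { attack   :: Tendency Double -- movable if the weight is low
        , duration :: Tendency Double -- an ornament may want to lengthen it
        , pitch    :: Tendency Double -- accuracy can trade off against speed
        }

A "perform" pass would then reconcile conflicting tendencies, weight
against weight, before emitting concrete notes.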

Can Euterpea express things like that? Or perhaps are there existing
systems that work like that? Surely there must be.
Post by Hudak, Paul
On a more practical level, I wanted to mention that in Euterpea a user can define a notion of a "player" that interprets note and phrase attributes in a "player dependent" manner. For example, one could define a piano player and a violin player that each interpret legato, crescendo, trills, and so on, in different ways. One of the coolest uses of this idea is the definition of a "jazz player" that interprets a piece of music in a "swing" style (e.g. interpreting a pair of eighth notes as a triplet of sixteenth notes, etc.). You can have as many players in a composition as you like.
I do something like this, though implemented pretty differently. At
least, I have many ways to play legato, or a trill, and they can be
overridden based on instrument or section. But I haven't thought much
about how to do the non-local thing. Can Euterpea's players
coordinate among each other? Or perhaps have a higher level ensemble
player to collect the various tempo effects and reconcile them? I
don't know much about jazz, but I imagine there's a lot more to the
rhythm than just warping the beats a bit! Or, within a single
instrument, how would you handle the trill example?
Hudak, Paul
2013-12-27 20:58:02 UTC
Hi Evan. I agree that hearing the results of a particular theory is important to assessing its effectiveness, at least for generative theories. But there's a ton of music theory out there that only addresses analysis -- i.e. gives a theoretical interpretation of existing music. My impression is that that is the primary objective of most music theorists.

Your point about non-local effects is well taken. It's fairly easy to define a trill function that is parameterized with the rate, trill interval, and so on (and I do so in my book HSoM), but if you want that behavior to have a non-local effect, it's much more difficult, as you point out. Players in Euterpea cannot do this. I did write a (not so great) paper quite a while ago (see http://haskell.cs.yale.edu/?post_type=publication&p=255) that treats the issue as a set of mutually recursive processes, but it just scratches the surface and I never followed up on it. I think it's a cool problem waiting for an elegant solution.

Best, -Paul

Evan Laforge
2013-12-28 01:29:41 UTC
Post by Hudak, Paul
Hi Evan. I agree that hearing the results of a particular theory is important to assessing its effectiveness, at least for generative theories. But there's a ton of music theory out there that only addresses analysis -- i.e. gives a theoretical interpretation of existing music. My impression is that that is the primary objective of most music theorists.
That's probably true. I suppose if to some people the end result of
music is an engaging sound, to others the end result is an engaging
piece of analysis, or an interesting theory about how to generate
sound, even if never used (or unusable!). In the end, people like all
kinds of different things.
Post by Hudak, Paul
Your point about non-local effects is well taken. It's fairly easy to define a trill function that is parameterized with the rate, trill interval, and so on (and I do so in my book HSoM), but if you want that behavior to have a non-local effect, it's much more difficult, as you point out. Players in Euterpea cannot do this. I did write a (not so great) paper quite a while ago (see http://haskell.cs.yale.edu/?post_type=publication&p=255) that treats the issue as a set of mutually recursive processes, but it just scratches the surface and I never followed up on it. I think it's a cool problem waiting for an elegant solution.
Interesting, and thanks for the link to the paper. It's true, it's
just a sketch, but it's good to see a different approach, since I was
thinking in terms of emitting constraints to be later resolved by a
"perform" function. But I suppose if constraint resolution can imply
new notes, and thus new constraints, you have to run it again, and
then you have the problem of finding a fixpoint. I haven't actually
given it much thought yet.
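The mechanical part at least is easy to state (again just a generic
sketch, not my actual program):

    -- rerun the resolution step until the score stops changing
    resolveAll :: Eq score => (score -> score) -> score -> score
    resolveAll step s
        | s' == s   = s
        | otherwise = resolveAll step s'
      where
        s' = step s

The hard part is designing the step function so that this terminates
and so that the constraints compose.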

Do you know of any existing work along these lines, especially on the
practical side? It seems like an obvious direction to go in once you
start thinking about how the structure of music works along with
notation and performance, so surely lots of people have thought of it.
Hudak, Paul
2013-12-28 16:52:06 UTC
Post by Evan Laforge
Do you know of any existing work along these lines, especially on the practical side? It seems like an obvious direction to go in once you start thinking about how the structure of music works along with notation and performance, so surely lots of people have thought of it.
No, I don't, but it seems like there must be, for such an important problem. Let me know if you find anything!

Best, -Paul
alex
2013-12-28 16:59:47 UTC
Post by Evan Laforge
Do you know of any existing work along these lines, especially on the
practical side? It seems like an obvious direction to go in once you
start thinking about how the structure of music works along with
notation and performance, so surely lots of people have thought of it.
I think you'd find Bernard Bel's work on time setting in the Bol
Processor interesting, for example:
http://hal.archives-ouvertes.fr/docs/00/13/41/79/PDF/1119.pdf

I haven't managed to catch up with this thread properly yet, but I
think he's doing the kind of non-local effects you are looking for.

alex
--
http://yaxu.org/
alex
2013-12-28 19:23:11 UTC
Post by alex
I think you'd find Bernard Bel's work on time setting in the Bol
http://hal.archives-ouvertes.fr/docs/00/13/41/79/PDF/1119.pdf
Also the KTH performance rules, if you hadn't seen them:
http://www.speech.kth.se/music/performance/performance_rules.html

best wishes

alex
Stephen Tetley
2013-12-31 16:01:36 UTC
Hi Evan

Maybe there is something to be mined from the "OM-Chroma" research at IRCAM?

http://repmus.ircam.fr/cao/omchroma

It looks like they are using different timing resolutions to get
expressive control of "synthesizers" - in this case via Csound.

Best wishes and Happy New Year to everyone on the list.

Stephen
Evan Laforge
2014-01-02 03:36:54 UTC
Interesting, thanks for the link. This system is new to me. What
follows is just me thinking out loud, trying to understand what it
means; I'm not sure it's actually interesting to anyone on this list,
so skip if you want...

I'm not sure if the paper is written unclearly, or if I'm just not
good at understanding academic papers.