Discussion:
[haskell-art] A framework for manipulating musical pitches, intervals and durations
Edward Lilley
2013-08-31 19:04:41 UTC
Greetings everyone. This is my first mailing list post, so apologies if
I'm unwittingly off-topic etc.

I've recently been working on a little project to aid my own
explorations into computer-aided composition & music
representation. I've (rather blandly) named it 'AbstractMusic', and it
can be found on Github[1].

It's designed around the idea of separating the process of producing
music into different layers of abstraction: each layer has its own
idea of what a pitch, an interval and a duration are, and a triple of
(pitch, interval, duration) types can be used to create a 'note'
type.

For example, among the pitch types there are:
- AbstractPitch1, which represents scale degrees
- AbstractPitch2, ordinary pitches (derived from AbstractPitch1 by
applying a scale)
- AbstractPitch3, a frequency in Hertz (derived from AbstractPitch2 by
applying a tuning system)

Each type makes as few assumptions as possible about what it might be
transformed into -- in particular, AbstractPitch2 does not assume
12-equal temperament, so C-sharps are distinct from D-flats -- it's up
to the tuning system to determine how to deal with
accidentals/augmentation/diminution etc.
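
To make the layering concrete, here's a deliberately simplified
sketch of the idea (illustrative only -- the definitions below are my
toy versions for this email, not the actual ones in the repository):

  -- Illustrative sketch only; not the real AbstractMusic definitions.
  data Name = C | D | E | F | G | A | B deriving (Show, Enum)

  newtype AbstractPitch1 = AbstractPitch1 Int      -- scale degree (0-based)
  data    AbstractPitch2 = AbstractPitch2 Name Int -- note name + accidental offset
  newtype AbstractPitch3 = AbstractPitch3 Double   -- frequency in Hertz

  -- A scale turns degrees into spelled pitches: here C major,
  -- ignoring octaves for brevity.
  cMajor :: AbstractPitch1 -> AbstractPitch2
  cMajor (AbstractPitch1 d) = AbstractPitch2 (toEnum (d `mod` 7)) 0

  -- A tuning system turns spelled pitches into frequencies: here
  -- 12-TET from middle C, which (unlike the general case) collapses
  -- C-sharp and D-flat onto the same frequency.
  twelveTET :: AbstractPitch2 -> AbstractPitch3
  twelveTET (AbstractPitch2 n acc) =
    let semis = [0, 2, 4, 5, 7, 9, 11] !! fromEnum n + acc
    in  AbstractPitch3 (261.63 * 2 ** (fromIntegral semis / 12))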

See the file Tuning.hs for implementations of some common syntonic
(meantone) and equal temperaments. Internally, pitches and intervals
are represented as elements of a free Abelian group -- i.e. as a pair
of integers giving the pitch or interval in a particular basis. This
means that most common tuning systems can be implemented quite
succinctly as a linear transformation of that internal
representation.
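
As a toy illustration of that (again just a sketch -- the basis and
the names below are chosen for this email, not taken from the code):
take (octave, fifth) as the basis, so that an interval is a pair of
integers, and a temperament just assigns a frequency ratio to each
basis element and extends linearly from there:

  -- Sketch only; not the library's actual representation.
  data Interval = Interval { octaves :: Int, fifths :: Int }

  data Temperament = Temperament { octaveRatio, fifthRatio :: Double }

  -- Extend the two generator ratios (multiplicatively) to any interval.
  realise :: Temperament -> Interval -> Double
  realise (Temperament o f) (Interval a b) = o ^^ a * f ^^ b

  -- 12-equal temperament: the fifth is 7 of 12 equal octave divisions.
  equal12 :: Temperament
  equal12 = Temperament 2 (2 ** (7 / 12))

  -- Quarter-comma meantone: the fifth is narrowed so that four fifths
  -- less two octaves make a pure 5/4 major third, i.e. 5 ** 0.25.
  quarterComma :: Temperament
  quarterComma = Temperament 2 (5 ** 0.25)

  -- A major third is four fifths up and two octaves down:
  majorThird :: Interval
  majorThird = Interval (-2) 4
  -- realise quarterComma majorThird == 1.25 (a pure 5/4)
  -- realise equal12      majorThird ~= 1.2599

Changing tuning system is then just a matter of changing the ratios
assigned to the generators (or, more generally, the linear map),
which is why the temperaments in Tuning.hs come out fairly short.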

I've produced one major example so far: a piece of keyboard music by
Guillaume Costeley that uses 19-equal temperament. See the file
Costeley.lhs, or this[2] page for synthesised (Csound) and typeset
(Lilypond) output. Please see the files Examples.hs and Canon.hs for
more examples, e.g. a fragment of Pachelbel's canon.

(You may notice that I'm no expert in Csound -- both the Csound and the
Lilypond output are rather primitive at the moment)


Edward Lilley



[1]: https://github.com/ejlilley/AbstractMusic
[2]: http://www.ugnus.uk.eu.org/~edward/costeley/
Franco
2013-08-31 23:09:52 UTC
This looks *great*!
For now I am just checking the output(s), but I am pretty interested in the
code too!
Evan Laforge
2013-09-03 19:57:45 UTC
It's undocumented, so it's hard for me to read. Also the math terms
are a bit over my head, which might be part of it. But it's
interesting, because I've been working on a similar system. Well,
similar only in that in order to represent pitches I wind up with
high- and low-level representations, but I think the actual
realization of that is totally different.

I just have a pitch as a function which takes a key and a set of
transposition signals (e.g. diatonic, chromatic, cents, hz) to a
frequency, and then a scale is just a Map String Pitch. So I think
it's more flexible than yours since the pitch is very general, e.g. it
can respond to any kind of transposition or key it feels like, and
e.g. retune over time or relative to the intonation of other pitches.
But then less flexible since it's a function, so you can only
manipulate it through its arguments. That makes a lot of analysis
impossible. I think this tradeoff between code and data is
fundamental.
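
In heavily simplified types -- the names below are just for
illustration in this email, not my actual code -- it looks something
like this:

  import qualified Data.Map as Map

  -- Simplified illustration only; not the real definitions.
  data Transpose = Transpose
    { diatonic  :: Double   -- in scale steps
    , chromatic :: Double   -- in chromatic steps
    , cents     :: Double
    , hz        :: Double
    }

  type Key   = String
  type Pitch = Key -> Transpose -> Double   -- resulting frequency in Hz
  type Scale = Map.Map String Pitch

  -- e.g. a pitch fixed in 12-TET that listens only to the chromatic,
  -- cents and hz transpositions and ignores the key and diatonic steps:
  fixedPitch :: Double -> Pitch
  fixedPitch baseHz _key t =
    baseHz * 2 ** ((chromatic t + cents t / 100) / 12) + hz t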

Can you talk about why you chose this particular representation and
what it allows you to do? I guess you could write transformations at
various levels, e.g. diatonic transposition at AbstractPitch1, or
something else at AbstractPitch2 (I don't know what an "ordinary
pitch" is), and perhaps impose vibrato or something at AbstractPitch3
(though your pitches seem to be discrete and not signals -- I guess
you could just make a list of them). But these uses probably
aren't what you had in mind. The example piece seems like it's just a
note list, so it doesn't really demonstrate the interesting things
made possible by the three layer thing.
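
By transformations at various levels I mean something like the
following (with made-up stand-ins for your types):

  -- Made-up stand-ins for the real types, just to illustrate:
  newtype Degree = Degree Int    -- an AbstractPitch1-style scale degree
  newtype Freq   = Freq Double   -- an AbstractPitch3-style frequency in Hz

  -- Diatonic transposition is trivial at the degree level...
  transposeDiatonic :: Int -> Degree -> Degree
  transposeDiatonic n (Degree d) = Degree (d + n)

  -- ...while vibrato only makes sense at the frequency level; with
  -- discrete pitches it would have to be a list of frequencies.
  vibrato :: Double -> Int -> Freq -> [Freq]
  vibrato depth samples (Freq f) =
    [ Freq (f * (1 + depth * sin phase))
    | i <- [0 .. samples - 1]
    , let phase = 2 * pi * fromIntegral i / fromIntegral samples ]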

Also, it seems like you're focusing on scales with 7 diatonic steps?
The Name type with its A-G implies that it's hardcoded that way.
Edward Lilley
2013-09-04 15:17:51 UTC
Hi!
Post by Evan Laforge
It's undocumented, so it's hard for me to read. Also the math terms
are a bit over my head, that might be part of it. But it's
interesting, because I've been working on a similar system. Well,
similar only in that in order to represent pitches I wind up with high
and low level representations, but I think the actual realization of
that is totally different.
I hope to write some proper documentation soon -- you may notice that
some of the comments have turned into mini-essays, in preparation for
this. In the latest commit I've tried to make the organisation of
Music.hs a little more obvious, and Examples.hs a bit more helpful.

I don't consider myself to have much of a head for mathematics, but
having read up on the relevant bits, I decided that -- given that it was
simple enough for me to understand -- there was no reason not to at
least write down the relevant formalisation (and formalising things
mathematically is The Haskell Way, right? :-)).
Post by Evan Laforge
I just have a pitch as a function which takes a key and a set of
transposition signals (e.g. diatonic, chromatic, cents, hz) to a
frequency, and then a scale is just a Map String Pitch. So I think
it's more flexible than yours since the pitch is very general, e.g. it
can respond to any kind of transposition or key it feels like, and
e.g. retune over time or relative to the intonation of other pitches.
But then less flexible since it's a function, so you can only
manipulate it through its arguments. That makes a lot of analysis
impossible. I think this tradeoff between code and data is
fundamental.
Pitch-as-a-function is a nice idea -- but, as you suggest, I'm looking
for something which gives me analysis/manipulation-flexibility, at the
expense of representation-flexibility.
Evan Laforge
2013-09-04 19:58:17 UTC
Post by Edward Lilley
I don't consider myself to have much of a head for mathematics, but
having read up on the relevant bits, I decided that -- given that it was
simple enough for me to understand -- there was no reason not to at
least write down the relevant formalisation (and formalising things
mathematically is The Haskell Way, right? :-)).
True, and I should probably also make the effort to learn some basic
vocabulary about sets and groups, since those seem to come up a lot in
programming.
Post by Edward Lilley
The main thrust is that, quite apart from tuning systems being
representable as groups with one generator (equal temperaments) or two
generators (meantone temperaments) (or more generators), the *notation*
of intervals in common musical practice defines a two-generator group
regardless of the tuning system in use[2].
Indeed, and even in equal temperament, enharmonic spelling communicates
important information.

And even post-Baroque European music is mostly not equal tempered! As
far as I can tell, only small ensembles involving a keyboard are. And
electronic music, of course.

By "generator" you mean a starting pitch and a way to modify it to get
the next pitch? I can see how equal temperament then only has one,
e.g. just add 100 cents repeatedly. I don't know enough about
meantone to know how that would apply, but I expect it wouldn't work
for a just scale, unless you want to say there are 7 generators :)
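
To make my guess concrete (this is only my reading of "generator",
and the names below are mine, not the library's): in an equal
temperament every notated interval collapses onto an integer number
of a single step, e.g. in the 19-TET of the Costeley piece:

  -- My own gloss on "one generator", not taken from the library:
  step19 :: Double
  step19 = 2 ** (1 / 19)   -- the single generating step of 19-TET

  -- In 19-TET the augmented unison is 1 step and the minor second is
  -- 2 steps, so C-sharp and D-flat come out as genuinely different
  -- pitches (given here as ratios above C):
  cSharp, dFlat :: Double
  cSharp = step19 ** 1
  dFlat  = step19 ** 2

whereas presumably a meantone system keeps two independent step
sizes around, hence two generators.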
Post by Edward Lilley
You're completely correct about what I have "in mind" -- I'm currently
only really concerned with what I consider to be "essential" to the
music theory -- pitches, intervals and durations. This represents a bit
of early music-bias (and general arrogance) on my part, so sorry about
that!
There's another tradeoff where the more opinionated a notation is, the
more powerful and concise it is within its area, though less general.
So it's entirely appropriate to choose an area and focus on it!
Post by Edward Lilley
Post by Evan Laforge
Also, it seems like you're focusing on scales with 7 diatonic steps?
The Name type with its A-G implies that it's hardcoded that way.
That's correct. It always seems like a nice idea to say things like
"this package is not limited to Western classical music, and, in fact
only implements it as a special case" (most of the music-related
packages on Hackage seem to make some variant on that statement), and I
think such attempts are laudable, but I'm not currently claiming any
such generality -- it's not like there's a shortage of Western classical
music to play around with.
I doubt those packages actually *are*, I think they just say that :)
Post by Edward Lilley
That said, it might be nice to implement some scales from the Arabic or
Indian traditions, to demonstrate some of the more exotic tuning systems
in Tuning.hs (cf. TET17 and TET22). Also, I'm considering adding a
hexachord-based pitch representation, to more accurately write down
pre-1500 music.
Indian music is generally understood to be a just 7-tone system.
However, in practice you never hear a single tone in isolation unless
it's the 1 or 5, as they are all embellished with microtonal
variations. So as far as I can tell, the concept of intonation
doesn't really apply. For instruments with frets or sympathetic
strings, you'd probably have to go do a survey to see what people seem
to think "in tune" is, but I'd guess people wouldn't feel the need to
tune that precisely, or it would basically be 5- or 7-limit just.

Arabic and Turkish music I believe does have a concept of intonation
and a scale with a set number of degrees. Though as usual this is
probably just a theoretical approximation of a pre-existing system and
you'd need to know more about the music to know how useful it is
outside of a textbook.

With regard to pre-1500 European music (church I assume, nothing else
seems to be written down), I'd be quite interested in seeing what your
experimentation yields, I think there's probably a lot of poorly
explored territory in there. You might also try to get some
nicer-sounding samples or Csound patches; having a nice timbre is at
least as important as having the proper intonation.
Post by Edward Lilley
Hope that answers some of your questions! Sorry for the interminable
essay.
Not at all, thank you for the interesting discussion.
