Analysis by Composition (Music)

The logical extension to research techniques that progress through stages of capture, representation, retrieval and analysis is then to test that analysis by playing back and comparing datasets with the original and comparative source materials. Bonardi states: "The musicologist is at the same time a listener and a composer, since analyzing a piece of music leads to 'rewriting' it."

(Bonardi, A., What the Musicologist Needs, International Symposium on Music Information Retrieval, 2000, http://ismir2000.ismir.net/papers/invites/bonardi_invite.pdf)
The range of software on offer to composers begins to stray beyond the scope of this article, but as in many areas of musicology, boundaries are difficult - and in some senses unnecessary - to define too closely. Applications such as Max/MSP and SuperCollider are powerful and sophisticated composition tools built on object-oriented programming environments, and they inevitably involve practitioners in analysing the components and structure both of the pieces they create and of those by others. Another framework for composition, also developed by IRCAM, is OpenMusic, a complete programming language that underpins the ML-Maquette application, another example of the layered and blended approach to application and function building within the music software community.

The OpenMusic environment allows for graphical representations of very complex entity relationships, showing musical objects, probabilistic rules and operational elements as blocks and wireframe connections. The visual language is built on the Common LISP Object System, and users are provided with a number of basic ‘classes’ and generic functions that represent musical structures such as notes, chords, sounds and break-point functions. The user then augments those classes with their own defined objects and sets inheritance relationships between all the entities. Combining programming functionality with an in-built music notation editor (and instant playback features) offers the user a truly dynamic research environment in which creative analysis blurs into composition in exactly the way that Bonardi describes.
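
To make the class-and-inheritance idea concrete, the short sketch below expresses a comparable structure in Java rather than in OpenMusic's native Common LISP Object System; the names (MusicalObject, Note, Chord) and the duration method are illustrative assumptions for this article, not OpenMusic's actual classes.

// An illustrative Java analogue of the OpenMusic approach: a base
// musical class that users extend with their own objects and
// inheritance relationships. All names here are hypothetical.
abstract class MusicalObject {
    abstract double duration();               // duration in beats
}

class Note extends MusicalObject {
    final int midiPitch;                      // e.g. 60 = middle C
    final double beats;
    Note(int midiPitch, double beats) { this.midiPitch = midiPitch; this.beats = beats; }
    @Override double duration() { return beats; }
}

// A user-defined subclass: a chord aggregates notes, much as a user
// might derive a new class from a basic one in OpenMusic.
class Chord extends MusicalObject {
    final Note[] notes;
    Chord(Note... notes) { this.notes = notes; }
    @Override double duration() {
        double max = 0;
        for (Note n : notes) max = Math.max(max, n.duration());
        return max;                           // a chord lasts as long as its longest note
    }
}

public class MusicalClasses {
    public static void main(String[] args) {
        Chord cMajor = new Chord(new Note(60, 1.0), new Note(64, 1.0), new Note(67, 2.0));
        System.out.println("Chord duration: " + cMajor.duration() + " beats");
    }
}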

Taking the theme of compositional analysis a step further, the field of algorithmic composition is a burgeoning area of activity and encapsulates many approaches to music creation. The Live Algorithms for Music (LAM) group (http://doc.gold.ac.uk/~mas01mc/LAM/) maintains links with, and is interested in, multi-disciplinary collaborations that encompass work in areas such as neural networks, evolutionary algorithms and swarm theory. One specific example that indicates the nature of this type of research is composition using cellular automata (CA) techniques and Java programming. Originally based on research that focused on building visual depictions, audio CA uses the same technique: each new cell is generated from the properties of the corresponding cell in the previous generation and the status of its adjacent cells. In visual CA models time increases as you move down the page, which corresponds closely to conventional representations of music, so the transposition from one mode to the other is compelling and can produce complex and interesting results, as the sketch below illustrates.
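
As a concrete illustration, the following minimal Java sketch runs a one-dimensional cellular automaton (Wolfram's Rule 110, chosen arbitrarily here) and reads each generation as one time step, sounding a pitch for every live cell; the cell-to-pitch mapping is an assumption for demonstration, not a method taken from the LAM group's work.

// A minimal audio CA sketch: each generation of a one-dimensional
// cellular automaton is read as a time step, and each live cell is
// mapped to a (hypothetical) MIDI pitch.
public class CellularMelody {
    static final int RULE = 110;  // elementary CA rule, assumed for illustration
    static final int WIDTH = 16;  // cells per generation
    static final int STEPS = 8;   // generations, i.e. time steps

    public static void main(String[] args) {
        boolean[] cells = new boolean[WIDTH];
        cells[WIDTH / 2] = true;  // single live seed cell

        for (int t = 0; t < STEPS; t++) {
            StringBuilder step = new StringBuilder("t=" + t + ":");
            for (int i = 0; i < WIDTH; i++)
                if (cells[i]) step.append(" note ").append(60 + i);  // 60 = middle C
            System.out.println(step);

            // Next generation: each cell's new state depends on itself
            // and its two neighbours in the previous generation.
            boolean[] next = new boolean[WIDTH];
            for (int i = 0; i < WIDTH; i++) {
                int pattern = (cells[(i + WIDTH - 1) % WIDTH] ? 4 : 0)
                            | (cells[i] ? 2 : 0)
                            | (cells[(i + 1) % WIDTH] ? 1 : 0);
                next[i] = ((RULE >> pattern) & 1) == 1;
            }
            cells = next;
        }
    }
}

Because the rule is deterministic, the same seed always yields the same sequence; varying the rule number or the seed row is what produces the variety of results described above.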

Another interesting field of research that straddles compositional and analytical approaches is the use of spectrograms (also known as sonograms) to visualise audio material (see fig. 4). These graphical views plot the frequency content of audio material over time and can also be generated ‘in reverse’: starting from an image, audio output is produced that corresponds to the elements on the x and y axes, where x represents time and y represents frequency.
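
A minimal sketch of the ‘reverse’ process might look like the following Java fragment, which treats a small hand-made brightness grid as a spectrogram and sums one sinusoid per frequency band; the grid, the band frequencies and the sample rate are all illustrative assumptions.

// 'Reverse' spectrogram synthesis: a brightness grid (x = time,
// y = frequency) is rendered as audio by summing one sinusoid per row,
// scaled by the pixel value in the current column.
public class ImageToAudio {
    public static void main(String[] args) {
        final int sampleRate = 44100;
        final double columnSeconds = 0.25;       // duration of one image column
        // Tiny 4x4 "image": rows are frequency bands, columns are time,
        // values are brightness in [0, 1].
        double[][] image = {
            {1.0, 0.0, 0.0, 1.0},                // 220 Hz band
            {0.0, 1.0, 0.0, 0.0},                // 440 Hz band
            {0.0, 0.0, 1.0, 0.0},                // 880 Hz band
            {0.0, 0.0, 0.0, 1.0},                // 1760 Hz band
        };
        double[] bandHz = {220, 440, 880, 1760};

        int samplesPerColumn = (int) (sampleRate * columnSeconds);
        double[] audio = new double[image[0].length * samplesPerColumn];

        for (int x = 0; x < image[0].length; x++) {       // time axis
            for (int y = 0; y < image.length; y++) {      // frequency axis
                if (image[y][x] == 0.0) continue;
                for (int s = 0; s < samplesPerColumn; s++) {
                    int n = x * samplesPerColumn + s;
                    audio[n] += image[y][x] * Math.sin(2 * Math.PI * bandHz[y] * n / sampleRate);
                }
            }
        }
        System.out.println("Rendered " + audio.length + " samples");
    }
}

Computing a spectrogram of the rendered signal would recover an approximation of the original grid, which is exactly the property that makes the image embedding described below possible.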

Fig. 4 Screenshot taken from Spectrogram

This type of tool is a featured component in various analysis software suites but has also been used by musicians such as Aphex Twin to ‘embed’ images within the sequence of audio tracks presented on commercially produced compact discs.
