
Composing With Crystallography


DOI: https://doi.org/10.33008/IJCMR.2019.06 | Issue 1 | March 2019

Author: Owen Lloyd, Royal Welsh College of Music and Drama


Abstract

In 2016, I started a collaborative project with architects and crystallographers from Cardiff University to explore creative opportunities within crystallographic datasets and modelling algorithms. This composition research developed and extended their Leverhulme-funded research, which was already underway. Outputs include music, generative compositions, composition tools, audio-visual work and an installation in Berlin.

Research Statement

This ‘route-map’ discusses the processes behind a group of compositions, Incomm1 through to Incomm10, and two generative works, Berlin Incommensurate and Incommensurate Visualisations, that arose from distinct phases in a body of composition research. This research extends a larger, Leverhulme-funded research project bringing together architects and crystallographers to investigate the creative possibilities available within crystallographic data and modelling algorithms. Architecture and crystallography sit at opposite extremes of scale within applied research. Crystallography explores the relationships between crystalline materials and their structures and geometries at the atomic scale. Architects Sergio Pineda and Mallika Arora, and crystallographers Kenneth D. M. Harris, Benson M. Kariuki and P. Andrew Williams, from Cardiff University, have come together to explore how properties within these molecular geometries can translate when scaled up to the dimensions of spatial or material design. In order to facilitate this, an application has been coded that allows creative practitioners to access crystallographic data within CAD software (Pineda et al., 2016).


Further to this design research, this statement sets out a body of composition research investigating how crystallographic modelling algorithms can be used to make sound and music. Rather than exploring the symmetrical geometries of the architects’ research, I based this project on a model for describing the asymmetrical dimensional relationships within crystalline materials that replicate with non-repeating structures. To investigate these incommensurate structures, I made an application which sonifies them through the creation of scales, time intervals and synthesis processes, as well as using them to control a host of other musical parameters. This application forms the primary toolset for the research; its development and modification, as well as the works that emerged throughout the process, are described below.


Data Sonification


Functionally, my focus is on sonification as creative practice, and this research in no way claims to elucidate the data in any practically scientific way. Data sonification has its roots in scientific practice but is now well established within the arts (Hermann, Hunt and Neuhoff, 2011). Practitioners working with sound and music have long explored the structural and timbral potential within extra-musical data sets: John Cage’s Reunion (1968) drew on the structures inherent within a game of chess to make its music, and his Atlas Eclipticalis (1961-62) superimposed a star map onto music staves to generate its events. The term sonification provides a useful lens through which to view a number of works that existed before it was established. Works such as Alvin Lucier’s Music on a Long Thin Wire (1977), and the two Cage pieces mentioned above, can all be usefully discussed in terms of their sonifications. Music on a Long Thin Wire, for example, offers sound material directly linked to phenomena outside of music, and examining it as sonification brings about a causal mode of listening seemingly at odds with the abstract nature of its sound world. Within the scope of this paper, however, sonification is used in its relatively traditional sense, with a numerical dataset being rendered as sound.

Incommensurate Structures in Crystallography


Williams has been my main point of contact with the crystallographers, and his patience with my stumbling understanding of crystallography, allied to his considerable coding skills, have been instrumental to my ability to conduct my portion of the research. Williams describes incommensurate structures in crystallography as follows:


Conventional crystalline materials are those in which the component atoms/molecules are arranged with long range translational order in three dimensions: the entire material is constructed from a well-defined structural unit that replicates in three-dimensional space. They have three translational periodicities. For some materials, on the other hand, the arrangement of the component atoms in three-dimensional space can be described only by invoking four or more translational periodicities. Such materials are described as “incommensurate”. Although existing in three-dimensional physical space, the material is now no longer periodic in three dimensions but is periodic in four or higher dimensions. As such its appearance is not that of a conventional, ordered crystalline material. The three-dimensional structure of the incommensurate material represents a three-dimensional slice through a higher dimensional superspace.

He then describes the model upon which the compositions are based:


The idea of an incommensurate material can be represented by creating a series of sine waves, arranged in a periodic fashion in two dimensions with the start point of each sine wave offset along a diagonal. This represents a two-dimensional structure. Drawing a line through the set of sine waves creates a one-dimensional slice through the two-dimensional structure. The points of intersection of the line and the sine wave will be incommensurate and the values of the distance between them will never be found to repeat exactly (Fig.1).

This model was suggested and outlined by Harris and then coded by Williams. It was written in Python and runs within an application, coded by myself, in Max.
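As a way of making this concrete, the following is a minimal Python sketch of one reading of the model, not Williams’s script itself: sine curve k sits at height k·dy, is offset horizontally by k·dx, and a straight line through the stack is intersected with each curve by bisection. All parameter names and default values are illustrative assumptions.

```python
import numpy as np

def incommensurate_points(n_waves=100, dy=1.0, dx=0.31, amp=0.4,
                          freq=2.0, gradient=1.0):
    """Sine wave k sits at height k*dy, offset horizontally by k*dx;
    the line y = gradient*x slices through the stack. Returns the x
    positions of one crossing per wave, scaled to [0, 1]."""
    points = []
    for k in range(n_waves):
        # Root of gradient*x - k*dy - amp*sin(freq*(x - k*dx)) = 0.
        # The bracket [lo, hi] always contains a crossing because the
        # line enters band k below the curve and leaves above it.
        f = lambda x: gradient * x - k * dy - amp * np.sin(freq * (x - k * dx))
        lo, hi = (k * dy - amp) / gradient, (k * dy + amp) / gradient
        for _ in range(50):                      # bisection refinement
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
        points.append(0.5 * (lo + hi))
    pts = np.array(points)
    return (pts - pts.min()) / (pts.max() - pts.min())  # scale to [0, 1]

intervals = np.diff(incommensurate_points())  # spacings that never repeat exactly
```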



FIGURE 1 (image: Owen Lloyd)

The Data


The fundamental mathematical process behind the research exists as a program written in Python and nested within Max using the py/pyext Max externals [1]. These run a Python script within Max, outputting the results to the patch for further use. These results initially take the form of 100 floating-point values, scaled between 0 and 1, that describe the distances from zero of the points of intersection in the model described above. The list of values is processed in different ways and then stored in coll objects for later use.


The first process performs a logarithmic function on the data, and then multiplies the results by 20000. This creates the scale by mapping the values from their original range of 0 to 1 to a range between 0 and 20000 hertz, a spread that sits within a useful range once the logarithmic relationship between octaves is accounted for. This does generate frequencies outside the range of most human hearing, but it avoids the arbitrary imposition of a ceiling, or floor, of hertz values, enabling an individual’s hearing to dictate the range: the frequencies are available for those who can experience them.
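A sketch of this mapping follows, assuming one plausible logarithmic curve (log10(1 + 9x), which keeps the intermediate values in [0, 1] before the multiplication); the original patch may use a different function.

```python
import math

def to_frequencies(values, ceiling=20000.0):
    # Apply a logarithmic function to each 0-1 value, then multiply
    # by 20000 to obtain hertz, as described above. The exact curve
    # is an assumption.
    return [ceiling * math.log10(1.0 + 9.0 * v) for v in values]
```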

Scales


The first musical structures explored through the data were scales. This was done by creating a simple mixer that realised the scales generated through the Python function as one hundred sine wave oscillators whose volumes could be modulated by sliders. A modulatable lag was added to the array of sliders, allowing sustained tones to be faded in and out. In this way the early stages of the research could focus on the frequencies available in each scale, and how they interacted with each other. A visualisation of the frequency spread was also coded, giving a very quick indication of the structure of each scale and quickly leading to a realisation concerning the spread of values: depending on the variables input into the code, the output was stepped to a greater or lesser extent. This produced scale structures in which very tight clusters of frequencies were punctuated by large steps (Fig.2). The acoustic implication was that a scale could be found whose dominant characteristic was its modulation of beat frequencies.
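The beating arises because tightly clustered sine tones interfere at the rate of their frequency differences: two tones f1 and f2 beat at |f1 − f2| Hz. A small sketch with illustrative frequencies:

```python
import numpy as np

SR = 44100
t = np.arange(SR * 4) / SR                  # four seconds of audio
cluster = [440.0, 441.5, 443.0]             # an illustrative tight cluster
mix = sum(np.sin(2 * np.pi * f * t) for f in cluster) / len(cluster)
# 440 and 443 Hz beat at 3 Hz; the middle tone adds slower 1.5 Hz beating.
```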



FIGURE 2 (image: Owen Lloyd)

Synthesis Methods


A number of synthesis processes are used in the application, based, in the main, on two wavetable oscillators. To create these, a larger dataset is needed, to avoid an oscillator with a sample length so short that it produces simply the sound of its own aliasing. An oscillator of 512 samples is created by adjusting the original Python script to allow for a larger output. This is used to create a modulated sine wave oscillator, the result of applying a sine function to the values. A second oscillator is made by scaling the intervals between the intersections – rather than the accumulating values of the scale – to a set of values between -1 and 1, creating a wavetable based very purely on the dataset. These oscillators are then used to make an FM oscillator that refers to the scale to determine the frequencies available for the carrier and modulator waves. An FM index value between 0 and 1 is determined by the raw output from the Python script. Additive synthesis is also present: 10 data points from the Python algorithm, taken at index intervals of 10 – the value at index 0, index 9, index 19 and so on – and scaled between 1 and 10, are used as a set of ratios by which to multiply a fundamental frequency. This results in an additive oscillator with 10 partials whose frequencies are determined by the data and are representative of the overall shape of the scale.
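A sketch of these three oscillator sources in Python, under stated assumptions: the scaling of the sine function’s argument and the exact index set are my reading of the description above, not the patch itself.

```python
import numpy as np

def wavetables(data512):
    """data512: the enlarged, 512-value output of the Python model."""
    # Oscillator 1: a sine function performed on the values (the 2*pi
    # scaling is an assumption about how the values index the sine).
    sine_table = np.sin(2 * np.pi * np.asarray(data512))
    # Oscillator 2: the intervals between intersections, rescaled to
    # [-1, 1] to act directly as a wavetable (511 samples here).
    iv = np.diff(data512)
    interval_table = 2 * (iv - iv.min()) / (iv.max() - iv.min()) - 1
    return sine_table, interval_table

def additive_partials(data100, fundamental):
    """Ten data points at index 0, 9, 19, ... 89, scaled to [1, 10],
    used as ratios by which to multiply a fundamental frequency."""
    picks = np.asarray([data100[i] for i in [0] + list(range(9, 90, 10))])
    ratios = 1 + 9 * (picks - picks.min()) / (picks.max() - picks.min())
    return ratios * fundamental              # ten partial frequencies
```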


Sequencing


The next step in the research phase was to code a series of sequencers. These take the form of four discrete sequencers with multislider interfaces: two multisliders for time values and a corresponding pair for pitch values. The time sequencers are scalable to a range within a controllable maximum duration, providing an opportunity to explore micro-timings within the dataset as well as longer-form musical structures. To shape the notes, amplitude envelopes use the dataset to determine attack and decay values. All sequencers are independently scalable in terms of their number of events, and the duration sequencers also have a note-off function to create rests of a length determined by the data.


Each of these pairs of sequencers focuses on different modes of synthesis from the processes described above. One pair uses the FM oscillator and the other offers a choice between the modulated sine wave, the wave created from the intervals, and the additive oscillator. These, in turn, are sent through a low pass filter with cutoff frequencies again determined by the scale.
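In outline, each sequencer pair behaves something like the following sketch, with printing standing in for the Max synthesis engine; the scaling of time values into a controllable maximum follows the description above.

```python
import time

def run_sequencer(scale_hz, time_values, max_dur=2.0, n_events=16):
    """One pass of a data-driven sequencer: durations are the 0-1 time
    values scaled into a maximum duration, pitches come from the scale,
    and attack/decay would likewise be drawn from the data."""
    durations = [v * max_dur for v in time_values[:n_events]]
    pitches = scale_hz[:n_events]
    for dur, hz in zip(durations, pitches):
        print(f"note {hz:8.2f} Hz for {dur:.3f} s")   # stands in for synthesis
        time.sleep(dur)
```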


Incommensurate Compositions


With these tools in place, the project moved on to a period of compositional research using the application (Fig.3). Initially this compositional phase fed into the development of the application, with features added to the toolset as gaps in functionality were identified and opportunities explored. As the toolset became more established and a final format was decided upon, the composition phase became more aesthetically driven, as sets of values were input to the Python code and the results investigated more fully in terms of pitch and time structures, and timbre. These Incommensurate Compositions came about through a process of improvisation with the application. Outputs from these improvisations were recorded and combined in editing software, resulting in a series of works that bear both the imprint of the data and my own compositional voice. There were times when I wondered if the works would ever be free of the very high frequencies that populate them, and I toyed with the idea of massaging the data to shift them down. But as I worked further it became clear that these frequencies were characteristic of particular scales, and that other scales were available to me with spreads of much lower frequencies added to plateaued clusters of higher tones. Timbrally, the works tend towards metallic tones; this is certainly a function of the FM and additive synthesis models that drive much of the sound, but the wavetable oscillators used within these processes seem to add harmonics which reinforce this aesthetic. Here is a selection of works from this phase of the research:




FIGURE 3 (image: Owen Lloyd)

Berlin Incommensurate


In January 2018, Berlin Incommensurate was exhibited in the gallery at Acud Macht Neu in Berlin as part of Transmediale Vorspiel 2018 (Fig.4). During the development of the work, questions around the tricky relationship between a time-based artwork, with its implicit start and end points and sequence of events, and a gallery space, without an expectation of timed entry and exit points, led to the decision to make a generative, installed composition. This allowed the structures within the data to operate over a much longer timescale, with time values that had dictated gesture at the scale of fractions of a second expanding to govern the overall structure of a far longer composition.



FIGURE 4


The project started with a streamlining process aimed at optimising a new version of the Max application for live, generative composition and performance. The composition environment that contains the nested Python algorithm is not agile when changes to the parameters of the Python model are made; there can be appreciable lag as new values are calculated. To mitigate this, twenty scales from the code were selected and stored in text files to be accessed far more rapidly by the coll object in Max. Another change limited the app to one sequencer, a decision taken in part for aesthetic reasons, as the two sequencers often ran asynchronously, resulting in chaotic material needing aesthetic management in the moment. This is not a problem with a human in control but is far more challenging to manage algorithmically. Finally, the synthesis engine was simplified, with the sustained pitch mixer using just pure sine waves and the sequenced material using only the FM synthesis algorithm. To balance this simplification, a layer of granular synthesis was added, with its controlling parameters governed by values from the dataset. This granular engine was set to recycle the last minute of the output of the rest of the synthesis engine, allowing sound that had passed to linger, and material that had been generated in one scale to be transformed by settings from another.
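The optimisation amounts to computing each scale once, offline, and reading it back from disk at performance time; a sketch reusing the functions from the earlier examples (the parameter sets are hypothetical):

```python
import numpy as np

# Offline: render each chosen scale to a text file, one value per line,
# so a scale change at performance time is just a file read (in Max,
# a coll lookup). `parameter_sets` is a hypothetical list of twenty
# input settings for the model sketched earlier.
for i, params in enumerate(parameter_sets):
    scale = to_frequencies(incommensurate_points(**params))
    np.savetxt(f"scale_{i:02d}.txt", scale)

def load_scale(i):
    """At runtime: no Python model call, no lag."""
    return np.loadtxt(f"scale_{i:02d}.txt")
```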


This streamlined set of processes is put to work within a compositional environment dictated by the creation and controlled decay of ordered, sequenced structures, combined with the accumulation, and sudden silencing, of sustained tones and their associated beat frequencies. These two cycles unfold asynchronously, providing a central spine for Berlin Incommensurate. To bring about the decaying sequences, a reset event is triggered. This sets parameters in the time and pitch sequencers to a single value from the data. These sequencers then play notes made in the FM synthesis engine with new modulation, index and envelope parameters derived from the data. As the composition unfolds, according to time values derived from the dataset, a modulation event is triggered. This picks a single time event and a single pitch event in each sequencer and changes them to new values from the data. As this process unfolds at a faster rate than the reset cycle, the sequence can degrade over time from its orderly beginning to a more clumsy, staggering structure. Then the next reset event is triggered and the cycle begins anew, different every time. Sitting behind these discrete note events is a ground of sustained sine waves with its own reset and modulation cycles. Here the reset event creates relative silence, as all one hundred frequencies are cut off at once. The shorter modulation cycle adds a random frequency from the data, at a random volume from the data. This builds up over time to a thick, often beating, texture that, aesthetically, is closely aligned with additive synthesis.
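The logic of these paired cycles can be sketched as follows; the counts and the use of random choice from the dataset are assumptions about behaviour the text leaves to the Max patch.

```python
import random

def reset_and_modulate(data, n_steps=16, mods_per_reset=40):
    """Each reset collapses the sequencer to a single data value; the
    faster modulation cycle then swaps one time step and one pitch
    step at a time, so order degrades until the next reset."""
    while True:
        seed = random.choice(data)
        times = [seed] * n_steps             # reset: orderly, uniform
        pitches = [seed] * n_steps
        for _ in range(mods_per_reset):      # modulation outpaces reset
            times[random.randrange(n_steps)] = random.choice(data)
            pitches[random.randrange(n_steps)] = random.choice(data)
            yield times, pitches             # play one degraded pass
```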


Allied to these cycles are other parameter changes, the most forceful of which is a shift to another of the twenty scales available. This affects everything, from pitches to time values, from spectral content to synthesis parameters, making it a very strongly voiced variable in the work. To promote variety and longer-term change, the scales available range widely: some have a relatively smooth distribution where others have an unevenly stepped range of values. A few scales cluster together at very low values, making their sections faster moving and their pitch characteristics more focussed on beat frequencies (Fig.5). Other parameters that change with either the reset or modulation events are reverb, granulation and equalisation settings. Here you can find a thirty minute, unedited slice from the application’s output:





FIGURE 5 (image: Owen Lloyd)

Incommensurate Visualisations


Since the installation in Berlin, the application has been developed in different directions in attempts to link the model with other disciplines and sound environments. The first iteration uses the model to create visual material, both in accompaniment to the generative work and as a standalone route to visual outcomes. A module has been added to the generative work that produces a sequence of coloured lines in red, green and blue which scan across the screen, fading to white as they reach the time position of the next reset event. Red lines appear horizontally and vertically when modulation events occur. This provides simple visual feedback about the imminent structure of the work, and also grants access to the way in which it is made up of discrete, overlapping processes with shifting phase relationships (Fig.6). Here is a section of video to demonstrate this visual feedback:
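The fade is, in effect, a linear interpolation from the line’s colour towards white as the elapsed fraction of the reset cycle approaches one; a minimal sketch:

```python
def fade_to_white(rgb, phase):
    """phase runs from 0 to 1 between now and the next reset event."""
    return tuple(c + (1.0 - c) * phase for c in rgb)

fade_to_white((1.0, 0.0, 0.0), 0.5)   # a red line halfway there: (1.0, 0.5, 0.5)
```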




FIGURE 6 (image: Owen Lloyd)

In a further, separate development, away from any thought for sound, I made a version of the application simply to visualise the twenty scales from the generative work. This started as a simple patch that distributes vertical lines across a screen, giving them x coordinates according to some of the scales. Screenshots of these were then taken and layered together in image editing software (Fig.7). This process has since been taken further and is ongoing. At present the software assigns colours to the lines and layers them across the screen. Over time they build up, mixing colours and forming patterns based on the data (Fig.8).
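A sketch of the same idea using matplotlib (an assumption; the original is a Max patch plus image-editing software), with translucent colours standing in for the layering step:

```python
import matplotlib.pyplot as plt

def draw_scales(scales, colours=("red", "green", "blue")):
    """Each scale (values in [0, 1]) becomes a column of vertical
    lines; translucent layering approximates the compositing done
    in image-editing software."""
    fig, ax = plt.subplots(figsize=(8, 4))
    for scale, colour in zip(scales, colours):
        for x in scale:
            ax.axvline(x, color=colour, alpha=0.4, linewidth=1)
    ax.set_xlim(0, 1)
    ax.axis("off")
    plt.show()
```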

FIGURE 7 (image: Owen Lloyd)

FIGURE 8 (image: Owen Lloyd)

A Link to a Modular Environment


The most recent development in this research has been to create a procedural link between the dataset and an analogue synthesis environment in the form of a Eurorack modular synthesiser (Fig.9). Here, the application generates control voltage and gate signals that can be routed to the outputs of an eight-output, DC-coupled audio interface. In this case the application is set up for an Expert Sleepers ES-8 audio interface module, but the patch is customisable and could be set up for any interface. This application is now one of the foci of my general composition practice, giving me the dramatic changes that the scale changes afford, allied to material generated outside of the dataset. Here is an example of this integrated approach to the work:
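By way of illustration, control voltages can be written to a DC-coupled interface as steady sample values; the sketch below assumes the python-sounddevice library, a 1V/octave pitch standard and a ±10 V full-scale output, and any real interface (the ES-8 included) would need calibration.

```python
import numpy as np
import sounddevice as sd   # assumes the python-sounddevice package

SR = 44100

def pitch_to_cv(hz, volts_per_fs=10.0):
    """1V/octave CV (0 V at middle C), expressed as a sample value on
    the assumption that digital full scale corresponds to 10 volts."""
    return np.log2(hz / 261.63) / volts_per_fs

def play_cv(frequencies, step=0.25, channels=8):
    """Hold each pitch as a DC value on output 1, with a short gate
    pulse on output 2 at the start of every step."""
    frames = int(SR * step)
    for hz in frequencies:
        block = np.zeros((frames, channels), dtype="float32")
        block[:, 0] = pitch_to_cv(hz)            # control voltage out
        block[: frames // 8, 1] = 0.5            # gate pulse
        sd.play(block, SR, blocking=True)
```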





FIGURE 9 (image: Owen Lloyd)

Reflection


This research furnishes us with a suite of software environments for sonic and visual practice that place a crystallographic theory at their heart. Of most use to practitioners other than myself are the two composition applications. One is an environment allowing exploration of all available scales from an algorithm based on a model for describing incommensurate crystalline structures. The second takes a selection of these scales and makes them available for the generation of control data that interfaces with a modular synthesiser through a DC-coupled audio interface. Both of these applications provide procedural connections between creative practice and science, inviting data from science to both define and further colour sound and composition. Importantly, both are open-ended: with a knowledge of Max, both programs could be re-configured to suit different projects. In fact, a further aim is to write an application that is much more open, outputting scalable values, as lists or over time, in order to make them accessible in forms that don’t imply any particular outcome.


Reflecting more broadly, I find that working in detail with this dataset uncovers clear differences between these incommensurate structures and noise-based random structures. This is particularly interesting for my composition process as it changes the structures within my often indeterminate work in fundamental ways. The results still sit within indeterminate norms but now have a stepped underlying structure that provides more order than when using noise as a basis. This leads to the clustering of data events described above, which, when used to determine musical properties, results in idiosyncratic time and pitch outcomes. Switching between these structures in real time within a composition makes for compelling punctuation based not on dynamics or a single key change, tools that I might reach for outside this environment, but on the reconfiguration of every musical process at play.

Looking forward within my own work, these data structures are still generating new ideas and materials. I am developing a more powerful and modulatable version of the additive synthesis engine as its own application, again capable of linking to a modular synthesis environment. I am in talks with film-makers, designers, visual artists and choreographers in order to plug the data into new processes within interdisciplinary situations. The link to architecture is also ongoing, and I hope to be able to combine these data structures with geometries developed by Pineda through the project, but on a far larger scale, in order to realise agile built environments with an embedded incommensurate music.


References

  • Hermann, T., Hunt, A. and Neuhoff, J. G. (2011) The Sonification Handbook. Berlin: Logos.

  • Pineda, S. et al. (2016) The Grammar of Crystallographic Expression. Presented at: ACADIA 2016, Ann Arbor, Michigan, USA, 27-29 October.


Notes


[1] These can be found at: https://grrrr.org/research/software/py/.
