One of the principles of Sonicism is:
“A Waveform in one medium holds meaning in a second.”
A wave is the movement of energy through a medium. We are most familiar with 'sound', the movement of energy as waves in air. A wave can be analyzed mathematically through a process called Fourier analysis, and the results can be represented visually as a spectrum. The spectrum is an analytical artifact, but it is also a tool that allows for intuitive reckoning with the structure of the wave.
This is a spectral analysis taken from a fragment of whale song.
I use Sonic Visualiser, because it is awesome and because it is free. I remember seeing a pitch analysis of whale song as an undergrad in a music theory lecture. Later, reading Robert Cogan showed me how spectral analysis could be used for musical thinking.
The Y-axis of a spectrum (whether music or any other wave) represents frequency - think of it as the keys on a piano, with 'up' being 'higher'. The frequency range is divided into buckets, and the width of those buckets is an important consideration. On a piano, each key represents a half step. There are obviously an infinite number of smaller steps between each recognized pitch, and the same is true of the frequency buckets used for Fourier analysis.
[Now is not the time to get into details about the well-tempered systems, plural, used on the piano over the past four centuries. Suffice it to say that the above statement referring to equal half steps is incorrect by omission.]
So we define the granularity of the frequencies that will be represented in the analysis. A standard default is 1024 across the audible spectrum, roughly considered 20 - 20,000 Hz. The math is beyond me, but the upshot is that the buckets are spaced equally in hertz: a linear distribution of frequency, unlike the piano's keys, which are spaced equally in pitch.
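If you want to see those buckets for yourself, here is a minimal sketch in Python with NumPy. The 44.1 kHz sample rate and the 1024-sample window are my assumptions, not gospel:

```python
import numpy as np

sr = 44100      # sample rate in Hz (assumed: CD-quality audio)
n_fft = 1024    # analysis window size in samples

# Center frequency of each analysis bucket: spaced equally in hertz
freqs = np.fft.rfftfreq(n_fft, d=1/sr)

print(freqs[:3])     # [0.0, ~43.07, ~86.13] Hz
print(sr / n_fft)    # ~43.07 Hz: the constant bucket width
```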
The X-axis represents time. Just as we broke down the frequency range into buckets, so we break time into discrete windows. Each window of time captures the frequencies present in each bucket at that instant. From this information, our analytical tool paints a pretty picture.
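There is a catch worth sketching: one number sets both resolutions. A longer window gives finer frequency buckets but blurrier time, and a shorter window does the reverse. A rough illustration, with the sample rate again assumed:

```python
sr = 44100                        # samples per second (assumed)

for n_fft in (512, 1024, 4096):   # candidate window sizes
    window_ms = 1000 * n_fft / sr # how much time one window spans
    bucket_hz = sr / n_fft        # how wide each frequency bucket is
    print(f"{n_fft:4d} samples: {window_ms:5.1f} ms windows, "
          f"{bucket_hz:5.1f} Hz buckets")
```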
Fourier analysis is based on the insight that a complex wave can be represented as the sum of many sine waves. A complex form is revealed as a series of discrete waves, each at a particular frequency. The width of the Y-axis frequency buckets in the spectrum determines how precisely we can isolate each discrete wave. If those waves were added back together, the peaks and troughs would enhance or cancel each other, and the original complex waveform would be reconstructed.
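A toy demonstration of that insight: build a 'complex' wave from three known sines, find them again in the spectrum, then add them back together. The frequencies and sample rate are chosen for convenience, so each bucket is exactly 1 Hz wide:

```python
import numpy as np

sr = 8000                                   # sample rate (assumed)
t = np.arange(sr) / sr                      # one second of time points

# A 'complex' wave built from three sines of known frequencies
wave = (1.0  * np.sin(2*np.pi*440*t)
      + 0.5  * np.sin(2*np.pi*880*t)
      + 0.25 * np.sin(2*np.pi*1320*t))

spectrum = np.fft.rfft(wave)                # decompose into sine components
peaks = np.argsort(np.abs(spectrum))[-3:]   # the three strongest buckets
print(np.sort(peaks))                       # -> [ 440  880 1320] (1 Hz buckets)

rebuilt = np.fft.irfft(spectrum)            # add the sines back together
print(np.allclose(wave, rebuilt))           # -> True: original wave recovered
```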
Fourier analysis assumes that the wave repeats unchanged after the first period. Music obviously *does* change over time, as do most complex wave patterns. The Short-Time Fourier Transform (STFT), computed efficiently via the Fast Fourier Transform (FFT), performs Fourier analysis on each window of time and then sequences those slices of time together.
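Here is a minimal sketch of that slicing, using SciPy's stft on a gliding tone. The chirp is an invented test signal and the sample rate is assumed:

```python
import numpy as np
from scipy.signal import stft

sr = 44100                                  # sample rate in Hz (assumed)
t = np.arange(2 * sr) / sr                  # two seconds of time points

# A gliding tone: instantaneous frequency rises from 200 Hz to 2000 Hz
chirp = np.sin(2 * np.pi * (200 + 450 * t) * t)

# One Fourier analysis per 1024-sample window, sequenced along the time axis
freqs, times, Z = stft(chirp, fs=sr, nperseg=1024)

print(Z.shape)  # (frequency buckets, time windows): one spectrum per slice
```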
The results of Fourier analysis can be reversed to regenerate the original sound. They can also be manipulated before the sound is re-engineered. Visual forms can even be constructed from scratch to create new waveforms. This is a particularly rich source of compositional inspiration.
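A crude sketch of that reverse trip, in the same NumPy setup as above: analyze a sound, repaint its spectrum, then resynthesize. The frequencies are arbitrary test values:

```python
import numpy as np

sr = 8000                                  # sample rate (assumed)
t = np.arange(sr) / sr
wave = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*3000*t)

spectrum = np.fft.rfft(wave)               # analyze
freqs = np.fft.rfftfreq(len(wave), 1/sr)

spectrum[freqs > 1000] = 0.0               # repaint: erase everything above 1 kHz
filtered = np.fft.irfft(spectrum)          # reverse the analysis into a new sound
```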
Back to Sonicism. It matters not if the wave under analysis is taken from sound (energy vibrating in air), from the ocean (energy vibrating in water), or from the sun (energy vibrating in the electromagnetic spectrum). If the wave can be represented mathematically, it can be engineered in any other medium.
When NASA releases ‘The Sounds of Jupiter’, we should not imagine a giant booming noise emanating across the solar system. Well, we should imagine that, because that would be cool. But what has really happened is a representation of an electromagnetic wave as a sound wave. This process, moving waves from one medium to another, is called Transduction.
There’s a bit of coinage going on there; I use the term very loosely. A very familiar transducer is the mechanism in the human ear that takes energy waves in air and reforms them as electrical impulses in the human brain. The HiFi speaker does the same thing in reverse, taking the electrical impulses that originate at the piezo mechanism of the LP needle and groove and transferring them into vibrations of the speaker cone, which then activate energy in the air.
If we were to take the direct values of the sun's electromagnetism and represent them directly as sound, they would fall outside the audible spectrum. The values need to be mathematically modeled so that they appear within the desired range of the target medium. This process we call Transposition (a solid musical term). The math is complex because an octave is exponential, not linear: always twice the frequency. So 800 Hz is an octave above 400 Hz (a difference of 400), while 8,000 Hz is an octave above 4,000 Hz (a difference of 4,000).
And the octave is a relationship worth preserving in any medium.
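Here is one way to sketch that in code. The function name and the folding approach are my own invention, not a standard algorithm, and the 'green light' frequency is illustrative only. The point is that because the only operation is multiplication or division by two, pitch relationships survive the trip:

```python
AUDIBLE_LO, AUDIBLE_HI = 20.0, 20_000.0   # target medium: human hearing, in Hz

def transpose_into_audible(freq_hz: float) -> float:
    """Fold a frequency into the audible range by whole octaves only.
    Every shift is a factor of two, so any two inputs an octave apart
    come out either identical or an octave apart."""
    f = freq_hz
    while f > AUDIBLE_HI:
        f /= 2.0
    while f < AUDIBLE_LO:
        f *= 2.0
    return f

# Illustrative only: green light at roughly 5.6e14 Hz folds down ~35 octaves
print(transpose_into_audible(5.6e14))   # ~16298 Hz, well within hearing
print(transpose_into_audible(400.0))    # 400.0: already audible, untouched
```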
Transduction and Transposition give the artist a powerful set of concepts and tools. I write ‘music’ that is ultimately intended for electromagnetic ‘performance’. It is not ‘heard’. It is not really experienced. The musical forms exist only within the electromagnetic spectrum. (See notes on Voices of the Noosphere, for radio telescope and didgeridoo, as well as the Keynote Address on Music as Cultural Dialogue, which I presented at UNESCO in 2010.)
What if we create waveforms that would be 'played' within the Earth, a controlled seismic event? Or establish and control standing waves on the surface of a lake - like an enormous Chinese water bowl? Light patterns that are the direct corollary of a piece of music - not just a laser show, but the actual transposition of sound to light? The process is like some geomantic matrix, rich in the capability to psychogeographically manipulate the environment/noosphere.
I imagine this is how we will identify ET culture. Their ‘instruments’ will be galactic: manipulations of gravitational waves, structures of electromagnetism, formulations of time.