Transduction

Flatland, The Analogy of Dimensions and Spectral Composition

 

I've been wrestling with some language in a couple of recent posts (here and here) to make a point about the perception of sound within time.  I used the term 'time-laden', and talked about the perceptual shift required to recognize a piece of music - more generically, a sonic event - not as a sequence of events unfolding within time, but as an architectural object that can be considered as a single thing, a solid.

A spectrogram is a good visual cue.  The spectrogram shows change in frequency over time, compared to the waveform, which shows change in amplitude over time.  Amplitude may change over time while frequency remains constant.  So what remains constant if time changes?
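As a rough sketch of the distinction, here is a minimal spectrogram computed with NumPy (the function name and signal are my own illustration, not taken from the posts): a tone whose amplitude fades shows a changing envelope in the waveform, but a single steady frequency band in the spectrogram.

```python
import numpy as np

def spectrogram(signal, sample_rate, window_size=1024, hop=512):
    """Naive short-time Fourier transform: magnitude per frequency bin, per time window."""
    columns = []
    for start in range(0, len(signal) - window_size + 1, hop):
        frame = signal[start:start + window_size] * np.hanning(window_size)
        columns.append(np.abs(np.fft.rfft(frame)))  # one time-slice of the spectrogram
    return np.array(columns).T  # rows: frequency bins, columns: time windows

# A 440 Hz tone whose amplitude fades from full to one-tenth:
# the waveform's envelope changes, the frequency does not.
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t) * np.linspace(1.0, 0.1, rate)

spec = spectrogram(tone, rate)
peak_bin = int(np.argmax(spec[:, 0]))
print(peak_bin * rate / 1024)  # the bin centre sits near 440 Hz in every column
```

Every column of `spec` peaks in the same bin; only the peak's height falls over time.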

I'm guided here by the classic text Flatland.  A Square, habituated to life in a two-dimensional world, meets a Cube.  This mysterious being can appear in the middle of closed rooms, and can speak of unnatural concepts like the 'center' of a Square.  The Cube is benevolent, and allows the Square to learn what is known as the Argument from Analogy of Figures.  The Square repeats his lesson:

  • In One Dimension, did not a moving Point produce a Line with two terminal points?
  • In Two Dimensions, did not a moving Line produce a Square with four terminal points?
  • In Three Dimensions, did not a moving Square produce - did not mine eye behold it - that blessed Being, a Cube, with eight terminal points?

The Cube's benevolence finds a limit when the Square comes up with this:

  • And in Four Dimensions, shall not a moving Cube result in a still more divine Organization with sixteen terminal points?

The Square is locked in jail (doodled on the back of an envelope, perhaps).  

What I have been calling Kinaesthetics will, I hope, evolve into language that explains, activates, and demonstrates an experience of sound that is to our casual listening what the Cube is to the Square.  The sonic event could be a familiar piece of music.  It could be 'noise' (another word whose definition I struggle with).  It could be the sound of a railway engine, or the echo of a landscape.

We perceive sound in linear chronology, I suspect, because our physiology resides in that same 'time-laden' chronology.  Fueled by metabolism, our bodies exist and consciously perceive a slice of the Present, but only the Present.  It is a Tyranny of Habituation.  But just as the Cube is a single entity viewed in slices by the two-dimensional Square, we are equally limited in our perception of objects as they unfold in time.

Sound has a peculiar privilege in the human.  Sound impacts the physiology directly, as energy waves in air.  Sound also acts as a seed object for the precipitation or accretion of meaning.  The unit of meaning is Memory - in Psychogeography I've referred to this as a Quidd.  The fundamental aspect of the universe is vibration.  Through the Principles of Sonicism (the mechanics of Transduction), sound creates a liminal interface to this vibration.

 

 


Transduction, unheard music and Dreams of a Debauched Dodo

A while ago, in 2000, I gave a paper at a conference on the Art and Science of Extraterrestrial Message Composition.  The conference was co-sponsored by SETI and by Leonardo, and Doug Vakoch wrote up a nice article afterwards.

I was interested in the physiology of sound, how our shaping of sound represents our bodies.   

My brief was that if we performed analysis on a large enough sample, we could identify some generic gestures, probably based on units of basic metabolism, and use that as material to construct an outbound message to ET.  

The tool for this was the spectrogram, which I started to write about in an earlier blog on the Fourier transform.  The spectrogram takes the 'sound' out of 'music', presenting the material as an artifact outside of culture.

“A Waveform in one medium holds meaning in a second.”  The Paris paper started to suggest the idea of sound as an architectural object.  We could take all these sonic objects ("pieces of music"), create a morphology of generic structures, then transduce those into the electromagnetic spectrum.

(Elsewhere, I've commented on limitations of radio astronomy as the format for ET communication.)

The key to transduction is a shared measurement of Frequency.  Everything that unfolds in time can be measured as a frequency.  If something happens 10,000 times per second (10,000 Hertz) in air, it can happen 10,000 times per second in electromagnetism.  Or in water.  Or in soil.  Or once every 10^17 seconds.
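A back-of-envelope sketch of that shared measurement: the frequency stays the same across media, while the propagation speed (and hence the wavelength, speed divided by frequency) changes.  The speeds below are standard approximate values; the variable names are my own.

```python
# The same 10,000 Hz event carried by different media: the frequency is
# shared, only the propagation speed (and hence wavelength) differs.
SPEEDS = {
    "air (sound)": 343.0,        # m/s, approximate at room temperature
    "water (sound)": 1480.0,     # m/s, approximate
    "vacuum (EM)": 299_792_458,  # m/s, the speed of light
}

freq = 10_000  # Hz
for medium, speed in SPEEDS.items():
    wavelength = speed / freq  # metres per cycle
    print(f"{medium}: {wavelength:.4g} m per cycle")
```

Ten thousand cycles per second spans about 3.4 cm per cycle in air and nearly 30 km per cycle as an electromagnetic wave; the count per second is identical.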

To reiterate: when I say 'music', here I am not referring to cultural styles.  I am referring to the conscious manipulation of pattern as represented by a waveform that can be applied to any medium.

SCAN0045

This handwritten graph sketches out the idea.  At the center is a linear axis, with exponential values representing frequency from Zero to Infinity.

One cycle per second (1 Hz) is the human heartbeat.

The chart breaks down into bands of activity, representing different sources of data in different media.  I've been talking about Electromagnetism and the Audible Spectrum, both shown here: but I also wanted to include biological, seismic, and galactic time scales.  As the chart develops I expect to include other animal characteristics, such as whale song and insect noise.  I show diurnal activity (ocean tides), the solar calendar, and then a jump up to the current age of the universe.  I could also draw a block that represents the frequency of the Martian orbit, the rotation of the Milky Way, the Age of Aquarius, or the frequency of a human life.

The graph represents transduction as a jump between bands - for example from audible to electromagnetic, or seismic to sonic.  This is more than just manipulating pitch to represent data: there is a three-dimensional mathematical artifact moving between media.

If we take the frequencies of visible light (let's say around 10^15 Hz) and Transduce them to the Audible, we are first changing the mechanism generating the energy, so that the energy now moves through air.  But we also need to consider Transposition.  10^15 Hz, if played on an instrument (a hell of an instrument), is far beyond the range of human hearing - although sonoluminescence is an intriguing possibility.  For us to 'hear' the pattern created by light, the number needs to be divided down to something more like 10^3 Hz.
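Dividing down by whole octaves - repeated halving - is one way to do that transposition while preserving the ratio relationships between frequencies.  A minimal sketch (the function is my own illustration, not a tool from the posts):

```python
def transpose_into_range(freq, lo=20.0, hi=20_000.0):
    """Halve (or double) a frequency by whole octaves until it lands in [lo, hi].
    Shifting by powers of two preserves octave relationships between frequencies."""
    octaves = 0
    while freq > hi:
        freq /= 2.0
        octaves -= 1
    while freq < lo:
        freq *= 2.0
        octaves += 1
    return freq, octaves

# Visible light at roughly 10^15 Hz, shifted down into the audible band:
audible, shift = transpose_into_range(1e15)
print(audible, shift)  # about 14,552 Hz, 36 octaves down
```

Thirty-six halvings bring 10^15 Hz to around 14.5 kHz - audible, if only just, and still in the same octave relationship to any other frequency transposed by the same amount.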

The math is interesting, because the fundamental relationships between frequencies are ratios rather than absolute measurements.  The energy required to double the frequency is always the same: to move from 440 Hz to 880 Hz requires the same energy as moving from 44,000 Hz to 88,000 Hz.  Higher frequencies have a lower energy distribution.

Back to Paris for a moment.  The output of the Fast Fourier Transform (FFT) is a series of time-sequenced frequency arrays, which suggested to me that using some kind of algorithmic analysis (maybe Humdrum?) I could look for shared sequences across a wide body of recorded samples.  Those sequences could occur in any of the measurable vectors - beginning with pitch, but also looking for patterns in the overtones, or shared sequences of amplitude changes.
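One crude way to hunt for such shared sequences, sketched here as a toy (the function, the n-gram approach, and the pitch-bin values are all hypothetical illustrations, not the analysis from the paper): quantize each sample to a sequence of dominant pitch bins, then keep the short runs that recur in every sample.

```python
from collections import Counter

def shared_ngrams(sequences, n=3):
    """Return the n-grams of quantized values that appear in every sequence."""
    counts = Counter()
    for seq in sequences:
        grams = {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}  # unique per sample
        counts.update(grams)
    return [g for g, c in counts.items() if c == len(sequences)]

# Toy pitch-bin sequences from three imaginary spectrogram analyses:
a = [56, 56, 60, 63, 60, 56]
b = [40, 56, 60, 63, 41, 41]
c = [56, 60, 63, 63, 60, 56]
print(shared_ngrams([a, b, c]))  # -> [(56, 60, 63)]
```

The same machinery would apply unchanged to sequences of overtone strengths or amplitude deltas - any measurable vector reduced to a discrete sequence.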

(I've also thought that this could be a useful approach to whale song.  The analysis of whale song I've seen has assumed that change in pitch generates a bit of information - but if we were to apply that method to the Hildegard sample I presented in Paris, we'd miss all the meaning.  I would propose an analysis of whale song that considers the periods of silence as a significator, or looks for patterns of changing amplitude at certain subsonic frequencies.)

The paper was really more a musing on the subject.  I also played a recording of a composition; I was starting to think about composition in these generic terms.  Along with a didgeridoo, I used electronics to create other drones and pulses - nothing you would call a beat - repeated surges and upwellings.  We were fortunate to be hosted in the Malina family home: Frank Malina was an engineer at NASA and an artist.  Many of his pieces were paintings with overlapping lights at different levels that moved and rotated to create complex sequences of colour.  Roger Malina commented on how the aesthetic of the music merged well with his father's paintings.

The piece had been called Dream of a Debauched Dodo; I should dig up a cassette from the archive.  At the gentle insistence of Doug - who understood the audience better than I did - I temporarily used the title Music for Didgeridoo and Electronics.

 

 



Fourier, Transduction and Alien Composers.

WhaleEye

One of the principles of Sonicism is:

“A Waveform in one medium holds meaning in a second”

A wave is the movement of energy through a medium.  We are most familiar with 'sound', which is the motion of energy as waves in air.  A wave can be analyzed mathematically through a process called Fourier analysis.  Those results can be visually represented in a spectrum.  A spectrum is an analytical artifact, a visual representation of the results of Fourier analysis.  It is also a tool that allows for intuitive reckoning with the structure of the wave. 

This is a spectral analysis taken from a fragment of whale song. 

WhaleSpectrum



I use Sonic Visualiser, because it is awesome and because it is free. I remember seeing pitch analysis of whale song as an undergrad in a music theory lecture. Later reading of Robert Cogan showed how spectral analysis could be used for musical thinking.

The Y-axis of a spectrum (whether of music or any other wave) represents frequency - think of it as the keys on a piano, with 'up' being 'higher'.  The width of these frequency buckets is an important consideration.  On a piano, each key represents a half step.  There are obviously an infinite number of smaller steps between each recognized pitch, and the same is true of the frequency buckets used for Fourier analysis.

[Now is not the time to get into details about the well-tempered systems, plural, used on the piano over the past four centuries.  Suffice to say that the above statement referring to equal half steps is incorrect by omission.]

So we define the granularity of the frequencies that will be represented in the analysis.  A standard default is 1024 buckets across the audible spectrum, roughly considered 20 - 20,000 Hz.  That math is beyond me, but it means that an equal distribution of frequency is assumed - each bucket covers the same-sized slice of the spectrum.
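The arithmetic behind that granularity is simple: the width of each bucket is the sample rate divided by the FFT size, and only the buckets up to half the sample rate (the Nyquist frequency) are meaningful.  A quick sketch, assuming CD-quality audio:

```python
# Frequency resolution of an FFT: the band is split into equal-width bins,
# usable up to the Nyquist frequency (half the sample rate).
sample_rate = 44_100   # Hz, CD-quality audio
fft_size = 1024        # a common power-of-two default

bin_width = sample_rate / fft_size
print(f"{bin_width:.1f} Hz per bin")  # ~43.1 Hz
print(f"{fft_size // 2} usable bins up to {sample_rate // 2} Hz")
```

Note the trade-off this exposes: 43 Hz per bin is coarser than a half step in the bass register but much finer than one in the treble, which is why the bucket width matters so much for musical analysis.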

The X-axis represents time.  Just as we broke down the frequency range into buckets, so we break time into discrete windows.  Each window of time captures the frequencies present in each bucket at that instant.  From this information, our analytical tool paints a pretty picture.

Fourier analysis is based on the insight that a complex wave can be represented as the sum of many sine waves.  A complex form is revealed as a series of discrete waves, each at a particular frequency.  The width of the Y-axis frequency buckets in the spectrum determines how precisely we can isolate each discrete wave.  If those waves were added back together, the peaks and troughs would reinforce or cancel each other, and the original complex waveform would be re-created.
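That decompose-and-reassemble round trip can be demonstrated in a few lines of NumPy (the three component frequencies here are arbitrary choices of mine):

```python
import numpy as np

# A complex wave built from three sine components of different strengths.
rate = 1000  # samples over one second, so rfft bins land exactly on whole Hz
t = np.arange(rate) / rate
complex_wave = (1.0 * np.sin(2 * np.pi * 50 * t)
                + 0.5 * np.sin(2 * np.pi * 120 * t)
                + 0.25 * np.sin(2 * np.pi * 300 * t))

# Fourier analysis isolates each component at its own frequency...
spectrum = np.fft.rfft(complex_wave)
magnitudes = np.abs(spectrum) / (rate / 2)  # normalize so peaks read as amplitudes
peaks = np.flatnonzero(magnitudes > 0.1)
print(peaks)  # -> [ 50 120 300]

# ...and summing the components back (the inverse transform) recovers the original.
recovered = np.fft.irfft(spectrum, n=rate)
print(np.allclose(recovered, complex_wave))  # True
```

The three peaks are the "series of discrete waves"; the inverse transform is the re-addition where peaks and troughs reinforce or cancel.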

Fourier analysis assumes that the wave is unchanging after the first period. Music obviously *does* change over time, as do most complex wave patterns. Fast Fourier Transform (FFT) performs Fourier analysis on each window of time, and then sequences those slices of time together. 

The results of Fourier analysis can be reversed to regenerate the original sound.  They can also be manipulated before re-engineering the sound.  Visual forms can be constructed to create new waveforms.  This is a particularly rich source of compositional inspiration.

Back to Sonicism.  It matters not whether the wave under analysis is taken from sound (energy vibrating in air), from the ocean (energy vibrating in water), or from the sun (energy vibrating in the electromagnetic spectrum).  If the wave can be represented mathematically, it can be engineered in any other medium.

When NASA releases “The Sounds of Jupiter”, we should not imagine a giant booming noise emanating across the solar system.  Well, we should imagine that, because that would be cool.  But what has really happened is a representation of the electromagnetic wave as a sound wave.  This process, moving waves across media, is called Transduction.

There’s a bit of coinage going on there; I use the term very loosely.  A very familiar transducer is the mechanism in the human ear that takes energy waves in air and reforms them as electrical impulses in the brain.  The HiFi speaker does the same thing in reverse: taking the electrical impulses that originate in the piezo mechanism where the needle meets the LP groove, and transferring them into vibrations of the speaker cone, which then activate energy in the air.

If we were to take the direct values of the electromagnetism of the sun and represent them directly as sound, the result would be outside the audible spectrum.  The values need to be mathematically modeled so that they fall within the desired range of the target medium.  This process we call Transposition (a solid musical term).  The maths are complex because an octave is exponential, not linear: always twice the frequency.  So 800 Hz is an octave above 400 Hz (a difference of 400), while 8,000 Hz is an octave above 4,000 Hz (a difference of 4,000).
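Measured as a ratio rather than a difference, both of those pairs are exactly one octave apart - the octave count between two frequencies is the base-2 logarithm of their ratio.  A tiny sketch (the helper function is my own naming):

```python
import math

def octaves_between(f1, f2):
    """Octave distance between two frequencies: a ratio, not a difference."""
    return math.log2(f2 / f1)

print(octaves_between(400, 800))    # 1.0 - one octave apart
print(octaves_between(4000, 8000))  # 1.0 - also one octave, despite a 4,000 Hz gap
print(octaves_between(440, 44000))  # ~6.64 octaves
```

Any transposition done by multiplying or dividing by powers of two leaves these octave distances untouched, which is why the octave survives the jump between media.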

And the octave is a relationship worth preserving in any medium.

Transduction and Transposition give the artist a powerful set of concepts and tools.  I write ‘music’ that is ultimately intended for electromagnetic ‘performance’.  It is not ‘heard’.  It is not really experienced.  The musical forms exist only within the electromagnetic spectrum.  (See notes on Voices of the Noosphere, for radio telescope and didgeridoo, as well as the Keynote Address on Music as Cultural Dialogue, which I presented at UNESCO in 2010.)

What if we create waveforms that would be 'played' within the Earth - a controlled seismic event?  Or establish and control standing waves on the surface of a lake, like an enormous Chinese water bowl?  Light patterns that are the direct corollary to a piece of music - not just a laser show, but the actual transposition of sound to light?  The process is like some geomantic matrix, rich in the capability to psychogeographically manipulate the environment/noosphere.

I imagine this is how we will identify ET culture. Their ‘instruments’ will be galactic, manipulation of gravity waves, structures of electromagnetism, formulation of time.