Inaugural Special Issue Vol 1, No. 1 (Dec 12, 2012)

Thresholds and Fragile States

Citation


Dunn, David. "Thresholds and Fragile States." Moebius Journal 1 (2012): 1. Accessed Aug 21, 2014. http://moebiusjournal.org/pubs/8.


David Dunn
2010-2011 

1) Introduction

Several years ago I spent time living on a houseboat deep in the Atchafalaya Basin of Louisiana. Nights were spent making forays into the swamp to record its night sounds without interruption. One of the most striking features of this sound world was the abrupt transition between distinct collectives of sound makers. One group would hold center stage for hours and then suddenly fade to silence. Within minutes a whole new cast of sonic actors replaced them. The dynamic quality of these dense soundscapes, with their fantastic spatial motion, impressed upon me a sense that—beyond the communicative agenda of individual living sound generators—there was some underlying emergent logic at work driving them into a global patterning. It was as if there were multiple chains of communication linking a fractally-delineated field of interlaced non-linear sources. These communicative chains not only extended outward in all directions but also up and down levels within a potentially infinite array of organizational hierarchies.

No scientific study of such global interactions and feedback between diverse sound makers within an ecological network has ever been attempted. This project is a step towards modeling the assumption of such an underlying emergent dynamical force. The circuits herein described give rise to autonomous sound behaviors that aestheticize mathematical pattern formation and are also a tool for the exploration of dynamics that help weave sounds together in the natural world. In many ways this work is an attempt at understanding pattern formation in natural sound systems. Comparisons and interactions between these natural and artificial systems might shed light on how similar dynamical properties might be operating at their generative levels.

As an artwork, these autonomous circuits are intended either to stand alone as a conceptual entity that embodies dynamical behaviors or to operate as a system of closure that can be structurally coupled to its surrounding auditory world. Under such circumstances, their behavior can be perturbed and constrained by the environment’s behavior, but the circuits should also be understood as a metaphoric machine expression of the autonomy of the living rather than as merely information-processing devices.

The autonomous audio devices articulate an underlying assumption of biological autonomy through a basic design implementation. Two identical analog feedback circuits exist as closed autonomous unities that can be structurally coupled through the simple connection of a shared resistance network. After initial conditions are established through the setting of a few potentiometers, the coupled circuits are allowed to behave autonomously in a self-organizing manner. Any sounds produced by the circuits emerge as a type of “conversation” that is allowed to continuously drift through novel behavioral domains that exhibit repetitive action at a local level but tremendous global diversity over extended time periods. In this sense the circuits resemble the closed nervous systems of living unities that are under constant perturbation from other similar closed nervous systems. The intention is not to simulate the high level functioning of biological organisms and their cognitive capacities but rather to take this question down to its most primary level of autonomous-closure machines where self-organization is more obviously inseparable from behavior.

 

 

Figure 1.

There are several famous examples of autonomous analog computation within the history of electroacoustic music practice, and most likely many more instances that exceed the well-known ones by an exponential factor. The various forays into this area of exploration can largely be divided into two categories: 1) those derived from the “patching” of recursive non-linear feedback networks within the analog synthesis instruments commercially available by the 1970s (Moog, Buchla, ARP, Serge, etc.)—specialized audio versions of the comparable analog computers used by the aerospace industry and by early investigators of computational modeling of non-linear dynamics—and 2) custom-designed circuits or ad hoc recursive constructions made by interlacing independent function generators, filters, mixers, tape-loop feedback, etc. In some ways this is a wonderful example of how similar human knowledge can be constructed from very different sets of cultural assumptions and approaches to phenomenal explanation. In one instance, dynamicists were creating these kinds of systems in order to study their formal properties, while musicians were using similar tools to explore the perceptual attributes (sound) of the same physical phenomena. Just a few of the musicians who have explored conceptual terrain bordering upon these issues are Warren Burt, Sal Martirano, Richard Maxfield, Pauline Oliveros, and David Tudor.

One of the striking characteristics of experimental music traditions of the last few decades has been a concerted effort to invent and implement generative strategies for musical composition, in the sense that the composer is primarily responsible for the generative system’s global design rather than being the primary decision-maker for the specification of constituent details. The familiar “top-down” model of the composer as the organizer of notes on a page that signify meaningful sonic events is replaced by a “bottom-up” model of the composer as systems designer, responsible for the organization of a generative mechanism from which the details of sonic events can emerge. While the majority of such explorations have been concentrated within the dominant use of digital algorithmic tools and materials, my use of the term “generative mechanism” must also be inclusive of other means and resources that are not constrained by technological innovation per se. This is also true for what has now become a minority area of music technology investigation: the unique qualities and attributes of analog circuit design. Most such research has by now become merely prosaic in the sense that it largely concentrates upon improvement in the design of basic tools—amplification, mixing, and signal routing—where optimization of their utility is of value. Innovation in questions of structural form, sound synthesis and generation, signal processing, spatialization, and complex organization is almost exclusively the domain of digital music research (see Lansky, The Importance of Being Digital,1 for an explication of this idea). Assuming the validity of the observation that digital systems seem vastly superior for the serious investigation of these musical frontiers, why would I pursue my investigation using what appear to be—by comparison with “state-of-the-art” resources—largely archaic tools?

The distinction between analog and digital music systems harkens back to one of the most enduring debates of Western philosophy. The nature of digital code requires a level of specificity where the mathematics is, in some sense, a purer model of what we understand about the world but in its most abstract terms. It is a Platonic world where we create an experiential manifestation from the reductionist archetype that must be exact in its numeric representation. Analog circuits can only be manifest as physical entities that are prone to a range of variations dependent upon the distinct properties of their constituent parts. We can describe them mathematically and infer their abstract state as systems but their unique ontological status is as imperfect physical examples of their otherwise ideal potential. They exist as messier Aristotelian things from which we can extrapolate abstract principles. 

Obviously I have posed this distinction between digital and analog systems metaphorically for overt effect. The truth is that either type of system exists in a kind of “chicken or egg” condition that is dependent upon our temporal relationship to how and when we create or explain them. My point here is that we do experience them differently and those differences inform how we use them: with analog circuits it is easier to propagate and sustain meaningful “mistakes” that might lead to novel insights. In the particular instance of the project herein described, my choice has been constrained by the nature of the questions that I am asking. Is it possible to create machine-generated sound behaviors resembling primitive conditions of biological autonomy, and can such machines be physically manifest with imperfections akin to those of living things?

2) Modeling Biological Autonomy Through Sound  

One of the most enduring conundrums in science has been the question of how to define life. There has never been a truly satisfying description that can account for the transition of non-life to life. Its origins remain a mystery, and the tentative status of phenomena such as viruses only further complicates the issue. The reduction of life to certain outstanding properties such as the ability to reproduce seems arbitrary, since many individual living systems cannot. While we certainly know a great deal about many such properties (biochemistry and genetics), and have assembled a vast knowledge base about the specifics of many living organisms (taxonomy and behavior), we still cannot easily answer the question: what is life?

An enduring contribution to this issue was the elegant book by physicist Erwin Schrödinger whose title consists of this very same question (What Is Life?).2 Towards seeking an answer, Schrödinger poses two ideas that have subsequently become essential to science. One was the concept of an “aperiodic crystal” as carrier of genetic information. The other was the principle of “order-from-disorder.” The first idea influenced the discovery of the double-helix molecule of DNA by Crick and Watson and the subsequent revolution of molecular biology, while the second is a fundamental notion in the formulation of such concepts as “dissipative systems” and “negentropy” that are at the heart of complexity science. While Schrödinger poses no definitive answers, he does contextualize the question in an essential 20th century manner by asserting that classical physics is simply incapable of resolving the contradictions inherent in the stability of molecular structures that are necessary to sustain a living organism. 

More recently the biologist Lynn Margulis has argued for a more global vision in seeking answers to this question, in another book of the same title.3 Without eschewing the need to understand essential mechanisms in nature, she takes a non-reductionist approach by asserting that life is matter that is sometimes capable of making decisions which impact not only its environment but also its own evolution.

Another viewpoint is found through a nexus of biology and neuroscience, and it is this perspective that largely informs the project herein represented. The theory of autopoiesis was originally framed by Francisco Varela, Humberto Maturana, and Ricardo Uribe in 1974,4 and systematically summarized by Gail Fleischaker.5 The criteria for defining autopoiesis are specifically meant to apply to the organization of living systems as they are constrained by the laws of thermodynamics. They also support the claim by Maturana, Varela, and Uribe that autopoiesis is an explicit mechanism of identity that characterizes the organization of a living thing and also characterizes the transition of non-life to life. The molecular and organellar components specified by autopoiesis determine all the necessary and sufficient interactions to account for the cell membrane as a system-logical boundary produced by the system’s own internal mechanisms.

It is precisely this property of a self-organizing autonomy that gives special status to living systems and characterizes the theory of autopoiesis. Living systems are defined by their organization rather than by their specific material constituents. Over time, living systems establish their organization through creating and replacing their own components. While the components are subject to change, it is the organization that remains as a stable identity and invariant property of the system as a whole. 

Biology has traditionally framed the study of living systems in representationist terms that stress the interactions and behaviors of a system with its environment through description of their mutual correspondences. The formal language of cybernetics has also couched the description of various dynamical systems in similar terms built upon Information Theory and homeostatic feedback mechanisms that—while largely self-regulating—are understood as allopoietic systems, meaning that they have as their product something different from themselves. Such input/output-based descriptions are highly appropriate for many forms of phenomena and exist in a complementary relationship to the autopoietic organization of living systems defined through their operational closure.  While highly useful in understanding certain network relationships that constrain the perturbations between living systems, allopoietic descriptions tend to ignore certain features of life such as its essential condition of autonomy. 

One of the behavioral domains in which this autonomous status is most critical is that of perception in living organisms. In an allopoietic explanation of perception, the nervous system receives input from its environment and acts upon it as information in order to represent the outer world. In an autopoietic explanation, the nervous system is a closed network in which perception and action are inseparable: “information” is something imposed upon the organism’s environment rather than received from it, and cognition is an attribute of the nervous system’s operational closure.

Another way of describing this more recent view of neuroscience is that perception is predominantly a construction of the brain (and total organism) rather than a direct experience that is decoded. Much of our perception of the world is a “best guess” informed more from memory and past experience than from minimal sensory data. In fact, the signals that come from our bodily sensors are amazingly impoverished perturbations to our living coherence. Our experience and perception of an “external” world is largely an inference. This understanding has profound implications for what we mean by the term cognition. It is less about having knowledge of the world beyond us, and more about how our knowledge of self arises and remains coherent through a structural coupling with that world. From this standpoint, Maturana has stated that the biological emergence of cognition is not necessarily dependent upon the existence of a nervous system per se: "Living systems are cognitive systems and living as a process is a process of cognition." A cognitive system occurs when an “organization defines a domain of interactions in which it can act with relevance to the maintenance of itself."6 

As previously stated, the organization of a living system is maintained by its constantly changing structure in a manner that both creates and conserves its relationship to its environment. It therefore comprises two domains of operation at two different levels: 1) the component level of its physiology, and 2) the organism level of its behavior. The result is a closed and autonomous self-regulating unity (operational closure) that is simultaneously open to its environment (structurally coupled). Living systems have a dual ontological status, being neither fully separate from, nor fully belonging to, their environment.

From the very beginning of the idea’s articulation, Maturana, Varela, and Uribe proposed the positive value of modeling autopoiesis within a computational framework. While the realization of virtual non-living autopoietic unities went on to become one important standard in the field of Artificial Life—demonstrating the potential for exploring the abstract nature of living systems in computational media7—the creation of quantifiable models of the concept has remained both problematic and controversial. 

One important understanding about the difference between living and non-living autopoietic unities can be derived from their differing standards for the use of the term homeostasis. While we may speak of the search for—and explanation of—exclusively mechanistic processes in both circumstances, in cybernetic parlance homeostasis refers to the ability of a system to sustain a stable state of behavior, whereas in a biological context the term is now more often reserved for an autonomous and recursive network that sustains organizational stability through self-maintaining processes. This distinction points to an essential difference between the underlying properties of open networks (machines) and closed ones (living organisms), but it also shows how soft the distinction really is regarding the capacity of machines and organisms to imitate—hence describe—each other.

Despite the contributions that autopoietic explanation has made to the biological and cognitive sciences, the dominant scientific model remains representationist, with “information-processing”—the assumption that an understanding of the phenomenal world can be essentially reduced to an exchange of pieces of information—as the preferred explanatory framework. The power of this approach is undeniable and largely responsible for the current successful state of computer science and related research regimes. While this representationist paradigm of allopoietic description has traditionally been associated with the first generation of cybernetic explanations (most notably Shannon and Weaver) and an engineering bias, autopoietic descriptions have been referred to as a “second-order” cybernetics that emphasizes autonomy, cognition, self-organization, and the role of the observer in the modeling of a system. However, while representationist description of biological unities still forms the dominant paradigm for cognitive science, it seems less complete as we understand more about how living organisms create and sustain their internal worlds. Following Varela’s lead, I assert that autopoietic and information-processing explanations of the world are complementary views rather than antagonistic ones; nor are they each other’s logical opposites. Each can be used to describe both the open and closed mechanisms and networks of machines and living organisms, but the two lead to important differences in perspective and method. Varela also makes a similar point:

“Clearly these two views (input and closure) are not contradictory, but the key is to see that they lead to radically different consequences, and to radically different experimental approaches as well… But what is clear is that in order to study life and cognition, we need to explore the almost entirely unexplored land of autonomous-closure machines, clearly distinct from the classical Cartesian-input machines.”8

Within this range of explanatory appropriateness, it is unusual for autopoietic descriptions to be applied to an understanding of the behavior of machines. The instances of this approach seem rare and therefore destined to be regarded more as metaphoric or philosophical expressions than as scientific explanations. We are so used to the information-processing model as the status quo for thinking about both machines and living nervous systems that—while many observers can understand the merit of applying operational closure to the latter—thinking about machine behavior in this way seems relatively strange. It is within this context of a much less familiar research regime that this current project can be situated and understood as an attempt to create a machine-based expression that straddles artistic and scientific categories of explanation. 

3) Technical Details of the Autonomous Circuits 

While the underlying conceptual approach to the design of these circuits has been that of autopoiesis, it has been my intention that their creation—as an autonomous artifact—serve as an illustration of both metaphoric and mechanistic description. As scientific instruments they may be seen to fulfill these criteria when applied within an ongoing research regime, as part of a mechanism contributing to the potential disorganization of communication through mechanical vibration by certain arthropods. At a second-order level of intention—where they are not applied as a direct research instrument—they may also be regarded as an illustration of pure research into the domain of autonomous analog computation.

Ultimately the project explores the global behavior of hyper-chaotic analog audio circuits. The emergent complexity of these systems results from the dynamical attributes of coupled chaotic attractors interacting in a high-dimensional phase space. The control of circuit parameters determines a range of instabilities and structural couplings between nested chaotic circuits, allowing self-organizing behaviors to emerge. 

The non-linear chaotic oscillators follow the numerical expression given by Sprott:9
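The systems catalogued in that reference are third-order “jerk” flows. A minimal statement of the family, with one representative nonlinearity shown as an assumption rather than as the exact expression realized by the circuit below, is:

```latex
% General jerk form from ref. 9; the particular nonlinearity G(x) is an
% illustrative assumption, not necessarily the one built into Figure 2.
\dddot{x} + A\,\ddot{x} + \dot{x} = G(x),
\qquad \text{for example}\quad G(x) = \pm\bigl(B\,\lvert x \rvert - C\bigr).
```

For this family, choosing the negative branch of the nonlinearity simply mirrors the solutions about the origin without changing their qualitative behavior.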

This is implemented with the negative sign as the following circuit [NCO]. All resistors, except R, are 1k. R is variable.

 

Figure 2.

This circuit yields the following double-scroll attractor:

 

Figure 3.
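A numerical sketch of this class of behavior can be obtained by integrating one member of the jerk family directly. The fragment below assumes the absolute-value nonlinearity with damping A near 0.6, a choice reported as chaotic in ref. 9; it is an illustrative stand-in, not a component-level model of the circuit in Figure 2, and its attractor need not match the double scroll of Figure 3.

```python
# Minimal sketch: integrate a jerk-family oscillator,
#   x''' = -A*x'' - x' + |x| - 1,
# an assumed representative of the family cited in ref. 9 (not a model of
# the specific component values in Figure 2).
import numpy as np

A = 0.6  # damping term; values near 0.6 are reported as chaotic for this family

def jerk(state):
    """Right-hand side of the third-order flow, written as three first-order ODEs."""
    x, v, a = state  # "position", "velocity", "acceleration"
    return np.array([v, a, -A * a - v + abs(x) - 1.0])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = jerk(state)
    k2 = jerk(state + 0.5 * dt * k1)
    k3 = jerk(state + 0.5 * dt * k2)
    k4 = jerk(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt, steps = 0.01, 100_000
state = np.array([0.0, 0.0, 0.0])  # the basin is sensitive; adjust if the run diverges
trajectory = np.empty((steps, 3))
for i in range(steps):
    state = rk4_step(state, dt)
    trajectory[i] = state

settled = trajectory[steps // 10:]  # discard the initial transient
print("x range explored:", settled[:, 0].min(), settled[:, 0].max())
# Plotting settled[:, 0] against settled[:, 1] gives a two-dimensional
# projection of the attractor.
```

A hand-rolled Runge-Kutta step is used only to keep the sketch self-contained; any standard ODE solver would serve equally well.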

 

Sub-circuits that make up the complete system:        

4 nonlinear chaotic oscillators

4 low frequency oscillators

6 low pass filters

4 line preamps

4 line amplifiers

6 multi-channel mixers

1 summing matrix network

4 opto-isolators

2 voltage regulators

 

A flowchart of the two primary autonomous units that are linked through a resistance network as shown in Figure 1:

Figure 4.
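To get a feel for what the shared resistance network contributes dynamically, the sketch below couples two copies of the oscillator from the previous fragment through a term proportional to the difference of their outputs, which is roughly how a resistive link between two voltage nodes enters a circuit’s state equations. The coupling gain k, the point where the term is injected, and the initial conditions are all assumptions standing in for the patching and potentiometer settings, not measured values.

```python
# Sketch: two identical jerk oscillators linked by a resistive-style coupling.
# A resistor between two nodes passes a current proportional to their voltage
# difference, modeled here as k*(x_other - x_self) injected into the highest
# derivative; where the link actually enters depends on how it is patched.
import numpy as np

A, k = 0.6, 0.05  # damping and coupling strength (assumed values)

def coupled_rhs(state):
    x1, v1, a1, x2, v2, a2 = state
    j1 = -A * a1 - v1 + abs(x1) - 1.0 + k * (x2 - x1)  # unit 1 perturbed by unit 2
    j2 = -A * a2 - v2 + abs(x2) - 1.0 + k * (x1 - x2)  # unit 2 perturbed by unit 1
    return np.array([v1, a1, j1, v2, a2, j2])

def rk4_step(state, dt):
    k1 = coupled_rhs(state)
    k2 = coupled_rhs(state + 0.5 * dt * k1)
    k3 = coupled_rhs(state + 0.5 * dt * k2)
    k4 = coupled_rhs(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt, steps = 0.01, 100_000
state = np.array([0.0, 0.0, 0.0, 0.1, 0.0, 0.0])  # slightly different starting points
outputs = np.empty((steps, 2))
for i in range(steps):
    state = rk4_step(state, dt)
    outputs[i] = state[[0, 3]]  # record the two "audio" outputs x1 and x2

# A correlation near +/-1 would mean the two units have locked together;
# intermediate values suggest coupled but still autonomous behavior.
# If the run diverges, reduce k or change the initial conditions.
print("output correlation:", np.corrcoef(outputs[:, 0], outputs[:, 1])[0, 1])
```

Nothing in this toy model captures the component-level drift and imperfection of real analog parts, which is part of what draws me to the physical circuits in the first place.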

View of the complete system:

 

Figure 5.

 

 Layout and patching for the complete system:

Figure 6. 

 

4) Performance Instructions 

While these circuits will continue to behave autonomously for indefinite time periods if left unperturbed, my intention has been to optimize the continuous novelty of such behavior through a coupling to their physical environment (either sensor driven or by performer influence). The following algorithm describes the approach I have taken as a performer, with a schematic sketch in code after the list:

  1.  Determine the initial conditions for the oscillators through settings of the various control potentiometers.
  2.  Change these settings over time so as to exhibit a form of auditory neotaxis (orienting behavior towards the seeking of new behavioral conditions as evidenced through sound).
  3.  Continue to seek out changes that optimize novelty in as many parameters as possible (frequency, amplitude, timbre, rhythm, shape, ratio of sound to silence, transition, global patterning, spatial positioning, etc.).
  4.  Additional signal processing may be added between the signal output of the oscillators and their final amplification. This processing should be minimal and predominantly spectral in nature rather than temporal. Any such processing should not interfere with the generative oscillator behaviors.
  5. Additional control systems can be added to the original interconnecting resistance network between oscillators (potentiometers). These systems should be limited to devices that only change resistance values such as photo-resistors or similar sensors.
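Read as a control loop rather than as stage directions, the procedure can also be stated in code. The sketch below is one hypothetical automation of it: the novelty measure, the parameter names, and the stand-in for listening to the circuits are all assumptions introduced for illustration, not parts of the instrument or of my own practice.

```python
# Hypothetical automation of the performance algorithm above.  The parameter
# names, the spectral-flux novelty score, and the synthetic "listening" stand-in
# are illustrative assumptions, not part of the circuits themselves.
import numpy as np

rng = np.random.default_rng(0)

def spectral_novelty(previous_frame, current_frame):
    """Crude novelty score: positive spectral flux between two audio frames."""
    prev_mag = np.abs(np.fft.rfft(previous_frame))
    curr_mag = np.abs(np.fft.rfft(current_frame))
    return float(np.sum(np.maximum(curr_mag - prev_mag, 0.0)))

def capture_frame(settings, n=2048):
    """Stand-in for listening to the circuits: a synthetic signal shaped by the
    settings so that the loop can run end to end."""
    t = np.arange(n) / n
    tone = np.sin(2 * np.pi * (20 + 200 * settings["pitch"]) * t)
    square = np.sign(np.sin(2 * np.pi * (5 + 50 * settings["coupling"]) * t))
    return settings["level"] * (tone + 0.5 * square) + 0.05 * rng.standard_normal(n)

# Step 1: initial conditions, i.e. potentiometer positions normalized to 0..1.
settings = {"pitch": 0.3, "level": 0.5, "coupling": 0.4}
previous = capture_frame(settings)

# Steps 2-3: drift the settings, keeping changes that raise the novelty score
# (a rough stand-in for "auditory neotaxis").
for step in range(200):
    trial = {name: float(np.clip(value + 0.05 * rng.standard_normal(), 0.0, 1.0))
             for name, value in settings.items()}
    if spectral_novelty(previous, capture_frame(trial)) > \
       spectral_novelty(previous, capture_frame(settings)):
        settings = trial  # orient toward the more novel-sounding state
    previous = capture_frame(settings)

print("settings after the drift:", settings)
```

Steps 4 and 5, the outboard processing and the added resistive control elements, sit outside this loop and remain matters for the hardware and the performer.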

References

  • 1. Lansky, P. The Importance of Being Digital. http://silvertone.princeton.edu/~paul/lansky_beingdigital.htm.
  • 2. Schrödinger, E. 1992. What Is Life? Cambridge, UK: Cambridge University Press.
  • 3. Margulis, L. & Sagan, D. 2000. What Is Life? Berkeley and Los Angeles, CA: University of California Press.
  • 4. Varela, F. J., Maturana, H. R., & Uribe, R. 1974. Autopoiesis: The organization of living systems, its characterization and a model. BioSystems, 5, 187-196.
  • 5. Fleischaker, G. R. 1988. Autopoiesis: System logic and the origin of life. Ph.D. dissertation, Boston University, Boston, MA.
  • 6. Maturana, H. 1970. Neurophysiology of cognition. In Cognition: A multiple view, ed. P. Garvin, 3-23. Washington, DC: Spartan Books.
  • 7. McMullin, B., & Varela, F.J. 1997. Rediscovering computational autopoiesis. In Proceedings of the Fourth European Conference on Artificial Life, eds. P. Husbands & I. Harvey, 38-47. Cambridge, MA: MIT Press.
  • 8. Varela, F.J. 1984. Two Principles for Self-Organization. In Management and Self-organization in Social Systems, ed. G. Probst. New York: Springer-Verlag.
  • 9. Sprott, J. C. 2000. Simple chaotic systems and circuits. American Journal of Physics, 68, 758-763.