This interactive digital art piece is a collaborative installation conceptualized and coded by Jennifer Ware and Kevin Brock using the Arduino Mega and Processing to demonstrate the translational power of computer programming, sound, and electricity. The project was part of an interactive presentation at the Computers and Writing 2011 conference in Ann Arbor, Michigan, and has also been proposed as an interactive sensory art installation for the Contemporary Art Museum (CAM) in Raleigh, NC. The installation comprises a series of physical Arduino boards, breadboards, LEDs, and speakers, along with Processing sketches and a computer interface for textual input, to explore user-to-code interactions that transform text and recognizable language forms into sound and light.
For its presentation at Computers and Writing 2011, the project was modified to include a Twitter API call through which users, during the performance, interacted directly with the installation by sending tweets with a specific hashtag. These tweets were then pulled into Processing and reinterpreted through the Arduino. This interactive performance is an outgrowth of the performative capabilities we proposed for the installation at the Contemporary Art Museum (CAM) in Raleigh, NC.
This project consists of multiple levels of user-to-code interaction through the manipulation and execution of three Arduino sketches that highlight the similarities and differences inherent within a text-based system of communication. By using the sketches in the following order, (1) Anderson to Tone: A Primer, (2) Citation to Tone, and (3) Tones of Discourse, users are first oriented to the capabilities of the Arduino and then asked to participate at deeper systemic levels. Visit the playlist to hear each of the projects described in further detail below:
ANDERSON TO TONE: A PRIMER (1)
This Processing sketch is a musical primer that introduces the concept of Citation to Tone to those unfamiliar with the capabilities of Arduino and Processing. The composition takes a citation from one professor, Daniel Anderson of UNC-Chapel Hill, and musically reproduces it in three forms: MLA, APA, and Chicago style. Within the sketch (the coded materials), users are asked to choose which of the three options to play and can play all three citation styles in any order they choose. After choosing, users are provided with a musical representation as well as a visual on-screen display of the citation. Users are encouraged to play the citations multiple times so that they can trace the similarities and differences between the citation styles in musical form.
By focusing on one citation, a publication from Kairos 15.1, the project artists highlight the similarities among the textual citations by enabling users to hear the citation styles and the musical patterns that emerge from the Arduino.
CITATION TO TONE (2)
Users are asked to input a citation (or other text of their choosing) into the serial monitor of the sketch and are provided with a cascading display of the citation as processed by the Arduino. Simultaneously, the Arduino translates text to sound by following a coded set of tonal values within a 7-octave scale. Together, the cascading display and musical composition enable users to see the letters as they are simultaneously translated into sound. As users continue to input citations, they can reflect on the similarities and differences among the textual inputs and on any musical patterns or tones that emerge. Users can also playfully experiment by inputting shorter phrases and terms in order to train their ears to the sound translation process.
TONES OF DISCOURSE (3)
This sketch affords extensive interaction at the code level of the Arduino project. Users can input citation strings as well as larger portions of academic texts into a specific portion of the code and are also encouraged to experiment with altering the tonal timing of the note values. This level of interaction between the user and the code enables new types of musical compositions to be created and affords the user a participatory and creative exchange with the Arduino hardware. Because not all users will inherently know the specific points within the code that they can alter, a basic instruction manual will be provided next to the project display, along with a sample citation list should users wish to experiment with multiple citations.
Description of 'Tones of Discourse: The Arduino Project'
Our project consists of a collection of experiments with the Arduino microprocessor that serve to demonstrate the translational powers of computer programming, sound, and electricity. Specifically, we have sought to explore how Arduino can translate textual input (for our project, excerpts of academics' citations and critical texts) into forms of sound that are explicitly unrelated to conventional human language. This means that we avoided “text-to-speech” translations and instead, recognizing Saussure's insight regarding the lack of inherent relationship between signifier and signified, experimented with an explicit re-signification of visual characters as arbitrary tonal values within a 7-octave scale. As a result, we uncovered musical patterns, highlighting similarities and differences that emerge from individuals' writing styles and the content of a specific text, effectively providing audiences an opportunity to interact with critical theory in a wildly different way than would normally occur. This project can be located within the fields of contemporary art and the emerging critical study of sensory media. By creating an experience in which digital media criticism and artistic aesthetic converge, we offer audiences a chance to question what exactly occurs when we “hear” (literally or otherwise) critical approaches to aesthetic media. As Caroline Jones argues,
The only way to produce a technoculture of debate at the speed of technological innovation itself is to take up these technologies in the service of aesthetics. Aesthetic contemplation buys us time and space. Aesthetic practices locate how bodies are interacting with technologies at the present moment, and provide a site for questioning those locations.
It is precisely this sort of interaction that intrigues us: how do we relate to the technologies of daily life? What do we see as unfamiliar? What happens when we are forced to experience a mundane quality (in this case, written language) as incomprehensible bursts of sound? Without easily located contexts through which to interpret and situate this new sensory experience, we are left in a position that demands an unpacking of time and space so as to more fully understand what, exactly, we are doing in response to the sensation.
The use of computer code (the Arduino programming language) to enable these translational efforts situates our project alongside the Oulipo and its desire to explore the potential power of literary and linguistic combination and construction. Just as the Oulipo provides readers with a set of tools (and hints at many more still unrecognized) with which to anticipate new methods of writing and rewriting, so do we provide readers with a set of tools to anticipate new methods of multimodal communication and interaction. Marcel Bénabou notes that “[o]ne must first admit that language may be treated as an object in itself, considered in its materiality, and thus freed from its subservience to its significatory obligation.” This treatment marks no boundary of communication; it is instead simply the acknowledgment of another mode available to us, one that has been liberated, in a sense, from traditional constraints of what language “does” when we experience it in action.
By treating the textual input as a freed object, and thus as transferable to other modes of communication, we are able to “see” what language “does” when it is transcribed through different structures and other forms of language, particularly programming and code. This transference into other modes provides insights into how we can understand the unfamiliarity within new technologies through the search for similarities and differences with familiar methods of communication. In addition to focusing on the sensory-level engagement of the Arduino (the tonal output values generated from the textual input), we also explore the structure of our code as a way to connect our project to traditional English and Communication disciplines in a new and inventive way.
John Cayley argues that “because code has its own structures, vocabularies and syntaxes; because it functions, typically without being observed,” language and code must therefore be viewed as having distinct strategies of reading. Through our exploration of writing code for Arduino to change what language “does,” we question the separate and distinct strategies that Cayley describes and instead draw out the commonalities between the structure of our code and the structure of argumentation found within the writing styles highlighted in first-year composition and public speaking courses. If we use our understanding of the rhetorical situation, as utilized when composing a speech or writing a paper, as a type of transparency overlay through which to explore the nature of code, several similarities become visible. We can use these similarities to demonstrate the value of programming and code to scholars who do not work with new media.
By considering the Arduino as the audience for our code composition, we are able to observe noticeable commonalities between the structure and form of code writing and those of written composition. For example, within our code composition we present an argument to the Arduino in which we must first define and situate our goals within the constraints of the programming system. Our introduction, similar to an introduction within a paper or speech, seeks to orient the audience to the direction we are about to go. Within our code, the introduction can be seen as the definition of all the variables within the sketch that the Arduino needs in order to understand the entire argument. A portion of our “introduction” can be found below:
int soundpin = 6;   // speaker output pin
int button = 13;    // pushbutton input pin
int white = 10;     // LED pins, one per color
int red = 7;
int orange = 5;
int yellow = 9;
int green = 4;
int blue = 8;
Next, as we present evidence to the Arduino to strengthen our argument and provoke a particular response, we must do so in a specific manner so that the evidence makes logical sense to the audience. Just as writers of papers and speeches must creatively arrange their evidence in particular ways to strengthen the argument, we as code composers are aware of how the constraints of the programming language enable a variety of ways to structure the evidence presented to the Arduino. While the Arduino cannot “choose” whether or not to respond in a given way to our code argument, we do have multiple means of making our persuasive case.
Figure 1 – Evidence presented in two forms.
Realizing that different arrangements of evidence can provoke new responses within a speech or text, we note that the order in which evidence is presented to the Arduino likewise shapes its response. For example, if we write code imploring the Arduino not to act upon certain evidence until after a button has been pressed, we must structure our argument so that the Arduino “hears” our button code first. If, however, we present the button evidence later within our composition, we can use that evidence to provoke a different response, such as pausing in the middle of a process or returning to the introduction of the composition to reinforce and replay evidence presented earlier.
As a final example, we consider how the evidence within our code composition can be altered to create a variety of audience reactions. For this project we have framed our composition as constrained to a 7-octave scale with specific tonal length values. We have provided the Arduino with evidence that enacts specific reactions for each letter, number, and many other characters, programming logical outputs for each so that the Arduino produces a specific response based upon the input characters. However, those values, the tonal lengths, and even the 7-octave scale can be altered, which will then produce unique reactions and output values from the Arduino. Recognizing how changing specific evidence or values can produce new, inventive outputs from the Arduino reminds us that evidence and structures within writing composition courses can likewise be combined and altered to create completely different arguments, arrangements, and responses.
Figure 2 - Case ‘a’ in two forms with tone and tonal length change
This presentation of commonalities between code compositions and writing within English and Communication disciplines is not meant to be exhaustive; it is offered as a means through which a set of similar tools can be recognized across multiple fields. Our project offers creative ways to think about the canon of invention and how styles of argumentation and evidence can be translated into the work that programmers perform when they create code compositions for technological audiences. By establishing how our project fits both within the field of new media criticism and alongside established traditions within English and Communication, this project will enable both new media scholars and those who do not work with new media to locate value within code and sensory media compositions.
About the collaborators:
Jennifer Ware is a Mellon Postdoctoral Fellow at the University of North Carolina at Chapel Hill. Kevin Brock is a PhD Candidate at North Carolina State University in the Communication, Rhetoric, and Digital Media Program.
Jones, Caroline. “Introduction.” Sensorium: Embodied Experience, Technology, and Contemporary Art. Cambridge, MA: MIT Press, 2006. 2.
Bénabou, Marcel. “Rule and Constraint.” Oulipo: A Primer of Potential Literature. Trans. and ed. Warren Motte. Champaign, IL: Dalkey Archive, 2007. 41.
Cayley, John. “The Code is not the Text (unless it is the Text).” Electronic Book Review. electronicbookreview.com/thread/electropoetics/literal, 10 Sept. 2002. Web. 21 Oct. 2010.