On using tools made by comrades

Notes and memoirs on using digital poetry tools engineered by other digital poets.

Funkhouser using Tisselli's MIDIPoet tool

With regard to digital tools, there are makers and there are users. As an artist, I feel lucky to be living at a time when artists who make terrific tools also make them available for others to use. Without any conscious (premeditated) intention to do so, I began a period of intense creative “collaboration” involving software programs made by other digital authors (Jim Andrews, Charles O. Hartman, Eugenio Tisselli, Andrew Klobucar/David Ayre) in early 2009. These poetry-oriented tools have been part of my practice in various ways since.

Results, from my perspective, have been extremely positive. Does writing about the experience of using these tools have value? I do so to provide compositional examples to the larger community, and to show how working with such tools opens a range of possibilities for the asynchronous, if not autonomous, collaboration people are capable of across the network, despite physical separation. The works described below are successful experiments whose outcomes surpassed my expectations of what would be achieved. In all of them personal filters are present; in most, so is direct artistic/authorial input. In two examples, output made with these tools is visually combined with other experiments (layered as montages with other animations) in performance settings.

dbCinema

I became aware of Jim Andrews’ dbCinema around the time of E-Poetry 2007, where I heard his fascinating description of the program. When I tried to find it online, however, I discovered that the version then available, while interesting, could not compare with what the full version Andrews described was capable of doing. As I began to write about his work for a new monograph on digital poetry, I approached him and asked if he could provide a copy, which he gladly did. In February 2009, while preparing a presentation titled “Poems of the Web, by the Web, for the Web: The Google synthesizers” for a conference at Stevens Institute of Technology (Science, Technology, and the Humanities: A New Synthesis), I began to study the program closely. Because of its cannibalistic attributes (acquiring visual content via Google, Yahoo, or other sources, and enabling one image to dissolve into, or appear to devour, another), dbCinema fits into a theoretical framework I (and now others, e.g., Roberto Simanowski) have cultivated for digital poetry in recent years. I therefore decided to discuss my interest in the program further in a presentation at E-Poetry in May 2009 (“Creative Cannibalism Remix: Authors & Network as Banquet”), and, since I like the program so much, also decided to make it part of my performance at the same festival.

This is where the real fun and challenge began. Thanks to Andrews, who created an online tutorial describing the functions of the program and made numerous sample files available, I was able to understand some of the things dbCinema could and could not do. I then decided which animation effects appealed to me most and learned how to adjust the font, color, and text schemes, as well as the geometry and speed of presentation. While doing so, I had to keep in mind what would best coincide with the other layer of animation with which the output of dbCinema would be juxtaposed during performance: an anagrammatic “text-movie” of mine featuring a pair of kinetic anagrammatic poems (“Barcelona Dreaming”, “Brewing Luminous”), accompanied by a soundtrack made with processed samples of Cecil Taylor’s music.

In dbCinema, one creates (invents) “brushes”, which can be temporally sequenced using “playlists” (the output of two brushes can also appear simultaneously, through proper manipulation of the playlist, which adds visual complexity). For the show at E-Poetry, Andrews provided me with a set of brushes, three of which I used exactly as he made them (“barcelona1”, “barcelona_history”, “antonio_machado”), and three of which I adapted to the thematics and aesthetics I wanted to inscribe through dbCinema in the performance (“luminous_epoetry”, “cannibalism1”, “dreaming”). Each of these brushes used the “epicycloid” geometry feature, which presents moving text/image in a circular pattern on the screen; this worked well in conjunction with the horizontal and vertical orientation of the text presented in the second animation (see Figs. 1 & 2 below). I sequenced the brushes, using one of them twice (juxtaposed with another), into a fifteen-minute continuous display. During the show, the two animations were simultaneously projected as a montage, accompanied by a reading of poems inspired by my participation in the Flarf Collective. I repeated this performance later in the summer during a set at the WordXWord Festival in Pittsfield, MA, where I changed the spoken content (reading text from You are, therefore I am) and also (sparingly) accompanied the words with notes played on a one-string Appalachian folk instrument known as a canjo.
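
For readers curious about the geometry, here is a minimal sketch in Python of how an epicycloid path can be computed and used to stamp words along a curve. This is my own illustration of the underlying math, not dbCinema’s code (Andrews built the program in Director); the brush-like loop at the end is a loose analogy.

```python
import math

def epicycloid_points(R, r, steps=360):
    """Trace the path of a point on a circle of radius r rolling
    around the outside of a fixed circle of radius R."""
    points = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x = (R + r) * math.cos(t) - r * math.cos((R + r) / r * t)
        y = (R + r) * math.sin(t) - r * math.sin((R + r) / r * t)
        points.append((x, y))
    return points

# A brush-like loop: stamp successive words at successive points
# along the curve (coordinates only; no rendering attempted here).
words = "Barcelona Dreaming Brewing Luminous".split()
path = epicycloid_points(R=120, r=40)
step = len(path) // len(words)
for word, (x, y) in zip(words, path[::step]):
    print(f"stamp {word!r} at ({x:.0f}, {y:.0f})")
```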

The second iteration of my work with dbCinema appeared at a conference called “The Network as a Space and Medium for Collaborative Interdisciplinary Art Practice”, organized by Scott Rettberg at the University of Bergen. My approach to using the program was much the same as before, though I gave the work the title “Psychographic Poetry”. I created a new animation to juxtapose with dbCinema for the occasion (“Norway Delicate”), and read from a Twitter-based manuscript called “Broken Bone Mind”. In Bergen, most of the dbCinema parameters were changed; the brushes I created, partially modeled on the previous set, were titled “delicate-poetry”, “bergen1”, “bergen_history”, “lightly”, and “espenaarseth”. Since I was presenting a new paper about cannibalism at Bergen, and discussing Andrews’ program, I also used the “cannibalism1” brush in this mix. As at E-Poetry, each brush configured verbal and visual information in a circular pattern. Some of the relations in the brushes are straightforward, others not. “bergen1” matches the name of the city with results of an image search on the city; “bergen_history” matches the word “history” with images of “Bergen history”; “cannibalism1” displays an animation of the word and images culled from a search on the word. To add poetic intent, however, “delicate-poetry” matches the word “poetry” with the image results for “delicate”; “lightly” matches “lightly” with images tagged with the word “dreaming”; and “espenaarseth” matches the word “cybertext” with images of Espen Aarseth. Full documentation of this performance is online at http://vimeo.com/7870645.

A month later, as “Digital Poet in Residence”, I gave a short performance at the Bowery Poetry Club as part of a show called “The Cartesian MathArt Hive”. I read poems and interspersed canjo playing while a pre-programmed anagrammatic animation composed for the occasion (using the other artists’ names) was projected and dbCinema worked its magic with brushes titled “cartesian” (which searched for “Bowery Poetry” images and animated the word “poem”) and “mathart” (hive images, “mathart” as text). The visuals and content of the “cartesian” brush were unlike anything else I ever presented with the program in a performance. This brush emulated something I had seen done by mIEKAL aND in one of the sample brushes provided by Andrews. Instead of using the epicycloid function, I used the “random” geometry setting, which has the effect of “rubber stamping” the text in various places across the screen, a technique that adds overtly impressionistic qualities to the presentation of imagery (see Fig. 3 below). The text in this brush was also different from anything I had used before: in every other instance, I placed a single word in (on) the brush; here I pasted in an entire poem composed of anagrams of the phrase “Cartesian MathArt Hive”, which gradually appears as the brush runs its course.

During my last show with dbCinema, at the In(ter)ventions gathering at Banff in February 2010, I used the program more strategically. Instead of running the streamed, kinetic images throughout the entire performance, I selectively positioned segments containing this layer of imagery on the screen. I made two brushes, “banff_br” (Banff images, Banff text) and “inter_br” (“intervent” images, “in(ter)ventions” text), which each appear once on their own and are shown together for the final two and a half minutes of the set. Throughout the performance I sang a digital poem I had tuned, “Candelabra Abaft Fan (A Banana Flab Crafted)”, while playing the canjo. One interesting thing I noticed on this occasion (I had not noticed it in Spain or Norway) was that when I ran the program in Canada, the images that came to the screen via the network were completely different from those I had been getting at home (New Jersey) throughout the rehearsal and preparation phase. This did not matter, but it surprised me and was a reminder that the information delivered to one group of people can have an entirely different hierarchy than that delivered to another.

Andrews’ program is unquestionably capable of creating incredibly beautiful images in many significantly different ways. The virtual paintings made by dbCinema happened to match the aesthetics of my animations well, and the twinned hypnotics of the two joined well together. While I no longer use dbCinema onstage, I sometimes just turn the program on, enter some parameters, and marvel at what it can do—and what can be done with it. The good news is that a very good online version of the program, as well as a tutorial on how to use it, is now available at http://www.vispo.com/dbcinema/sw/index.htm, so anybody online has access to a recent version of the tool.

PyProse

I have been a student of Charles O. Hartman’s work as a poet, programmer, and critic for the past fifteen years. In the mid-90s I had a copy of Prose, one of the programs discussed in a chapter of his 1996 book Virtual Muse: Experiments in Computer Poetry, and I discussed several of his experiments in my dissertation (and later in Prehistoric Digital Poetry). The purpose of Prose was to create an endless series of varied, syntactically correct sentences.

Long story made short: on March 20, 2009, I received an email from John Cayley encouraging my participation in a “collaboration merging performance and digital literary practice” that would transpire that evening as part of a performance of Judd Morrissey and Mark Jeffery’s The Precession (@theprecession) in Providence. Essentially, it was an invitation to write (on Twitter) in response to texts being projected while the performance was happening. I was drawn in by the script Morrissey had written, which recycled and fused previously posted texts, and I contributed a few lines. This was my introduction to Twitter, which I had never used before.

I wanted to do one thing with this experience: explore whether (or how) such a popular “social networking” tool could be used by poets (digital or not). After a few days of pasting in obscure lines of poetry, or phrases I had picked up on Flarf, I came upon the idea of using Hartman’s updated program PyProse (Prose re-engineered in Python) in this capacity, to present randomly generated lines that were, conceivably, poetic. Since I am of a school believing that generated output is mutable, I could also use the project to test Hartman’s idea (stated in Virtual Muse) that his generator “could be treated as a first draft writer”. I started using the program every morning (as a sort of brain-gym kick start) to produce lines that were then fashioned via close editing—often to conform to the platform’s constraint—into Twitter posts.

The ongoing, cumulative text can be read at http://twitter.com/ctfunkhouser. In addition to other benefits, the program brings me, out of the blue, words I like but never use, as well as strange modes of logic, which I then, in a disciplined way, force myself to respond to and shape into meaningful expression (or at least something that is interesting and sounds good). This work is automatically generated, then filtered through my mind and sensibilities. At a certain point, while preparing for the WordXWord festival, I realized I was on my way to creating a formidable manuscript (with a central theme of struggle), was enjoying the process of doing so, and gave the project a name (borrowed from a phrase in a Buddhist text I was reading): You are, therefore I am. Within the project there are some distinct sections, such as “Broken Bone Mind” (mentioned above) and Back Up State (a chapbook I produced while on retreat at Art Omi; see http://web.njit.edu/~funkhous/backup_pdf.pdf).

PyProse is open source—at one point I made some minor adjustments to the code (having to do with the number of times it produces questions). My daily practice in using the program has changed in only one significant way. In December, after roughly eight months of posting, I altered the way I chose the texts delivered by PyProse. Instead of looking for shorter sentences that flow well together, I decided to use only sentences the program generated that surpass (in length) the 140-character Twitter constraint, so that every post would be one line. What follows are some examples of the degree to which some of the output needs to be altered, illustrating how I am given the opportunity to tune and really shape the expression. On May 16 (2010), I edited the generated line, “A knife (the good emphasis) was its cloud between a suggestion and the tournament; and after we were these escapes along the rank, to swerve was so colonial a street, and the volumes' telegraph between a realm and the vehicle (the insisting hall) volunteered someone” into, “Nice cloud suggestion: after escapes along rank, swerving so colonial a street, volumes' telegraph realm, vehicle (insisting hall) volunteer”; on June 9, “A vise was their screen between this city and the fool; before its condition was tension, so brief an island (her feature) may celebrate, and the fault between the policeman and the disaster similarly rolled” became: “Vise as screen, city & fool; before its condition tension, so brief an island may celebrate, & fault, policeman & disaster similarly roll...”. At this point, in the manner I am using the program, the editing process is far from trivial, although it may have been more nearly so when the project first started. As a point of comparison, in the summer of 2009 a student of mine used Python to make an app that automatically posts a line of PyProse output to Twitter (see http://twitter.com/autoprose); while I certainly appreciated the project, the quality of creative expression it presented did not poetically amount to what I was able to achieve through significant personal interaction with the program’s output.
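
As a concrete sketch of the selection rule just described, here is how the length filter might look in Python. The make_sentence stub below is a hypothetical stand-in for PyProse’s actual generator (Hartman’s real code builds sentences from a grammar, and its function names differ); only the filtering logic is the point.

```python
import random

# Hypothetical stand-in for PyProse's generator; the real program
# assembles grammatical sentences, while this stub merely returns
# strings of varying length so the filter can be demonstrated.
PHRASES = ("a knife", "was its cloud", "between a suggestion",
           "and the tournament", "along the rank", "so colonial a street",
           "the volumes' telegraph", "the insisting hall")

def make_sentence() -> str:
    return " ".join(random.choices(PHRASES, k=random.randint(5, 25))).capitalize() + "."

def overlong_candidates(n: int, limit: int = 140):
    """Yield n generated sentences that exceed the Twitter limit,
    i.e., drafts that must be edited down into one-line posts."""
    produced = 0
    while produced < n:
        sentence = make_sentence()
        if len(sentence) > limit:
            produced += 1
            yield sentence

for draft in overlong_candidates(3):
    print(len(draft), draft)  # raw drafts, to be condensed by hand
```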

I love this project because I came to use the platform in a unique way. The kinds of statements made in my “tweets” are unlike most anything else that appears on this network. To those who follow me via Twitter (or via Facebook, where all my tweets have been directed through an app I installed in September 2009), I offer these odd daily, sometimes mind-twisting utterances, which often look like normal sentences but are not. At present I am still working out the details of the significance of this project, which I am exploring thoroughly in a manuscript dedicated to my various activities on Twitter (which also include pedagogy and “normal” use). Whether or not an orchestrated or predominant theme emerges from the accumulated content, the discipline and challenge of my practice in this area has been worthwhile. I know this because no matter how many times I have tried to end the project, it persists. Critically and creatively speaking, I will value seeing future experiments by poets engaging with generators as first-draft writers (and making something new with them). PyProse can be downloaded from the Web via http://cherry.conncoll.edu/cohar/Programs.htm.

MIDIPoet

Though it was released ten years ago, I learned about this program only in 2008, when its creator Eugenio Tisselli and I participated in the same literary arts festival at Brown University (Interrupt). At a performance at AS220 in Providence, Tisselli used his program to propel a digital poetry performance (featuring graphics, text, and gesture) with a mobile phone, an approach to presentation he also used at E-Poetry 2009. Having known about MIDI-based art since the mid-90s, when friends of mine studying with George Lewis at RPI’s iEAR program were coordinating sound and video through MIDI, I was intrigued that a digital poet had engineered such a tool. I had used MIDI only once before, when an audio engineer who produced some recordings of mine at Multimedia University programmed a MIDI keyboard to collect samples of my voice reading lines and made a sound poem with them (http://web.njit.edu/~funkhous/selections_2.0/content/mmu.html). Tisselli’s presentation at Interrupt planted a seed in my mind, a potential direction to explore on my own at a future date. In 2010, in part due to stagnation I felt occurring in my own work (coupled with the mediocrity I sensed in many—but by no means all—digital performances), that date finally arrived.

Deciding to use MIDI was easy; actually figuring out how to do it was not. Before beginning to make work I had to determine not only what instrument to use, but also how to deliver sound signals effectively into the computer/software to produce results. I already had an external sound card to connect to a laptop in my studio, but had to find a way to convert sound to MIDI. My options for readily available audio sources were voice, electric guitar, bass guitar, and canjo. Over the course of several weeks, I consulted about the matter with a few people, including a local guitar technician, one of my friends and collaborators who was at RPI (James Keepnews), several experts at the Sam Ash store in New York, Tisselli, and (of course) resources on the Internet. Due to technical complexities, voice and canjo were eliminated as possibilities for the time being, and since I came across an inexpensive pitch-to-MIDI converter made for the bass available on the Web, I decided to try it out. I had my bass repaired, and ordered the hardware.

MIDIPoet contains two components, Player and Composer. As you might expect, works are made in Composer and externalized through the Player. The first time everything was hooked up, on April 21st, I could not get MIDIPoet to work at all, but I could tell, using my regular audio software (Ableton Live), that the MIDI hardware (with the bass as input mechanism) was working. With some help from Tisselli I figured out the problem (assigning the proper input port in the Player) and was then ready to go. Fortunately, during the time spent solving the tech/hardware issues, I came up with an idea of what I wanted to do to begin with, collected samples and lines of poetry, composed some basslines, and formulated the animation I would use in conjunction with the MIDIPoet output. I sent five lines of sample text to Tisselli, who, astutely, sent me a generic MIDIPoet (.mip) file and, without further instruction as to how to use the program, advised me to start tinkering with it. Having this type of input, encouragement, and support from a peer—as I had with Andrews—was invaluable. Within a day I had finished configuring the MIDIPoet aspect of what I presented at the Electronic Literature Organization festival in early June, which involved solving issues of harmonics (string vibrations from the bass) and text placement (then all I had to do was practice the formidable task of singing and playing at the same time). Having a grip on how the program worked, I naturally wanted to explore more—and still do. Fairly quickly, I came up with some ideas for MIDI poetry bass formulations, and on May 4 I had a studio visit with a journalist who had asked to watch me create a digital poem. Feeling confident enough, I used the occasion to prepare my second piece, later adapted for presentation at ELO. That same week, online dialogs with Tisselli led to another new piece.

While all of this was going on, I arranged to have a dress-rehearsal performance in New York, concurrent with a visit by Lucio Agra (from São Paulo). Lucio and I shared a noontime bill at the Bowery Poetry Club on May 22. Thanks to a productive initial week with MIDIPoet (plus a couple of weeks to rehearse), I was ready to test the performance waters with these new tools. I did, with mixed success. Just before my set started, one of the two projectors malfunctioned, so the show was bound to imperfection. I mainly learned that it would be better (for my voice and for performance values) if I stood rather than sat while playing. This brings me to my presentation at ELO, where I did three pieces (all rooted in files configured during my first week of involvement and subsequently refined). In the first, I took advantage of the harmonics, which to my ear and eye create a kind of “rolling thunder” of sight and sound. One of the happy accidents in using MIDIPoet was learning that with the simple adjustment of a single knob on my sound card I could also make the bass sound like a broken (distorted) piano. I exploit this function in the first two pieces presented at ELO (see http://www.youtube.com/ctfunkhouser#p/a/u/1/L5kffUObtN0). In the song I sang at ELO, MIDIPoet randomly selects and places on the screen one of the lines stored in a database each time a note is played on the bass (seen as yellow text in the figure below). The primary ingredients of the projections are described alongside documentation of the performance (at http://www.youtube.com/ctfunkhouser#p/a/u/0/t9PkkqOzCf4), but I should note here that all of the lyrics for the second verse of the song stemmed from PyProse output. I syllabically analyzed the lyrics from the 1991 thelemonade version of the song (verse one) and selectively arranged poetic phrases or samples generated by Hartman’s program to fit the rhythm of the music, which was driven by the looped guitar sample from the original recording. Also worth noting is that the linear Flash animation (text-movie) that appears as part of the visual montage was made from the file of the Flash movie that was a component of the show at Banff. Since the phrases “Banff Alberta” and “Providence RI” contain the same number of input letters, I did a 1:1 repurposing of the file, into which I also inserted about 130 candidate words—which flash on the screen at various points—and changed its color scheme and speed (framerate). Since I had to spend so much time learning, troubleshooting, and refining the MIDI component, it was good that I only had to spend a couple of days working on the Flash aspect.
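
The note-triggered text behavior described above can be sketched outside MIDIPoet as well. Below is a minimal Python illustration of the mechanism (each incoming MIDI note pulls a random line from a small database), written with the third-party mido library as an assumption of convenience; MIDIPoet itself is a standalone Windows program and works differently internally.

```python
import random
import mido  # third-party: pip install mido python-rtmidi

# A small "database" of lines, standing in for the text stored
# in a MIDIPoet piece; here, phrases drawn from this article.
LINES = [
    "rolling thunder of sight and sound",
    "a broken (distorted) piano",
    "vise as screen, city & fool",
    "nice cloud suggestion",
]

# Open the first available MIDI input, e.g. a pitch-to-MIDI
# converter tracking a bass guitar. Port names vary by system.
port_name = mido.get_input_names()[0]
with mido.open_input(port_name) as port:
    for msg in port:
        # Each plucked note arrives as a note_on message; respond
        # by selecting and "placing" a random line, analogous to
        # what the ELO piece described above does on screen.
        if msg.type == "note_on" and msg.velocity > 0:
            print(f"note {msg.note} -> {random.choice(LINES)}")
```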

I was surprised to learn from Tisselli that few other artists (namely Mitos Colom and a VJ collective called Telenoika) have seriously engaged with MIDIPoet. The concept of using a musical/sound interface to push digital poetry into sensible new aesthetic realms seems important to me, and I look forward to seeing the results of more experimentation in this area by others. To find out more about MIDIPoet, and to download a copy, go to http://www.motorhueso.net/midipeng/.

GTR Language Workbench

I did a few experiments in 2008 using Andrew Klobucar and David Ayre’s program GTR Language Workbench, which I learned about from Klobucar when he became a colleague at NJIT. One of the pieces was a mashup of Young-Hae Chang Heavy Industries’ DAKOTA, which I performed (by candlelight) that year at Interrupt. In early 2010, I again used the program at, and immediately following, my trip to Banff (in part motivated by the fact that Klobucar developed the program while in residence there).

In January and February Maria Damon and I wrote several collaborative poems, which sometimes began with a series of edited lines I had generated with PyProse. Some of these collaborations, particularly after she and I began to write with Brian Ang at Banff, made use of the Workbench. Specifically, I used two of the processors embedded in the program’s Substitution menu: “Instantiate” and “Mad lib / S + 7”. These processors enabled me to automatically reconfigure input text. Aside from the expectable (wonderful) Oulipian results, I made a discovery while using the “Instantiate” mechanism. The interesting feature of this processor is that it lets you replace a certain proportion of the nouns, verbs, adjectives, and adverbs of one text with those of another. The program (according to its creators, with whom I checked) expects these variables to add up to 100%. What I ended up doing at a certain point, by accident, was ask the program to perform at a rate of 200%, which resulted in the output of some seriously broken texts featuring many stray stand-alone characters and letters. Most of the time this type of output was not very interesting, although one piece, based on an essay by Charles Bernstein (blended with a text created by Damon, Ang, and me), turned out nicely precisely because the stray letters and numbers became agents (or subjects) within the narrative of the poem (see http://web.njit.edu/~funkhous/Damon_Walk.pdf).
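
To give a rough sense of the kind of substitution “Instantiate” performs, here is my own approximation in Python using NLTK part-of-speech tagging. This is an illustration of the general technique, not Klobucar and Ayre’s code, and the function name is mine.

```python
import random
import nltk  # first run: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

def instantiate(target: str, source: str, rate: float = 0.5,
                pos_prefix: str = "NN") -> str:
    """Replace a fraction (rate) of the target text's words of a given
    part of speech (default: nouns) with same-class words from source."""
    source_pool = [w for w, tag in nltk.pos_tag(nltk.word_tokenize(source))
                   if tag.startswith(pos_prefix)]
    out = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(target)):
        if tag.startswith(pos_prefix) and source_pool and random.random() < rate:
            word = random.choice(source_pool)
        out.append(word)
    return " ".join(out)

print(instantiate("The poet placed a word on the screen.",
                  "History devours images of the network.", rate=1.0))
# Note: rates above 1.0 change nothing in this sketch, whereas the
# Workbench's (accidental) 200% setting produced the broken texts
# described above.
```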

As is also the case with dbCinema and MIDIPoet, the versatility and utility of the GTR Language Workbench are hardly captured in my brief recounting of the experience of using it above. I highly recommend that anyone interested in text processors investigate the program. We may not find ourselves (our “voices”) in its output, but in the mechanically driven finding of something else, discoveries and new meanings can be made. For more information about this project, see http://www.gtrlabs.org/projects/workbench; to download a copy of the program, go to http://www.gtrlabs.org/temp/install.exe.

Use of other programs

In recent months I have also used Wordle to make a series of visual renderings of poems and texts collaboratively composed with Maria Damon (in which, as noted above, PyProse also plays a role). These are available via Flickr at http://www.flickr.com/photos/the_funks/sets/72157623219549807/. Tweetcloud was used to create a couple of poems I read at the MathArt event. For the base texts of the anagrammatic poems featured in my text-movies, I use the Internet Anagram Server (http://wordsmith.org/anagram/index.html); all of the text-movie animations I have made since 2007 can be seen at http://web.njit.edu/~funkhous/t-m10.html. I use Google, of course, to compose Flarf, and Adobe Flash as a multimedia synthesizer, finding it very supportive of my anagrammatic constructions—it makes it easy for me to set letters up on a timeline and plot their actions, arrangements, and movement in conjunction with sonic and visual elements.
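
For anyone assembling anagram-based texts by hand or from the Anagram Server, verifying candidates is simple to automate. A minimal Python check (my own helper, not part of any of the tools above) looks like this:

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    """True if two phrases use exactly the same letters,
    ignoring case, spaces, and punctuation."""
    def letters(s: str) -> Counter:
        return Counter(ch.lower() for ch in s if ch.isalpha())
    return letters(a) == letters(b)

# The paired titles from the Banff piece check out:
print(is_anagram("Candelabra Abaft Fan", "A Banana Flab Crafted"))  # True
```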

Endnote

In 1986 I asked Ed Sanders about his vision of the music of the future. His answer is interesting, and speaks directly to a consideration of value to digital poets working in tandem, with each other’s tools (as opposed to those provided by corporations):

It has to be, in electronics, the equivalent of the piano forte. That is, right around the time of Bach they were creating this new kind of piano, which was an outgrowth of the harpsichord, that allowed its player to be infinitely more expressive, using the pedals and playing softly and loud—it enabled the concept of the concerto to arise, where the piano was an actually powerful instrument that could act in concerto with other instruments. So what's going to happen now...is the electronic equivalent of the piano forte. That is, there is going to arise a musical instrument sufficient for a new Beethoven, and it will be an electronic instrument. It will have, obviously, many aspects of the modern electronic recording studio and modern high-end synthesizer. I envision it like a giant church organ only instead of stops it will have fifteen or twenty thousand little buttons or knobs & x-y pads & pressure sensitive areas & theremin-like devices where you approach these little knobs with your hands. The proximity of your fingers to these zones & tiny little surfaces will indicate parameters & programs, moods & sounds, or whatever....It will be a "touch" thing; I guess the feet will have to be involved...in other words you'll have to use both hands, both feet, & perhaps a group of assistants. In fact it may be a collaborative thing....It will be complicated...you can use your touch to modify all these parameters instantly...make these sounds, these different layers of sounds, different sounds & chords instantly, as you create it....

In the end: if a creative person spends a large amount of time crafting a tool which others may use, I believe it is worth our time to explore it. Using the programs described above does not make me a weaker artist; rather, it enables me to be a stronger one, to expand and dilate my ideas. For many months I have used them in concert with my own poetic impulses, as accompaniment to my own projections. And, truth be told, with the exception of PyProse, what I have done (exteriorly) with the technologies and possibilities available within these programs represents only a small fraction of what can be done. What someone (anyone) else might choose, or be able, to do with dbCinema or MIDIPoet has the potential (if not probability) to be completely different from, and superior to, my efforts. Unlike Flash or Photoshop, these are tools (like Joerg Piringer’s wonderful iPhone app abcdefghijklmnopqrstuvwxyz) made expressly for the creation of digital artworks. Users becoming makers, consumers becoming producers, is verb(al). One is, and then acts. Such activity is a hopeful trajectory for the digital word.

Figures

Fig. 1. Chris Funkhouser, dbCinema screenshot, 2009


Fig. 2. Chris Funkhouser, performance at E-Poetry 2009 (May 2009). Photo by Adam Parrish

Fig. 3. Chris Funkhouser, dbCinema screenshot, 2010


Fig. 4. Chris Funkhouser, Multi-MIDI-a performance at Brown, 4 June 2010. Photo by Amy Hufnagel


Fig. 5. Chris Funkhouser, Multi-MIDI-a performance at Brown, 4 June 2010. Photo by Amy Hufnagel

5 Responses to “On using tools made by comrades”

  • mlgodwin: June 17, 2010 at 2:54 pm

    chris, i haven’t read this posting yet, but here’s my appreciation for your taking the time to write to this topic. it appears to be at least part of the information i might have gleaned from you had we had the opportunity to talk while still at conference in providence. … leaving that conversation for a “part two” moment, here’s a “part one” thanks. ~ mg

  • eugenio: June 18, 2010 at 9:55 am

    Hi Chris,

    Your article has made me think about my own relation to MIDIPoet. Yes, it’s been (more than) ten years. And I feel I have probably neglected the tool. Nevertheless, I am happy with things so far. I haven’t exploited MIDIPoet’s full potential, and I haven’t “promoted” it enough for more people to use it, but what has happened has been quite nice. In 2006, I was invited to China, and I got the chance to do a small MIDIPoet workshop there. I was thrilled to see the participants, with whom I could not communicate verbally, use MIDIPoet to create their own pieces with Chinese ideograms. On that same trip I did a performance in which I used Latin music as the soundtrack. I got everybody dancing by the projection! Nice things also happened in Goa, India, in 2007.

    I have also thought about the tool itself. When I invented the first version of MIDIPoet in 1999, it was really a new thing. But very soon, lots of other (better) tools arrived, making MIDIPoet almost obsolete (or so I thought).

    Now, after so many years, I have come to realize that even if what you can do with MIDIPoet you can do better with PD+GEM or MAX+Jitter or Flash or Processing, each tool has its special “flavor”. Each tool pre-determines a particular set of results which can be obtained only by using it. These results are not hard-coded within the tool, but they sort of “emerge” after use: the tool’s own discourse, if you allow me. So now I know: it is precisely MIDIPoet’s limitations that make it special (at least for me)…

  • Jim Andrews: June 18, 2010 at 10:19 am

    Thanks, Chris, for all your support concerning dbCinema.

    I think what I’m going to do, soon, is make both the source code and the desktop version available for free. I kind of wrote myself into a corner with dbCinema. The last few versions were difficult and buggy. So what I’ll release is a somewhat earlier and much less buggy version.

    With these sorts of tools, you try to create a flexible and scalable architecture. And then develop it till the architecture starts to groan and threaten to collapse. Then you back off and call it a day.

  • Eugenio: June 19, 2010 at 3:16 am

    Jim, I can totally relate to what you say. Although I developed MIDIPoet in ’99, I only released it in 2002, when the bugs were “small” enough to make it public. But, to tell the truth, the internal architecture of MIDIPoet is still groaning badly… yet it works.

    I think that the best thing to do in these cases is to open up the source code, so that someone else can help with the development. Hey, maybe someone will like your tool and give you a hand to make it better!

    I’m looking forward to your release of dbCinema …

  • Jim Andrews: June 19, 2010 at 3:25 am

    I developed dbCinema with Director. Which is a really fabulous artistic tool. Just beautiful, really, as a tool for the projects of individuals.

    But, correspondingly, it has its limitations. It isn’t an object-oriented language and the timeline is um too much with us. And there are other problems with Director such as a hard limit of 1000 simultaneously instantiated sprites. Director has been around since 1987.

    But it’s always felt to me like the developers of Director focussed on building a tool for software artists, and they did an excellent job, for the time.

    I’ve been trying to learn Flash. It is so not so much fun!!!! It has received the corporate treatment the last few years and is now fully corporatized. Wow it just ain’t no fun mang. Not sure if I can talk wit it.

    So I’m kind of confused, at the mo, as to which direction to go.