Eli Keszler and Keith Fullerton Whitman put out a really cool split LP with NNA Tapes earlier this year. Both of these gentlemen are high-grade experimental musicians, working within analog and digital frameworks not only to create new sounds but to change the ideas of what exists as music and what could be music. Keith became known in the '90s as a maker of what was then called IDM, under the name Hrvatski. However, he stopped assuming this moniker, and making dance music at all, moving on instead to taking apart modular synthesizers in ever more adventurous ways. Eli Keszler could be referred to as a “sound artist”, a delightfully unspecific term. But it makes sense – he creates entire installations of sounds that are capable of playing themselves, including two recent fascinating ones: “Collecting Basin”, in which piano wire is strung to the top of a water tower in Louisiana and struck by micro-electronics to create a present tone, and “Cold Pin”, in which an assortment of wire and electronics is strung, arranged, and programmed in a gallery in Boston. Both of these men are interested in the role of the musician in music, and in Eli's forms, he is a ghost or a shadow, not really present.
We asked Eli and Keith to interview each other, because we had an inkling that beyond being slapped together on two sides of vinyl, they might also have an interesting amount of dialogue about the current state of experimentalism and the specifics of each other's work. However, we may not have anticipated that they would have quite so much to say; for your reading comfort, and for ours, we have split this interview into three parts, one published yesterday, and one tomorrow (on Wednesday, we will include a downloadable PDF of the entire text). Please enjoy part two, where Keith and Eli get a little technical.
Keith Fullerton Whitman: So, in lieu of using a piano, or an instrument, what are you using to prototype your music these days? Are you still deep in Arduino & Processing?
Eli Keszler: Pretty much paper and my imagination. . . And then the coding, which I am still wrapping my head around a bit to get a feel for it. At first I was using a little LED box that would flash back the patterns that I would code, so I could get a sense of timings – basically like I used the piano before. But at this point I have an idea about how many milliseconds means what sound, rhythm, etc. There are a bunch of levels to think of at once, as you know, when you are writing code. The hardest thing for me is the spacing, because with long strings the resonance is huge; even a four- or five-second gap between attacks can be way too close. At the same time, I am now using the opposite type of setup quite a bit, with very short percussive sounds that are mechanized, and I often run into the opposite problem. I'm working on a new installation and large ensemble project that is going up at Eyebeam in Chelsea, with a performance on June 7, which is showing some of my new ideas. The whole mechanical component of what I'm doing solved a lot of problems I was trying to work out; I don't see it going away, really.
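Eli's code isn't public, so what follows is only a rough sketch of the spacing problem he describes – written in Python rather than Arduino code, with an invented minimum gap. The idea: a long, resonant string can't meaningfully be struck again while it's still ringing, so any programmed onset that falls too close to the previous accepted one gets dropped.

```python
# Hypothetical sketch, not Keszler's actual code. The 6000 ms minimum
# gap is an assumption, chosen because he says even a four- or
# five-second gap between attacks "can be way too close".

MIN_GAP_MS = 6000  # assumed minimum spacing for a long, resonant string

def schedule_attacks(onsets_ms, min_gap_ms=MIN_GAP_MS):
    """Keep only onsets that leave the string enough time to decay."""
    accepted = []
    for t in onsets_ms:
        if not accepted or t - accepted[-1] >= min_gap_ms:
            accepted.append(t)
    return accepted

# A pattern coded in milliseconds: the onset at 4000 lands while the
# string is still ringing, so it is filtered out.
print(schedule_attacks([0, 4000, 9000, 15000, 23000]))
# → [0, 9000, 15000, 23000]
```

For the short, mechanized percussive sounds he mentions, the same filter with a tiny `min_gap_ms` produces the opposite problem: almost everything passes through.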
Right; there are physical limitations to sounding objects. A hammer can't accurately strike a string that's still in motion. You can't press a piano key that's already down. But one of the things I love about inserting systematic or process-oriented control is that the systems & processes don't inherently KNOW these things. The system will function perfectly whether the hammer strikes the string or not; its job is done. I'm curious about the high transient sounds on that first piece on the split. I listened to it a few times over the weekend.
Exactly – you can get a very raw sound out of a very controlled system; it becomes real to me at that point.
It appears to be a computer-controlled variant on the sort(s) of rhythms your hands seem to create naturally when you're sitting in front of a snare drum. Do you see it as a dialogue between your system(s) and yourself in real time? I.e., were you performing alongside the system itself or just a recording of it? And is there a feedback element that interprets your real-time playing reciprocally back into the system? There are so many “happy accident” type moments in the recording that would imply this.
I was trying to get a sound that would blend in perfectly with the attacks from percussion, for the reasons you mentioned – to get some break-up between the tones, to get transients to occur, but with varied lengths, shifting as a process over time. I had to think backwards: when I would try to code rhythms that would line up with what I wanted to play, it really didn't work; it didn't fit with the drive I was looking for. I eventually had to break down what I knew I was going to play on the drums into tiny units, and then code corresponding patterns into the micro-controller, and have them reconfigure themselves – reconfigure spacing, length, and so on. It's a little bizarre how close it sounds to my drumming. It made me understand some components of my playing, but that was sort of the point. It's a bit of a backwards process, but so interesting to break down language and reconstruct it like that, especially when it is your own. I haven't been using feedback systems yet with the mechanical projects, though with this Eyebeam piece there will be a visual feedback system. The LP was all live takes – it's a really simple set-up. Part of the idea with the LP was creating massive physical pieces out of the smallest set-ups possible. One thing I really appreciate is what you said about what the computer knows and doesn't know. It doesn't understand stopping and starting unless you tell it to. It's a perfect environment for the happy accident to occur. The fact that the code is translating into an intensely physical form is so crucial. You really hear that transition, and get this intense physicality out of something that in process is so removed from physical space. That variant really is what makes it fascinating for me.
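Again, this is not Eli's actual code – just an illustrative sketch of the process he describes: drum phrases broken into tiny cells of inter-onset intervals (the cell values below are invented), which the program then reorders and re-spaces on each pass.

```python
import random

# Hypothetical sketch of the "break it into tiny units, then let the
# micro-controller reconfigure them" idea. Cell values in ms are
# made up; `stretch` gradually widens the spacing over successive
# passes, so the pattern shifts as a process over time.

CELLS = [[120, 120, 240], [90, 90], [300]]  # assumed rhythmic cells, in ms

def reconfigure(cells, repeats, stretch, rng):
    """Shuffle cell order each pass and scale the gaps by `stretch`."""
    pattern = []
    for i in range(repeats):
        order = cells[:]
        rng.shuffle(order)  # reorder the tiny units
        for cell in order:
            pattern.extend(int(gap * stretch ** i) for gap in cell)
    return pattern

rng = random.Random(7)  # seeded, so the "machine drumming" is repeatable
print(reconfigure(CELLS, repeats=2, stretch=1.5, rng=rng))
```

The second pass contains the same cells as the first, only reshuffled and stretched by half again – which is roughly why the output can sound uncannily like the drumming it was decomposed from.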
I've been fascinated with musically useful versions of randomness and the differences between generating randomness in digital software and analog hardware. A couple of weeks ago, Nicolas Collins sort of took me to task about my stance on “true” randomness vs. “digital” randomness at a talk at SAIC. His point was valid, that you can seed a random number generator in an infinite number of ways, and that the resultant sets were virtually infinite. But something about HOW computers decide on randomness has always bugged me. In the analog world, noise is literally infinite, non-repeatable; a defect, or “feature”, of a specific, unique capacitor. It's always felt like a cheat to have a computer “do” randomness. All it knows is ON or OFF. There's really no gradation at the atomic level. I love how that recording really splits the Arduino transients and the acoustic drum ones into discrete ranges. I could tune them using the tone-controls on the stereo in the office here.
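The disagreement is easy to demonstrate. As a sketch (nobody in the conversation names a specific generator; this just uses Python's stock PRNG): once seeded, a digital random stream is completely deterministic and replayable – Keith's "cheat" – but, as Collins argues, every distinct seed opens a distinct stream.

```python
import random

# A digital PRNG is deterministic once seeded: the same seed replays
# the exact same "random" stream. Change the seed and you get a
# different stream - the "virtually infinite" set of sequences.
same_a = random.Random(2012)
same_b = random.Random(2012)
other = random.Random(2013)

stream_a = [same_a.random() for _ in range(5)]
stream_b = [same_b.random() for _ in range(5)]
stream_c = [other.random() for _ in range(5)]

print(stream_a == stream_b)  # → True: identical seed, identical stream
print(stream_a == stream_c)  # → False: a new seed opens a new stream
```

Analog noise, by contrast, has no seed to replay – which is the distinction Whitman is holding onto.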
Let me know how that goes. . .
Focus on one element, then bring in the other for the exchange.
With this type of computing I think of it very simply, like you said: ON and OFF. To me, if you want a looseness and a sense of randomness, or a natural flow to it, it needs that extra input from the world, which brings in that element on impact. We've talked before about this, but the difference between electronic music recorded in a room vs. directly into a computer is so drastic. You can really hear this in David Tudor, for example; that music exists in a space for me, and that's what is so wonderful about it. It really brings this separation into focus. Even though with computers I'm very much a 'learn just what I need to know' kind of person, it seems to me that you can really hear, in computer music or anything involving digital technology, the pieces where the physical reality of what they were doing was kept in mind, and those that were caught up in the technical side of it. I don't always think it's obvious, though, which pieces fall into which category. The conversation of 'random' vs. 'non-random' is interesting; it seems like this is still an argument about whether randomness is valid, which seems obvious to me – any process, controlled randomness or randomness, is a valid process. Randomness should just be a part of the vocabulary today, no different than anything else to me.
Interesting; so you embrace that – the cold, heartless flip-flop… the computer's decision-making process, vs. your own. Do you see the Arduino making executive decisions that are a canonic replacement for human pacing & timing? Or are they accurately executing an algorithm that you've put in place that defines the composition? OK, maybe that's a side-track. Still, you are building complicated technical systems; there's no shame in getting caught up in how successfully they're doing what you set out to do with them… or not.
The technical means by which I produce these pieces and sounds are, to me, the least interesting part. They are interesting, absolutely – but I'm thinking more about other components. Keep in mind they are only a small part of the variables, because I'm putting live musicians, playing off of a score or cues, in the middle. There are also the other dimensions – the physical space and the way it sounds, the code's relationship to the physical attacks and the visual look. The installation is really a part of the environment. I think of them as controlling the vertical space. It pulls the musicians upright, while the piece pushes both ways at once. It is interesting to talk about the technique, but often I find myself deflecting answers, trying to give away as little as possible, because we have a tendency, maybe because of the lack of language for describing sound, to get immediately into technique. Even music that is supposed to be about sound and material is talked about in terms of technique. I'm not sure what the answer is – I do it all the time – but this is a real problem to my mind. I wonder what would happen if everyone stopped talking about technical detail and started talking about the experience of somebody doing something. I don't think of what I'm doing as trying to go after something exactly human. It's not really human or in-human; it really is just a piece that shows what it shows. That's pretty vague, but it's a sound-first, piece-first idea. I think it gets a little complicated. It reveals how powerful and basic acoustic sound and human movement are, when the machine rhythm becomes so blatant. At the same time, on the receiving end, I'm not sure it matters if you let that go. It is what it is.
Blatant; that's it, exactly. The balance between blatant and cloudy; digital vs. acoustic/actual. They're both just avenues, different sounds. You're making them play nice together. Or even just step on each other's toes.
Yeah. They become a unit. It's collision.
Part three of this interview will be available on Wednesday. In the meantime, you would be well advised to do some exploring. Keith Fullerton Whitman is performing Friday at 285 Kent with Pete Swanson and others and Eli Keszler is part of a group show and installation at Eyebeam in Manhattan on June 7. Both have other tour dates on their web sites as well.