A new member will join the band over the next couple of months, which poses a problem: I’ve run out of limbs. Controlling three machines with two hand-held devices is hard enough, and extending this framework of direct control would require either rapidly alternating control of four machines between two controllers or building foot controllers. Foot controllers are out of the question because I think they’re fairly ridiculous. For an organist or drummer, they’re fine, but for a quasi one-man band, simultaneous foot and hand control looks a little clownish.
The other approach, alternating control of the four machines with my existing controllers, works well enough, but it leaves me stuck in loop-based music: play something on one instrument, loop it, switch to another and play something, loop that, now switch back, etc. I’m eager to move beyond this approach. Live interaction and simultaneous constrained improvisation among all participants, whether human or computer controlled, is what interests me. (Why it interests me more than loop-based music is a topic for another post.) Musical human-machine interaction has been a research topic for at least two decades now but has seen mixed results. Research in the field often produces interesting demonstrations but little in the way of music you might actually enjoy. Among the more notable efforts are Robert Rowe’s Cypher system, which uses a rules-based approach, and François Pachet’s Continuator project, which favors a data-driven, machine-learning strategy. I favor the latter approach, augmented by the ability to build hooks into the generative system so that I can steer its output.
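To make the idea of "a data-driven generator with steering hooks" concrete, here is a minimal toy sketch: a first-order Markov model that learns note transitions from example phrases, plus a user-supplied hook that filters the candidate next notes before one is chosen. All names here (`SteerableMarkov`, `steer`, etc.) are hypothetical illustrations, not the actual Cypher or Continuator architecture.

```python
import random

class SteerableMarkov:
    """Toy data-driven note generator with a steering hook.

    Learns first-order transitions from example phrases; before each
    choice, the `steer` hook may filter or reweight the candidates.
    (Hypothetical sketch, not Rowe's or Pachet's actual systems.)
    """

    def __init__(self, seed=0):
        self.transitions = {}              # note -> list of observed next notes
        self.rng = random.Random(seed)
        self.steer = lambda candidates: candidates  # default hook: pass through

    def train(self, phrase):
        # Record every observed (note, next_note) pair.
        for a, b in zip(phrase, phrase[1:]):
            self.transitions.setdefault(a, []).append(b)

    def generate(self, start, length):
        out = [start]
        for _ in range(length - 1):
            candidates = self.transitions.get(out[-1])
            if not candidates:
                break                      # dead end: no data for this note
            steered = self.steer(candidates) or candidates
            out.append(self.rng.choice(steered))
        return out

gen = SteerableMarkov(seed=42)
gen.train(["C", "E", "G", "E", "C", "G", "C"])
gen.steer = lambda cands: [n for n in cands if n != "E"]  # steer away from E
melody = gen.generate("C", 8)
```

The point of the hook is that the model still supplies the material (only transitions it has actually heard), while the performer nudges it in real time, here by vetoing a pitch class.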
Machine learning researchers often describe their models as “black boxes” because their inner operations are opaque to observation.