April 1, 2013
One of my favorite memories of listening to Miles Davis’s Bitches Brew is reading the stream-of-consciousness liner notes by Ralph J. Gleason. They’re very much of the era, with their run-on sentences, digs at the man, and confidence in the incipient unfolding of some glorious electric new age, but to me the first paragraph still stands as a timeless description of what I love in a lot of music.
… so much flashes through my mind when i hear the tapes of this album that if i could i would write a novel about it full of life and scenes and people and blood and sweat and love.
Ecology and narrative: those are the qualities in Bitches Brew that left me awestruck. The sensation that you’ve been sucked into a wormhole and deposited into an alien place with half-familiar beings who move about their lives–that impressed me and seemed so much grander than just expressing emotions. (more…)
January 1, 2012
At the end of 2010, Jazari was a four-piece, acoustic robo-band that had just welcomed the hi-hat machine into the fold. One year later, the group boasts snare and kick machines, an acoustic wobble bot, vocal processing with the Android app I released last January, lots of looping controls, digital effects on the drum sounds, and a smattering of synthesizers. All these toys make practicing a lot of fun, but it’s time to get these sounds in front of people. Step one in that process is the original mini mix above. Step two involves a lot of heavy lifting and driving, so that’s going to wait for warmer weather, but enjoy step one! Share widely, and grab the free download.
June 7, 2011
Some people think that the fossil record offers solid proof that humans evolved from apes, and I’ll admit that with the right diet and some electrolysis, Lucy could look halfway decent. But if you really want to clinch the case, read the comment threads on a gadget blog. Any post that compares an egoDevice to a Botroid will spark chest-thumping tribal warfare that would make Jane Goodall blanch. One suspects that our tech media overlords know exactly what they’re doing when they throw side-by-side feature comparisons to the howling commenter troops, and while I want to avert my eyes, I think I could learn something from them. If my career ever needs a booster shot of maximal controversy, I’m going to publish a cartoon of the Prophet Muhammad using a Motorola Xoom.
With that preamble out of the way (call me Jefferson), I’m going to make a few value judgments. When I first began toying with the idea of turning the algorithms I use in my music into apps, I started teaching myself iOS because the iPhone was and remains a more profitable platform for app developers than Android, although the gap is closing. I bought books, watched YouTube tutorials, and experimented with example code that I ran on my own iThing. After a couple of months, I gave up. Part of the problem was unfamiliarity with Objective-C, the language used to code for iOS, but the other problem, which seemed less tractable and more discouraging, was a patronizing and somewhat authoritarian attitude embedded in the way the iOS development tools control the process of creating an app. These tools and the pedagogical materials that explain them almost mandate certain design patterns that structure how applications are put together. These patterns make sense for a lot of apps, I’m sure, but they didn’t make sense for mine. I knew how I wanted to structure my app, and trying to contort it into one of Apple’s design templates felt unnatural and frustrating, so I began looking at Android as an alternative.
Getting started with Android was easy. There was some new terminology to learn, and there were rules to follow, but I felt that Android struck the right balance between preordained structure and flexibility. Equipped with a flexible development environment, I dove into teaching myself the basics of UI design and Android audio programming. In short order I had a test app that would simply take audio input from the microphone and play it back out the speaker or headphones in real time.
The disappointment began when I pressed Play and started speaking. “Test one, test” went into the phone, kicked off its shoes, had a bite to eat, checked the sports page, and ambled out of the speaker about 250 milliseconds after arriving. (more…)
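For a sense of how a delay like that accumulates, here is a back-of-the-envelope model. The buffer sizes and overhead figure below are hypothetical, chosen only to illustrate how a quarter-second round trip adds up; they aren't measurements of any particular device.

```python
def buffer_latency_ms(frames, sample_rate):
    """Latency contributed by one audio buffer of `frames` samples."""
    return 1000.0 * frames / sample_rate

# Illustrative figures only: actual buffer sizes vary by device.
sr = 44100
record_buf = 4096       # input-side buffer, in samples (hypothetical)
track_buf = 4096        # output-side buffer, in samples (hypothetical)
system_overhead = 60.0  # mixer/driver overhead in ms (hypothetical)

total = (buffer_latency_ms(record_buf, sr)
         + buffer_latency_ms(track_buf, sr)
         + system_overhead)
print(round(total))  # → 246
```

Two 4096-sample buffers at 44.1 kHz each cost about 93 ms before any OS overhead, which is why the round trip lands in quarter-second territory.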
January 2, 2011
I’ve just released my first Android app, VOLOCO, to the Android Market. The app applies a variety of effects, including automatic tuning, pitch-shifting, and vocoding, to speech sounds in real time. One particular effect, voice-controlled vocoding, allows the user to control the pitch of a synthesizer tone with his or her voice; the app then vocodes the synthesizer and speech signals together. Typically, vocoding requires playing a synth with a keyboard while singing. Voice-controlled vocoding is an easier and more expressive way to achieve the robot voice we’ve come to know and love from Kraftwerk and an array of pop singers over the decades. And as far as I know, this is the first software implementation of voice-controlled vocoding. There is a hardware device called HardTune that will do the same for $300. Voloco is ad-supported and free.
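For readers curious what vocoding means here, below is a toy sketch of the channel-vocoder idea in Python. This is not Voloco's actual DSP, and it is far cruder: a real vocoder uses many band-pass filters and a pitch tracker to drive the carrier, while this sketch uses just two bands and a fixed-frequency carrier.

```python
import math

def one_pole_lp(x, alpha):
    """One-pole low-pass filter over a list of samples."""
    y, out = 0.0, []
    for s in x:
        y += alpha * (s - y)
        out.append(y)
    return out

def envelope(x, alpha=0.01):
    """Crude envelope follower: rectify, then smooth."""
    return one_pole_lp([abs(s) for s in x], alpha)

def toy_vocode(modulator, carrier, alpha=0.2):
    """Two-band toy vocoder: split both signals into a low band
    (one-pole LP) and a high band (the residual), then scale each
    carrier band by the matching modulator band's envelope."""
    mod_lo = one_pole_lp(modulator, alpha)
    mod_hi = [m - l for m, l in zip(modulator, mod_lo)]
    car_lo = one_pole_lp(carrier, alpha)
    car_hi = [c - l for c, l in zip(carrier, car_lo)]
    env_lo, env_hi = envelope(mod_lo), envelope(mod_hi)
    return [cl * el + ch * eh
            for cl, el, ch, eh in zip(car_lo, env_lo, car_hi, env_hi)]

# Carrier: a sawtooth-ish buzz at 110 Hz; modulator: a decaying
# sine burst standing in for a syllable of speech.
n, sr = 2000, 8000
carrier = [2.0 * ((i * 110.0 / sr) % 1.0) - 1.0 for i in range(n)]
modulator = [math.exp(-i / 400.0) * math.sin(2 * math.pi * 300 * i / sr)
             for i in range(n)]
out = toy_vocode(modulator, carrier)
```

The output buzzes at the carrier's pitch but swells and dies with the modulator, which is the essence of the robot-voice effect; the voice-controlled part of Voloco replaces the fixed 110 Hz with a pitch estimate tracked from the singer.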
I would be grateful for any feedback readers could offer about Voloco–what you like, what you don’t, what features you’d like to see in future versions, and of course, bug reports. The app is computationally intensive, and requires Android 2.2, aka Froyo. If you like the app, please share the robot love in a review on the Android Market. If something goes wrong, email me.
A few tips and instructions:
- You need to use headphones or an external speaker. (Voloco won’t output sound through a phone’s earpiece). Although using an external speaker is a little extra work because you need to plug a mini-jack to RCA cable into your phone, you can get some cool reverberation effects going with the speaker. Try it if you can!
- In the touchscreen-control modes, you switch scale and key via the Menu.
- Recording is also controlled via the Menu.
- The mic on the headphones that come with most devices is pretty bad. You’ll get better results by using a pair of normal headphones (or an external speaker) and the mic on the phone.
- Keep the mic close to your mouth but avoid breathing directly on it, which will cause distortion.
One topic deserves special treatment: latency. The Android operating system imposes a ridiculous amount of latency on real-time audio processing–about 250 milliseconds on my Nexus One. On top of this, Voloco adds about 23 milliseconds. The delay makes real-time performance difficult, to say the least. If you want to automatically tune a vocal performance, I would suggest not listening to yourself while you sing. Alternatively, you can work with the delay by rhythmically syncing your performance to it, or you could just go for low, growling, vocoded washes of sound, which are fun. Google claims that latency improvements are forthcoming in Gingerbread, but it’s uncertain when they’ll arrive.
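The rhythmic-syncing trick is easy to quantify (my own back-of-the-envelope arithmetic, not anything built into Voloco): the total delay of roughly 273 ms equals a musically useful note value at particular tempos.

```python
def delay_as_tempo(delay_ms, subdivision=2):
    """Tempo (BPM) at which `delay_ms` lasts exactly one note of the
    given subdivision (1 = quarter note, 2 = eighth, 4 = sixteenth)."""
    quarter_ms = delay_ms * subdivision
    return 60000.0 / quarter_ms

total_delay = 250 + 23  # OS latency plus Voloco's own processing, in ms
print(round(delay_as_tempo(total_delay, 2), 1))  # → 109.9 (delay = one eighth)
print(round(delay_as_tempo(total_delay, 4), 1))  # → 54.9 (delay = one sixteenth)
```

In other words, at about 110 BPM the delayed voice comes back exactly an eighth note late, which you can fold into the groove instead of fighting.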
What does this have to do with Jazari? At some point, probably after I’ve resumed serious drinking, I’m going to start using the voice processing algorithms behind Voloco in my live performances. Chances are, you will never hear my unaltered voice singing into a microphone, ever. But if I could control a vocoder with the pitch of my (retuned) voice, and then use DSP and generative algorithms to resample the output and create a densely-woven fugue of robot voices, that’s what I’m going to do.
June 7, 2010
Two new tracks, After LL and M5/M7, are out, and these are the most beat-based I’ve made yet. They are still entirely improvised, with nothing composed ahead of time or sequenced, but I’ve focused more on driving the beat forward with subtle variations. I still like the free-flowing explorations I did earlier–especially Get To The Chopper–but I think I’ve gained a lot by doing less. The new approach lets me build energy over longer stretches of time and set up moments of tension and release. One side-effect of this approach is that the tracks have gotten longer–After LL is over 9 minutes–at the same time that they’ve gotten leaner. Taut, patient, and propulsive is what I was going for.
You can stream both with the player in the right sidebar. If you’d like to bump them at your next robot dance party or provide musical accompaniment to positive self-talk (“I am a machine! I am a machine!”) while doing your elliptical routine, you can download both from iTunes or Amazon.
And if you feel inclined, please rate them or write a review. Both are a significant help and take only a moment to do. Also, if you are a beat producer and are serious about making a remix, contact me about getting the stems.
March 6, 2010
UPDATE: I’m declaring April 15 the deadline for remixes. I want to have something to look forward to on that otherwise fearful day. Drop off tracks in this drop box or the one in the sidebar. Subscribe to the mailing list to receive an announcement of the contest results.
Send me your track
I want to see a veritable zoo of floor-pounding tribal ass-movers grow out of these stems. Before you get to it, here’s my licensing arrangement: you can do whatever you like with your remix, commercial or non-commercial, as long as you provide attribution. These are the conditions described in the CC license tag below. In exchange, you grant me reciprocal permission, allowing me to sell your remix via online music stores such as iTunes and Amazon. I will probably make a whopping 20 bucks, but it all helps build more robots.
OK, have at it: http://www.mediafire.com/?zyutdwji2zw
Get To The Chopper! by Jazari is licensed under a Creative Commons Attribution 3.0 United States License.
I decided to offer the stems for Get To The Chopper because the track has a more consistent beat and a stable tempo (144 bpm). If you were dying to remix the audio from my self-titled video, email me, but I think you’ll have more fun with this one. I think the names are pretty self-explanatory, but I’d add that the PZM track is from a piezo mic placed inside the base of the djembe machine. It captures some of the low end missed by the djembe mic near the drum head.
Drop your remixes off in the SoundCloud drop box in the right sidebar. In about a month, I’ll select a few finalists and put up a poll for readers to vote on the winner.
January 20, 2010
As a grad student, I was interested in what’s called “style modeling” in the business–using computers to generate new music in an older style. Other people have already done this, and a few have done it very well–David Cope and Brad Garton come to mind. Unlike Cope, who aims to (re)produce original music in the older style, I wanted to warp the output of my algorithms and control the generative process live. These are a few of the results of that project:
These tracks are pretty rough around the edges, but each one has some compelling moments. The Slayer song and “Softly” had their pitches remapped to microtonal harmonies, which I think works particularly well with the Slayer. Microtonal metal is an under-explored avenue.
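The remapping itself is easy to sketch: snap each 12-tone equal temperament note to the nearest step of some other equal division of the octave. The choice of 19-TET below is purely illustrative; the post doesn't say which tunings these tracks actually used.

```python
A4 = 440.0

def midi_to_hz(note):
    """Standard 12-tone equal temperament frequency for a MIDI note."""
    return A4 * 2.0 ** ((note - 69) / 12.0)

def remap_to_ntet(note, n=19):
    """Remap a 12-TET MIDI note to the nearest step of n-tone equal
    temperament, keeping A4 = 440 Hz as the shared anchor."""
    semitones = note - 69               # distance from A4 in 12-TET
    step = round(semitones * n / 12.0)  # nearest n-TET step
    return A4 * 2.0 ** (step / n)

# A 12-TET major third above A4 lands on a flatter 19-TET third.
print(midi_to_hz(73))     # C#5 in 12-TET, about 554 Hz
print(remap_to_ntet(73))  # its nearest 19-TET neighbour, slightly flat
```

Run every note of a transcription through a remap like this and familiar harmonies come out bent in a consistent, alien way, which is roughly the effect described above.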