I have made no attempt yet at UI design or even any kind of prettiness; I'm just getting the functionality squared away. This borrows heavily from a blog post by Steven Wittens on how to create and manipulate the audio samples.
I am writing the audio directly to the <audio> tags as base64-encoded data URIs (raw audio samples wrapped in a WAV file header), so I can generate them on the client side, taking into account user choices about frequency, panning, volume, filter sweep to a second frequency, duration, and ADSR envelope. There is also a waveform display, built on the <canvas> element, that lets you zoom in and out. As I am about to teach Introduction to Audio at COFA in a week, I had been thinking of this as an educational tool that is easily accessible to anyone on the web. I'm hoping it can be optimized to work on mobile browsers as well. Of course, by the time that happens this will probably be out and make this obsolete before it sees much light of day.
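The data-URI trick works roughly like this minimal sketch (not the app's actual code; the function and parameter names are mine). It generates a plain sine tone, wraps the 16-bit mono samples in a RIFF/WAVE header, and base64-encodes the whole thing so it can be set as an `<audio>` element's src:

```javascript
// Sketch: build a sine tone as a 16-bit mono WAV and return it as a data URI.
function makeWavDataUri(freq, durationSec, sampleRate = 44100) {
  const n = Math.floor(durationSec * sampleRate);
  const dataSize = n * 2;                      // 16-bit mono = 2 bytes/sample
  const buf = new ArrayBuffer(44 + dataSize);  // 44-byte header + samples
  const view = new DataView(buf);
  const writeStr = (off, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i));
  };

  // RIFF/WAVE header (PCM, mono, 16-bit)
  writeStr(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);      // remaining file size
  writeStr(8, 'WAVE');
  writeStr(12, 'fmt ');
  view.setUint32(16, 16, true);                // fmt chunk size
  view.setUint16(20, 1, true);                 // format 1 = PCM
  view.setUint16(22, 1, true);                 // 1 channel
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true);    // byte rate
  view.setUint16(32, 2, true);                 // block align
  view.setUint16(34, 16, true);                // bits per sample
  writeStr(36, 'data');
  view.setUint32(40, dataSize, true);

  // Fill in the sine samples, scaled to the signed 16-bit range.
  for (let i = 0; i < n; i++) {
    const s = Math.sin((2 * Math.PI * freq * i) / sampleRate);
    view.setInt16(44 + i * 2, Math.round(s * 32767), true);
  }

  // Base64-encode the raw bytes into a data URI.
  const bytes = new Uint8Array(buf);
  let bin = '';
  for (let i = 0; i < bytes.length; i++) bin += String.fromCharCode(bytes[i]);
  return 'data:audio/wav;base64,' + btoa(bin);
}
```

Then playback is just a matter of pointing an audio element at it, e.g. `document.querySelector('audio').src = makeWavDataUri(440, 0.5);`. Volume, panning, sweeps, and the ADSR envelope would all be applied by shaping the sample values inside that loop before the header goes on.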
You can name the samples you create and play them back, yay…boring unless you're a synth geek! So you can also choose a sound from a dropdown and then click in the dark grey box to create a ball that plays back your sound when it hits something. You can make as many sounds as you want and attach them to different balls. When a ball collides with a smaller one, it also absorbs that ball, sound and all (once I get it working properly).
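The absorb-on-collision rule might look something like the following sketch. Everything here is an assumption of mine, not the app's actual code: the `Ball` class, the area-conserving growth rule, and the idea that the bigger ball simply inherits the smaller one's sounds.

```javascript
// Hypothetical sketch of balls that absorb smaller balls and their sounds.
class Ball {
  constructor(x, y, r, sound) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.sounds = [sound]; // e.g. data URIs or <audio> elements (assumed)
    this.alive = true;
  }
}

// Two circles touch when the distance between centers <= sum of radii.
function colliding(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y) <= a.r + b.r;
}

function absorb(a, b) {
  if (a === b || !a.alive || !b.alive || !colliding(a, b)) return;
  const [big, small] = a.r >= b.r ? [a, b] : [b, a];
  // Grow by the smaller ball's area (r^2 adds, so total area is conserved).
  big.r = Math.sqrt(big.r * big.r + small.r * small.r);
  // The survivor takes over the absorbed ball's sound(s) too.
  big.sounds.push(...small.sounds);
  small.alive = false;
}
```

Running `absorb` over each colliding pair per animation frame, and dropping any ball with `alive === false`, would give the behavior described above.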
This is all very rough at this point, and the UI is god-awful, but it will be reworked and prettified soon.
Until then, have fun.
Note: there is still definite bugginess; let's call this alpha stage.
While I was looking around last night, I came across Pulsate, which would have been very inspiring had I seen it before I got this far. I had played with Tonematrix previously. Everything Andre does is pretty sweet (even if it is Flash ;-))