The automatic generation of audio has been explored both as an artifact in itself and as accompaniment for existing animations. The most recent work automatically generates soundtracks for an input animation based on an existing animation soundtrack. This technique can greatly simplify the production of soundtracks in computer animation and video by re-targeting existing soundtracks. A segment of source audio is used to train a statistical model, which is then used to generate variants of the original audio that fit particular constraints. These constraints can either be specified explicitly by the user in the form of large-scale properties of the sound texture, or determined automatically or semi-automatically by matching similar motion events in a source animation to those in the target animation; a sketch of this idea follows below.

Earlier work explored connections between music and mathematics. MWSCCS is a concurrent stochastic process algebra for musical modeling and generative composition, allowing music to be created directly from mathematical processes.
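The published system builds a considerably richer model, but the core loop of constrained resynthesis can be sketched as a simple grain-transition model. In the hypothetical Python sketch below, the source is a mono NumPy array; the grain size, the boundary-matching cost, and the energy-based constraint (standing in for matched motion events) are all illustrative assumptions rather than the actual parameters of the published method.

```python
import numpy as np

def synthesize(source, n_out_grains, grain=2048, target_energy=None,
               temp=0.1, rng=None):
    """Resynthesize a sound texture from `source` (mono float array).

    Grains are reordered by sampling transitions between grains whose
    boundaries match well; an optional per-grain `target_energy` curve
    (e.g. derived from motion events) biases the choice of grain.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(source) // grain
    grains = source[:n * grain].reshape(n, grain)

    # Transition cost: mismatch between the end of grain i and the
    # start of grain j (crossfading at joins is omitted for brevity).
    heads, tails = grains[:, :64], grains[:, -64:]
    cost = np.linalg.norm(tails[:, None, :] - heads[None, :, :], axis=2)

    energy = np.sqrt((grains ** 2).mean(axis=1))  # per-grain RMS level

    out, cur = [], rng.integers(n)
    for k in range(n_out_grains):
        c = cost[cur].copy()
        if target_energy is not None:
            # Soft constraint: prefer grains whose loudness matches
            # the target curve at this point in the output.
            c = c + np.abs(energy - target_energy[k])
        p = np.exp(-c / (temp * c.std() + 1e-9))
        p[cur] = 0.0  # avoid trivially repeating the same grain
        cur = rng.choice(n, p=p / p.sum())
        out.append(grains[cur])
    return np.concatenate(out)
```

In practice one would crossfade adjacent grains to avoid clicks and derive `target_energy` from detected motion events, for example by resampling a motion-magnitude curve to one value per output grain.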

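MWSCCS itself is a formal process algebra, and reproducing its semantics is beyond a short sketch, but the underlying idea of composing music directly from a stochastic process can be illustrated with a small Markov-chain note generator. Everything here (the scale, the transition weights, the durations) is invented for illustration and is not the MWSCCS formalism.

```python
import numpy as np

# A melody state machine in the loose spirit of stochastic generative
# composition: NOT the MWSCCS algebra, just a Markov-chain analogue.
PITCHES = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale, MIDI numbers
TRANS = np.full((8, 8), 1.0)
TRANS[np.arange(8), np.arange(8)] = 0.2      # discourage repeated notes
TRANS /= TRANS.sum(axis=1, keepdims=True)    # rows become probabilities
DURATIONS = [0.25, 0.5, 1.0]                 # note lengths in beats

def compose(n_notes, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    state, score = rng.integers(8), []
    for _ in range(n_notes):
        state = rng.choice(8, p=TRANS[state])
        score.append((PITCHES[state], rng.choice(DURATIONS)))
    return score  # list of (midi_pitch, duration_in_beats) pairs

print(compose(16))
```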



Publications

Marc Cardle, Stephen Brooks, Ziv Bar-Joseph and Peter Robinson. Sound-by-Numbers: Motion-Driven Sound Synthesis. Proceedings of the ACM SIGGRAPH Symposium on Computer Animation, San Diego, July 2003 (PDF).

Marc Cardle, Stephen Brooks and Peter Robinson. Audio and User Directed Sound Synthesis. Proceedings of the International Computer Music Conference (ICMC 2003), Singapore, October 2003 (PDF).

Stephen Brooks and Brian J Ross. Automated Composition from Computer Models of Biological Behavior. Leonardo Music Journal, Volume 6, pp. 27-31, 1996.




Sample Video

  • You can find players/codecs for DivX @ www.divx.com
  • File size: 12 MB.



