InfiniteAutechreMachine

From an email by Jesse Rothenberg:

It's rather complex, but I'll try to explain it "in a nutshell."

It's a self-referential generative device. It starts by generating up to 5 instruments (all FM-based; there was a goal to add samples, but that never happened) based on certain rules for the type of instrument we're looking for (bass, bass drum, click, bell, etc.). Each instrument is assigned to one track.

Then, based on the instruments in each track, it generates a rhythm and pitch (though minimal in terms of what one would call melody) from rules that are correlated to each other. In other words, it generates the next note for a given instrument based on the set of rules it is currently following for that track, as well as the rules for the other tracks and instruments. Each instrument has its own sets of rules to follow, and each track influences the other tracks.

Then it generates what we called "sub-tracks" for each track (up to 3, I believe). Each sub-track has some DSP effect (decimation, panning, distortion, ring modulation, etc.) that is applied to the track, again based on rules that evaluate 1) the rhythm of the main track, 2) the other effects on the track, and 3) in a very minor way, the effects on other tracks (e.g. to avoid having every single instrument panned hard-left).

Finally, each track is divided into many patterns (I suppose this belongs up with the rhythm and melody section), each of some number of "ticks" or time-units, probably 64. The patterns are generated by the rules described above, and they are sequenced in an order based on 1) what sort of rhythm they contain, 2) what instrument is assigned to the track, 3) what instruments are assigned to the other tracks, and 4) what patterns are being played on the other tracks.
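
Very roughly, the structure looked something like the Python sketch below. This is purely illustrative: the original system was not written in Python, none of these names or rules come from it, and the real rule sets were far more elaborate. It only shows the shape described above: up to 5 instruments, one per track, 64-tick patterns in which each track's choices also look at the other tracks, and up to 3 effect sub-tracks per track.

    import random

    TICKS = 64                      # assumed pattern length in time-units
    ROLES = ["bass", "bassdrum", "click", "bell"]
    EFFECTS = ["decimation", "panning", "distortion", "ringmod"]

    class Instrument:
        """Stand-in for a generated FM instrument of a given role."""
        def __init__(self, role):
            self.role = role
            # hypothetical rule: percussive roles favour denser rhythms
            self.density = 0.5 if role in ("bassdrum", "click") else 0.25

    class Track:
        def __init__(self, instrument):
            self.instrument = instrument
            self.pattern = [None] * TICKS      # pitch or None, per tick
            self.subtracks = []                # up to 3 effect names

    def generate_pattern(track, others):
        """Fill one 64-tick pattern, letting the other tracks influence it."""
        for t in range(TICKS):
            # crude cross-track rule: avoid stacking onsets on every track at once
            busy = sum(1 for o in others if o.pattern[t] is not None)
            p = track.instrument.density / (1 + busy)
            if random.random() < p:
                track.pattern[t] = random.choice([36, 40, 43, 48])  # made-up pitches

    def generate_subtracks(track, others):
        """Pick up to 3 DSP effects from the track's rhythm and the other tracks."""
        onsets = sum(1 for x in track.pattern if x is not None)
        for fx in random.sample(EFFECTS, k=random.randint(1, 3)):
            # e.g. don't pan every track the same way; skip ringmod on sparse tracks
            if fx == "panning" and any("panning" in o.subtracks for o in others):
                continue
            if fx == "ringmod" and onsets < TICKS // 8:
                continue
            track.subtracks.append(fx)

    tracks = [Track(Instrument(random.choice(ROLES)))
              for _ in range(random.randint(1, 5))]
    for tr in tracks:
        generate_pattern(tr, [o for o in tracks if o is not tr])
    for tr in tracks:
        generate_subtracks(tr, [o for o in tracks if o is not tr])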

As you can imagine, it was quite processor-intensive :D

The goal was originally to make a completely generative demo with realtime graphical effects timed off the sound (one of the reasons for the slowdown was having to send triggers for every single effect on the audio tracks). Instruments and tracks were correlated to layers on screen, each layer having a set of effects to choose from, rules to apply, and filter effects to apply to the layer's graphical effect based on the triggering of the DSP audio effects from the track the layer was assigned to, etc.

After the audio system was completed (which was a big chunk of the base system) we added a few graphical effects. But when we ran the whole thing, with some 2 or 3 simple graphical effects (perspective-warped, iteratively animated chaotic attractors; 3D tunnels, i.e. hollow cylinders, with a filtering effect on the normal vectors of the polygons, even though the polygons themselves didn't move, or moved simply; and the like), it slowed down to disgusting proportions.

This was 1999, so I guess the best machine was something like a P2-450 (or at least we were developing on something like that), and we ended up getting a framerate of about 6-10 fps, which is completely unacceptable. So we decided to postpone it until hardware could catch up with our code :)
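
To make the audio-to-graphics coupling concrete, here is an equally rough Python sketch (invented names, not the original code) of the trigger wiring described above: each graphical layer is bound to one audio track, and every DSP-effect trigger coming off that track applies a filter to whatever the layer is currently drawing. The sheer volume of such triggers is part of why the system was so heavy.

    class Layer:
        def __init__(self, track_id, effect_name):
            self.track_id = track_id
            self.effect_name = effect_name   # e.g. "attractor", "tunnel"
            self.active_filters = []

        def on_audio_trigger(self, fx_name, tick):
            # hypothetical rule: map the audio DSP effect to a visual filter
            self.active_filters.append((fx_name, tick))

    def dispatch(trigger, layers):
        """Send one (track_id, fx_name, tick) audio trigger to its layer."""
        track_id, fx_name, tick = trigger
        for layer in layers:
            if layer.track_id == track_id:
                layer.on_audio_trigger(fx_name, tick)

    layers = [Layer(0, "attractor"), Layer(1, "tunnel")]
    # every DSP event on every sub-track would generate one of these triggers
    for trig in [(0, "ringmod", 12), (1, "panning", 16), (0, "decimation", 32)]:
        dispatch(trig, layers)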

