Everything posted by AM
-
good approaches, thank you!!
-
LISP... any solution? I want to DIVIDE a sequence into sublists -> ascending runs become a list, and the "rest" become single-element lists. Thanks a lot for some help!

;;; input
(divide* '(14 12 3 13 15 8 4 10 17 2 16 0 1 6 7 5 11 9))

;;; output
=> ((14) (12) (3 13 15) (8) (4 10 17) (2 16) (0 1 6 7) (5 11) (9))
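a minimal sketch of the behaviour i have in mind (divide* is just my working name, not an existing OPMO function): collect maximal ascending runs, and every value that breaks a run becomes its own single-element sublist.

;;; sketch - maximal ascending runs (assumes a non-empty list of numbers)
(defun divide* (seq)
  (let ((result '())
        (run (list (first seq))))
    (dolist (x (rest seq) (nreverse (cons (nreverse run) result)))
      (if (> x (first run))            ; still ascending -> extend the current run
          (push x run)
          (progn                       ; descent -> close the run, start a new one
            (push (nreverse run) result)
            (setf run (list x)))))))

(divide* '(14 12 3 13 15 8 4 10 17 2 16 0 1 6 7 5 11 9))
=> ((14) (12) (3 13 15) (8) (4 10 17) (2 16) (0 1 6 7) (5 11) (9))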
-
Thanks, Torsten! I know Hanspeter Kyburz (I almost studied with him, and because of him I started working with algorithms) and I know some of his works (CELLS / PARTS / ...). If you have the article by ERES HOLZ as a PDF, that would be nice (the link seems to be dead)...

Only some short thoughts, not on a scientific research level: in general it is a question about complexity and information, isn't it? And complexity/information changes when you change or cross the MEDIA, that is, when you go from algorithms/code/mathematics to sound/music or visuals. The complexity itself changes too, it's like a transformation: "the complexities" are different, and so is their manifestation. So it's quite simple to feel important and intelligent and "arty" when you TAKE a really advanced mathematical/algorithmic grammar/process as a TOOL, but in acoustic perception the result can be pretty much "noise", because musical complexity is different and depends, in my opinion, a lot on how you map these things: onto which musical parameters and objects, and in which dimensions... Of course, it's also interesting to use such things - algorithms - in a spirit of "creative misunderstanding" (as I think Ferneyhough once put it in a different context), but we should be aware of the GAP between algorithmic complexity and effective complexity. What do you think?

greetings andré

HEINZ VON FOERSTER had some really good thoughts on such things, in an abstract way... here is an article: disorder:order.pdf
And an article by PHILIP GALANTER: Galanter_2003_What is Generative Art Complexity theory as a context for art theory.pdf
-
Triggering event - anonymous functions (like in Javascript)
AM replied to Frederic's topic in Support & Troubleshooting
thanks! a. p.s. ... that's why i have some other solutions for my needs... -
thank you, torsten! (at the moment I'm thinking about what the GAP is between visual l-systems (images, where you see the whole "gestalt" at once) and acoustic ones (which have more to do with a PROCESS in time, a musical grammar...). the handling and perception of them are completely different.)
-
Triggering event - anonymous functions (like in Javascript)
AM replied to Frederic's topic in Support & Troubleshooting
my colleagues from the computer music laboratory advised me not to steer external things via an internal SLEEP, but to control the TIME/delay via OSC as well. perhaps it also has to do with the fact that, for me, different things must be controlled in parallel and synchronized. i don't know...
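a small illustration of what i mean (just commented example calls, assuming the udpsend helper and the max player from my polytempo post further down this page; "my-midi" is a placeholder name): instead of waiting in lisp, the delay travels inside the OSC message and the receiver schedules the start itself.

;;; timing controlled in lisp - SLEEP blocks and is not very precise:
;; (udpsend "/eplayer" "my-midi" 1.0 1.0 0)
;; (sleep 2)
;; (udpsend "/eplayer" "my-midi" 1.0 1.0 0)

;;; timing controlled by the receiver - the last argument is the start delay in ms,
;;; so both messages can be sent immediately:
;; (udpsend "/eplayer" "my-midi" 1.0 1.0 0)
;; (udpsend "/eplayer" "my-midi" 1.0 1.0 2000)
-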
Triggering event - anonymous functions (like in Javascript)
AM replied to Frederic's topic in Support & Troubleshooting
the NUMBER is precise, but... the great thing is that you are very flexible (and open) in OPMO/lisp... and you can also trigger other software/externals from OPMO/lisp via OSC - i LOVE IT -
Triggering event - anonymous functions (like in Javascript)
AM replied to Frederic's topic in Support & Troubleshooting
but SLEEP is not very precise - it would be nice to have a really precise trigger in OPMO, like with OSC (for external players...) -
it's not necessary for me, just an idea
-
;;; a little extension for l-systems: i needed all generations, not only the final one.
;;; i think for in-time processes this is more interesting, because you will hear/see
;;; the way of "growing/developing".
;;; perhaps JANUSZ could extend the original OPMO function.
;;; watch out for stack overflow if you use a LARGE DEPTH :-)

;;; function
(defun all-gen-lsystem (ls &key depth)
  (loop for i from 0 to depth
        collect (rewrite-lsystem ls :depth i)))

;;; setup
(defclass sieve_1 (l-system)
  ((axiom :initform '(1))
   (depth :initform 10)))

(defmethod l-productions ((ls sieve_1))
  (choose-production ls
    (1 (--> 2 1))
    (2 (--> 4))
    (4 (--> 2 6))
    (6 (--> 1))))

;;; example
;; new => all generations
(all-gen-lsystem 'sieve_1 :depth 3)
=> ((1) (2 1) (4 2 1) (2 6 4 2 1))

;; original => only the last generation
(rewrite-lsystem 'sieve_1 :depth 3)
=> (2 6 4 2 1)
-
because in such a fast "basic tempo" they were too close to "17" and "23". i think in a slower tempo it makes sense to work with numbers that are closer to each other. then the perception compares the tempi; when the GAPS are too large it's only "SLOW/FAST"... depends on your ideas... THANKS FOR THE HINT, i will look at it!! andré
-
polytemporal fall - algorithmic study [with tempo relations 3:5:7:11:17:23:29]
https://soundcloud.com/andr-meier-1/algorithmic-study-polytemporal-fall

this is a small example: i coordinate and play MIDIs (in this case simple scales) from OPMO via OSC -> maxmsp_player. you can hear how precisely you can coordinate simple [but polytemporal] scales => all the MIDIs coincide - 30000ms after the evaluation of the code - on a unison pitch!!! in that way you can coordinate different scores/MIDIs (which have individual tempi!) very precisely. with OSC/maxmsp_player you can change/manipulate the tempi of the MIDIs (directly from OPMO), so you can stay flexible with pre-produced MIDIs (perhaps produced in OPMO)...

greetings a.
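just to make the arithmetic behind "everything coincides after 30000ms" explicit, a minimal sketch (my own illustration - the duration and the factors below are placeholder values, not the exact data of the study): a scale that lasts DUR seconds at factor 1.0 lasts DUR/F seconds at tempo factor F, so a start delay of TARGET - DUR/F lets every version end at the same moment.

;;; start delays (in ms) so that all tempo versions end together at TARGET seconds
(defun coincidence-delays (dur target factors)
  (loop for f in factors
        collect (round (* 1000 (- target (/ dur f))))))

;; example: a 10-second scale, meeting point at 30 seconds
(coincidence-delays 10 30 '(3 5 7 11 17 23 29))
=> (26667 28000 28571 29091 29412 29565 29655)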
-
dear janusz, when will the next update be (with midi-to-omn)? it would also be nice to GET from a midi: the bpm and the duration (of the whole midi, as a result of tempo/bpm and the lengths/rests)... if possible... thanx a.
-
"What you are doing Andre is quite bad and unsafe" very friendly, your answer ...and i knew that, but i didn't found some hints (jn the tutorials) how to extract OMN from def-score. of course it is not good, but it took me only 2 minutes to briefly the problem for my specific application. i have not written any official opusmodus-function (which can do anything) but a simple solution which helps me.
-
oh, this was simple - here is a small program, it works...

(defun get-pitch-from-midi (midipath)
  (loop for i in (flatten (compile-score (midi-to-score midipath) :output :score))
        when (or (chordp i) (pitchp i))
        collect i))

(get-pitch-from-midi "path/filename")

also with length?!

(defun get-length-from-midi (midipath)
  (loop for i in (flatten (compile-score (midi-to-score midipath) :output :score))
        when (and (lengthp i) (not (integerp i)))
        collect i))

(get-length-from-midi "path/filename")

but it will not work with more than one voice / it's not necessary for MY needs, so i coded only this simple solution - perhaps janusz will do it?
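and, building on the helper above, a rough sketch for the duration request from my post above - the bpm is passed in by hand (an assumption, since i don't know yet how to read it back from the midi):

;;; rough sketch: total duration in seconds from the extracted lengths.
;;; lengths/rests are fractions of a whole note, and a whole note lasts 4 * 60/bpm seconds
(defun midi-duration-in-sec (midipath bpm)
  (let ((whole-notes (reduce #'+ (mapcar #'abs (get-length-from-midi midipath)))))
    (float (* whole-notes 4 (/ 60 bpm)))))

(midi-duration-in-sec "path/filename" 60)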
-
dear all, is there a quick way to import (or filter) only the pitches and chords from a midi file? (i only use these - nothing else from the midi/xml.) thanx for a hint, andré
-
for my usage it has to be "by OSC". i have to be able to control two programs (conducting/display + e-player/samples) simultaneously and in a coordinated way, so that i can send well-coordinated DATA to both software applications...
-
dear all, here is a setup for playing midi files/scores in polytempi. follow the instructions and have fun! just ask if you have some questions... greetings andré

personally i will use it for exact sample/e-player performance with my pieces that work with "Technology-Assisted Conducting" (http://polytempo.zhdk.ch). in future i will do it all directly from OPMO or lisp: "live score generating" + polytempo conducting + e-player. i have already done this with my piece MODULAR FORM, but not all of it was controlled by LISP/OPMO, so the next step is doing it all in OPMO/LISP. some explanations about the piece: andré meier - trompete | komposition - modular form, WWW.ANDREMEIER.ORG
see also: Polytempo - Wikipedia, EN.WIKIPEDIA.ORG

;;; POLYTEMPO-PLAY
;;; with a MAX patch (from my friend thomas peter) and some OSC sends i can play the same/different midis (up to 30)
;;; in different tempi in parallel - any combination, with precise coordination
;;; also possible: change the global velocity (means: change velocity inside the midi)
;;; and a time delay (start) in ms

;;; 1) OSC-send functions:

(defparameter *out-socket* (make-socket :type :datagram))
(defparameter *remote-host* "127.0.0.1")
(defparameter *remote-port* 7500)

(defun udpsend (&rest args)
  (let ((message (apply #'osc::encode-message args)))
    (send-to *out-socket* message (length message)
             :remote-host *remote-host*
             :remote-port *remote-port*)))

;;; 2) a) put the MAX-player folder on the desktop
;;;    b) start midiplayer.maxpat
;;;    c) midiplayer: define your output source in [midiout @name "from MAX 1"]
;;;    d) the MIDIs must be placed in the midi folder (inside the MAX-player folder)

;;; 3) generate a SCORE (here a nonsense example)

(setf omn (make-omn
           :pitch (setf pitches (filter-repeat 1 (flatten (gen-sort (rnd-air :type :pitch :seed 45)
                                                                    :step 5 :sort '> :seed 123))))
           :length (gen-length '(1) 1/32)
           :velocity (pitch-to-velocity 'p 'mf pitches :type :float)
           :span :pitch))

(def-score sorted-whitenoise
    (:title "sorted-whitenoise"
     :key-signature 'atonal
     :time-signature '(4 4)
     :tempo 60
     :layout (grand-layout 'inst))
  (inst :omn omn :port 0 :channel 1 :sound 'gm :program 'acoustic-grand-piano))

;;; 4) COMPILE that score into your Max-Player/midi folder => PATH+NAME!!!

(compile-score 'sorted-whitenoise :file "your-path/sorted-whitenoise")

;;; 5) play it by evaluating UDPSEND -> some examples
;;; /eplayer / midi-name / tempo-factor / velocity-factor / time-delay in ms

(udpsend "/eplayer" "sorted-whitenoise" 1.0 0.5 0)      ;; original tempo, velocity 0.5
(udpsend "/eplayer" "sorted-whitenoise" 2.3 1.0 0)      ;; (* tempo 2.3) etc...
(udpsend "/eplayer" "sorted-whitenoise" 0.375 1.0 2000) ;; (* tempo 0.375) with start delay 2000ms

(udpsend "/eplayer" "stop") ; you can stop with that

;;; tempo relations => 23:17:13:9:3:2 -> a complex example with time delays
;;; also possible with any midis you like, the same or different ones

(progn
  (udpsend "/eplayer" "sorted-whitenoise" 2.3 1.0 0)
  (udpsend "/eplayer" "sorted-whitenoise" 0.3 0.8 0)
  (udpsend "/eplayer" "sorted-whitenoise" 0.2 0.4 0)
  (udpsend "/eplayer" "sorted-whitenoise" 1.3 1.0 10000)
  (udpsend "/eplayer" "sorted-whitenoise" 1.7 0.9 16000)
  (udpsend "/eplayer" "sorted-whitenoise" 0.9 0.7 20000))

(udpsend "/eplayer" "stop") ; you can stop with that

attachments: Max_Player_19-08-23.zip, example.aiff, goldberg_13_11.aiff, example_11_7_5_3_2.aiff
-
too much code and too complicated to post - I do not have the time to write a manual. it's a "machine" that creates multiple "brownian bridges" combined with "pitch-contour" and "add-rnd-dust" - an all-in-ONE tool/machine/bot... I'm interested in repetition/difference in other contexts than the traditional ones; but "brownian bridges" then resemble ornaments: when the sequences are short - brownian bridges are rnd processes between 2 fixed points - you get ornamental sequences between these 2 points/pitches... (I did not work with a score, just coding and listening - it's only sketching/testing, not composing. and all the examples are rnd-generated by the machine, not composed; you could produce more and more...)

some links: Brownian bridge - Wikipedia, EN.WIKIPEDIA.ORG -> in OPMO
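just to show the basic idea behind those bridges, a minimal sketch (NOT the machine itself - the function name, the step size and the integer output are my own assumptions for the illustration): a random walk that is pinned to two fixed values, so it always starts on START and ends on END.

;;; minimal sketch: a discrete brownian bridge between two fixed values,
;;; returned as integers (e.g. midi key numbers). assumes n > 0.
;;; start/end = the two fixed points, n = number of steps,
;;; step = maximum random increment per step
(defun brownian-bridge (start end n &key (step 2))
  (let* ((walk (loop with x = 0
                     for i from 0 to n
                     collect x
                     do (incf x (- (random (1+ (* 2 step))) step))))
         (drift (car (last walk))))
    ;; subtract the linear drift so the walk ends exactly on end,
    ;; then tilt the whole line from start to end
    (loop for w in walk
          for i from 0
          collect (round (+ start
                            (- w (* (/ i n) drift))
                            (* (/ i n) (- end start)))))))

;; example: 13 values that always start on 60 and end on 67
(brownian-bridge 60 67 12)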
-
another example/experiment - you hear the same "brownian phrase" 4 times, but mapped onto different tonalities... untitled.aiff
a) diatonic/major
b) blues-heptatonic
c) chromatic
d) messiaen modus
greetings andré
-
thanx, but just a project/coding sketch - nothing serious
-
very nice, janusz!!