
About hujairi

  • Rank
    Advanced Member

Profile Information

  • Location
    Seoul, South Korea

  1. Hi, forgive me for these simple questions, but I'm curious to know the best way to display all the libraries I have. For example, I'm now trying to build new libraries that include the ambituses of custom-built instruments I will use, rhythmic patterns I am developing, and modes/scales I will be using. I have the following questions: 1. I'm trying to understand the difference between library, def-library, and create-library. I am particularly interested in better understanding def-library and create-library. Are the two functions very different? 2. If I want to see all my rhythm libraries, for example, how can I see what I've already got on my computer? Many thanks.
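Not an answer on def-library vs. create-library (someone with the Opusmodus documentation handy should confirm those), but for question 2, one language-agnostic habit is to keep each category of material in its own folder and list it by scanning the directory. A Python sketch; the folder layout and the .lisp extension are made-up placeholders for wherever your library files actually live:

```python
import tempfile
from pathlib import Path

def list_libraries(root, category):
    """Return the names of saved library files under root/category.

    The layout and the .lisp extension are hypothetical -- point this
    at wherever your Opusmodus library files actually live.
    """
    return sorted(p.stem for p in (Path(root) / category).glob("*.lisp"))

# Demo against a throwaway directory:
base = Path(tempfile.mkdtemp())
(base / "rhythms").mkdir()
(base / "rhythms" / "clave-patterns.lisp").touch()
(base / "rhythms" / "talas.lisp").touch()
print(list_libraries(base, "rhythms"))  # → ['clave-patterns', 'talas']
```

The same scan works for "ambituses", "modes", and so on, one subfolder per category.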
  2. Very sad to hear this. He was very inspiring and supportive. I learned so much from him.
  3. Just a small question, and I am not entirely sure it'll be a big help, but did you try quantizing the MIDI in a DAW first? I've had a similar problem once before, and just running it through a DAW and quantizing everything for some reason cleaned up the MIDI for me. I hope this helps.
  4. This is really amazing. Looking forward to experimenting with this!
  5. Dear Janusz, How about using the following four parameters? Length (according to range): wind direction (degrees): floating points, between 0 and 360.0. Dynamics: wind speed (m/s): floating points. Pitch: temperature (Celsius): floating points. Chance of randomly modulating length (according to range of humidity): humidity (percentage): floating points. Do you think this can be done? I've attached a screenshot of what I'm currently working with to receive weather API data. Thanks.
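The numeric side of that four-way mapping is just linear rescaling, independent of Opusmodus. A minimal Python sketch; the target ranges below are my own illustrative assumptions, not Opusmodus parameters:

```python
import random

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Hypothetical target ranges for the four mappings proposed above:
length_index = round(scale(270.0, 0, 360, 0, 7))    # wind direction -> length choice
velocity     = round(scale(12.0, 0, 30, 30, 110))   # wind speed     -> dynamics
midi_pitch   = round(scale(21.0, -20, 40, 48, 84))  # temperature    -> pitch
humidity     = 31.1
modulate     = random.random() < humidity / 100.0   # humidity % as a chance gate
```

The last line treats the humidity percentage directly as a probability, which matches the "chance of randomly modulating length" idea: 31.1% humidity means roughly a 31.1% chance per decision.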
  6. Hello, I am working on a sound installation for two voices (using Soniccouture's Geosonics VST plugin and a vibraphone VST plugin running through a computer). The source of the data is taken live from the internet and is actually API weather data from a city in South Korea. The piece will play for 2 hours every night between early December and early March. The trouble I am having is how to transform the different fragments of weather data into data I could use within Opusmodus. Here's the type of data I get for each parameter (the values are either floating points, integers, or T/F). All of this is on my computer as OSC signals. 1. wind direction: floating points, between 0 and 360.0 2. wind speed: floating points 3. sky code (i.e., clear, cloudy, rain, snow): integers 4. precipitation type: integers (codes for each type of precipitation) 5. temperature (Celsius): floating points 6. temperature minimum: floating points 7. temperature maximum: floating points 8. humidity: floating points (representing percentage, e.g., 31.1% = 31.1) 9. pressure: floating points (at the time of writing, the pressure of the place I was looking at was 1008.9, while here in Seoul the pressure is 922.8) 10. lightning: True or False My questions are: - How can I receive OSC signals within Opusmodus? Is there a function that allows me to do this? - I've never used the Live Coding function in Opusmodus, although I have basic knowledge of live coding in other environments such as SuperCollider, Sonic Pi, and Max/MSP. I would like to have the data drive the live coding functions to modulate the music as it plays live. I would really, really appreciate any advice I can get on this. I don't mean to have the community here 'do everything' for me, but I'd appreciate any direction toward successfully accomplishing this project. Thanks very much and sorry for any trouble, Hasan
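I can't speak to Opusmodus's own OSC support, but whichever tool receives the messages, the step after reception is always the same: dispatch on the OSC address and coerce each value to its proper type. A library-free Python sketch of just that step; the addresses and the sky-code table are invented placeholders for whatever your weather-to-OSC bridge actually sends (a real receiver could use a package such as python-osc to deliver the messages):

```python
SKY_CODES = {1: "clear", 2: "cloudy", 3: "rain", 4: "snow"}  # assumed coding

def handle_weather(state, address, value):
    """Route one incoming (address, value) pair into a state dict.

    Addresses are hypothetical -- match them to whatever your
    weather-to-OSC bridge actually sends.
    """
    if address == "/weather/wind/dir":
        state["wind_dir"] = float(value) % 360.0   # keep degrees in [0, 360)
    elif address == "/weather/sky":
        state["sky"] = SKY_CODES.get(int(value), "unknown")
    elif address == "/weather/lightning":
        state["lightning"] = bool(value)            # T/F arrives as 0/1
    else:
        state.setdefault("other", {})[address] = value
    return state

state = {}
for addr, val in [("/weather/wind/dir", 365.0),
                  ("/weather/sky", 3),
                  ("/weather/lightning", 1)]:
    handle_weather(state, addr, val)
print(state)  # → {'wind_dir': 5.0, 'sky': 'rain', 'lightning': True}
```

Once everything lives in one state dict like this, the live-coding side only ever reads the latest values; the network side just keeps overwriting them.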
  7. hujairi
  8. Fantastic suggestion. I'd love to see some focus on those same matters, too. I had asked in the past here on the forum about the Infinity Series and Spectral Techniques. Thanks for bringing this up.
  9. Thank you very, very much for this. I hope to create something new with Opusmodus very soon using this approach. Really looking forward to it.
  10. Hello, I am trying to find different ways of importing partials/frames data into Opusmodus. I am currently trying to use spectral analysis to analyze certain music/audio phenomena I haven't really seen analyzed before, to adapt into my own composition process. The sounds are generally archival or things I have synthesized on the computer through software myself (in .wav format), and are usually between 1 and 5 seconds in length. Is this a good length of time to analyze the spectral data of sounds? I just wanted to understand the following about the spectral tools within Opusmodus. I really would appreciate it if anyone could help me out. 1. Is it only Spear that we can use for spectral analysis? Is something like AudioSculpt also usable? 2. Looking at Spear, when we export the data, which of the following export formats do we use: 1. Text - Resampled Frames, 2. Text - Partials, 3. SDIF 1 TRC - Resampled Frames, 4. SDIF 1 TRC - Exact Interpolated, or 5. SDIF RBEP? 3. Once I have the data in the correct format, I think I want to create a library of the datasets I import so that I can reinterpret them in the future. If I want to use the "get-tuning" function, for example, and then map it across certain instruments within my planned orchestration, what would be an effective way of doing this? One of my main challenges is finding a way to keep all my data organized every time I want to use Opusmodus for a new project. I still find myself using Opusmodus in little blocks as I compose, and I hope to eventually get to the point where I can compose entire pieces using only Opusmodus. I really appreciate your help.
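For the text-export route, a parser sketch may help with building that reusable library of datasets. This assumes my recollection of SPEAR's "Text - Partials" layout (a four-line header, then per partial an index/point-count line followed by a line of time-frequency-amplitude triples); verify it against a file you export yourself before relying on it:

```python
def parse_spear_partials(text):
    """Parse a SPEAR 'Text - Partials' export (layout assumed, see above)."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    partials = []
    i = 4  # skip the assumed four-line header
    while i < len(lines):
        index, npoints = (int(float(x)) for x in lines[i].split()[:2])
        nums = [float(x) for x in lines[i + 1].split()]
        # Each point is a (time, frequency, amplitude) triple.
        points = [tuple(nums[j:j + 3]) for j in range(0, 3 * npoints, 3)]
        partials.append({"index": index, "points": points})
        i += 2
    return partials

# Hand-written sample in the assumed layout (not real SPEAR output):
SAMPLE = """par-text-partials-format
point-type time frequency amplitude
partials-count 2
partials-data
0 3 0.0 0.2
0.0 440.0 0.5 0.1 441.0 0.6 0.2 439.0 0.4
1 2 0.0 0.1
0.0 880.0 0.2 0.1 881.0 0.25"""

for p in parse_spear_partials(SAMPLE):
    print(p["index"], len(p["points"]), p["points"][0][1])
```

From a list of partials like this, extracting frequencies for a tuning or mapping them onto instruments becomes ordinary list processing, and each parsed file can be saved as one named entry in your dataset library.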
  11. I haven't yet had the chance to test MIDI controller integration with Opusmodus + Ableton, but I'd be curious to learn more about your findings!
  12. Excellent! Thanks for this.