Arbitrary Math etc
  • The majority of the Audulus UI layer is actually Lua :-). It's a wonderful language!

    As for using Lua for DSP, I'm mainly concerned about performance. In your example, would Input1, Input2 and Output1 be scalar values (floats) or vectors of samples?

    - Taylor
  • To the general audience: skip this if the innards of Lua do not interest you :)

    Here lieth the beauty that is Lua: you do not have Lua do the math. Instead, you implement userdata types for each and employ the metatable __index and __newindex methods on the C side for them. You also have to instantiate the type tables from the C side (a Lua safety intent - although, as it turns out, once instantiated you can add methods to the metatable from Lua! There is a way to make the metatable read-only once instantiated though). You use lua_newuserdata to instantiate a type, luaL_setmetatable to set its methods, and luaL_checkudata to validate a Lua ref to an instance before executing the intercept methods. Set this all up per type something like this (note: single thread, single Lua state here, modify the initialized flag logic to support multiple states, also you should probably check the return from luaL_setfuncs before discarding it; Lua will dereference the table left TOS after the return :):

    static int __schema_initialized = 0; // single thread, single Lua state assumed (see note above)

    static const luaL_Reg schema_metas[] = {
        { "__index",    schema__index },
        { "__newindex", schema__newindex },
        // add other operators, e.g. "__add" for '+', here
        { NULL, NULL }
    };

    static void lua_schema_init(lua_State *L)
    {
        if (!__schema_initialized) {
            luaL_newmetatable(L, "schema_value");
            luaL_setfuncs(L, schema_metas, 0);
            lua_pop(L, 1); // drop the metatable left on the stack
            __schema_initialized = 1;
        }
    }

    Now you can be polymorphic on the inbound connections. Outbound may take a little more local logic. A tricky bit that will depend on your architecture is handling the step-wise setting of values you need in parallel, e.g. in mixing sample vectors together. Some sort of colored Petri net is useful here for the rendezvous management.

    btw, this little example is extracted from the core of my Lua module that takes a (moderately cleaned) C header with struct definitions and makes userdata instances that map so they look like a normal table (aTable.var1 = aTable2.var3 etc) but execute at C speeds. I use this for processing my connection pipe messages so the data stays in C struct format, no translating to Lua tables required. Nor Lua gc. Very handy. The header file is also used by the method implementation code so there are no double entry error opportunities.

  • Very cool. I think what you're describing is fairly similar to the way I wrap my C++ stuff out to Lua. I did all this template fanciness, which took far too long to implement, but it works.

    So I could create a C++ class for signal vectors that would be wrapped out to Lua. There would be relatively little overhead in the dispatch to the vectorized math operations, so that would be fine performance-wise.

    But I'm concerned about how this would interact with the planned single-sample feedback loops, which would require the math to be executed once per sample. In that case, using the standard Lua implementation would result in significantly reduced performance. LuaJIT would do a bit better, but jitting isn't allowed on iOS, so LuaJIT falls back to its interpreter. I suppose it's a question of whether LuaJIT's interpreter performance is good enough, which is just something I should test.

    By the way, I looked up what a Petri net is and my brain started hurting :-\. What exactly would it be used for in this case? And what's rendezvous management?

    - Taylor
  • Yes!

    wrt sample by sample feedback loops (filters? reverb?): I'm guessing you want to be able to model complex algorithms inside a patch that involves multiple graphical objects. This is a nice problem (as in "may you live in interesting times" nice).

    Note: I ramble. This is a topic that has occupied me for decades, so bear with me or tell me to shut the f^ up. Seriously. I won't be offended.

    I think there are solutions, but my approach (being the ancient fart that I am) may seem a little creaky to you but I promise, run time is stupendous: using the nice string and hash functions in Lua, add in a FORTH kernel, and compile the vector and recursive (fractal?) expressions to FORTH. This produces essentially a list of lists of function calls against a virtual dual stack PDA computing model, so you can use minimal preamble in the lowest level native C functions (void arguments and returns, there's a #pragma for that). No need to expose it to the users.

    Then, since I expect your audio actually arrives in clumps but you envisage processing it as a pass around all objects per sample, use these little code strings in the loop. You could also render the patch topology transfer functions this way. So, use Lua to orchestrate the rendering model in FORTH, which orchestrates the evaluation steps in C fragments. Make free use of precomputing everything possible. And voila! User defined yet optimal and fast.

    Theoretical Diversions:

    If you haven't already, it may be very informative to delve into how verilog simulators work. They have to solve pretty much this exact problem.

    As to Petri nets: like many such things in CS (FFT for example), the theory is horribly opaque but the implementation can be so trivial as to resemble magic. Essentially, you pass little structs that wrap the data around the connections. You label these with an incrementing data set code (this is the "color") and you only trigger reading and processing all the inputs when they are all the same color. You then remove them from the pipes and wait for the next color completion.

    That is essentially what the rendezvous bit means - they all have to arrive together before you run the next step of the algorithm. The color defines what "together" means. Otherwise you can get nasty effects - and broken DSP is always very nasty as you know :) - like A[n] + B[n] coming out instead as A[n] + B[n+1], then A[n+1] + B[n+1], etc.

    PS: if FORTH seems archaic to you, it may be of interest that FORTH ran most of the satellites and telescopes in the early days (1976 on, in the COSMAC 1802 µP), and is still in use today in a dialect named "PostScript". And many scientific instruments. The language evolved out of a need for time-efficient and space-conscious solutions on resource-limited systems in the days of tiny micros. It is still in use today for much the same reasons: our processors may be blindingly fast in comparison, but - as you are concerned - what we want to do with them still runs up against resource limits, yet we want something symbolically higher level than assembler to implement them with. It is, however, one of the sharpest tools in the toolkit (no handle, all blade), exceeded in foot-slicing capability only by Supervisory mode assembly code. And you can persuade it to do that too if you dare. Do NOT expose to users!!!

    I have a public domain FORTH interpreter written in ANSI C, half by John Walker at Autodesk and half by myself, that I have used in everything from an 80186 (fitted a build into single 64K segments!) on up to embedding it in my current big projects, simply because it makes the most powerful scriptable debugger* for real time systems I have ever encountered. In fact, I latched on to Lua so rapidly once I discovered it because it shares many characteristics with FORTH, other than not being RPN syntax. Lua is stack oriented just like FORTH is, but just takes a bit longer to invoke the C services. Both Lua and my FORTH are also implemented in ANSI C rather than native assembler, so they are eminently portable. In fact, Google "Lua and FORTH" and you'll find I am not alone in using them together.

    * - for Real Time stuff, the conventional breakpoint debuggers can only really offer you a post-mortem after things have broken badly enough to recognize a breakpoint condition; this is often seconds later, after thousands of passes through the breakpoint area. Instead, I set things up so I can watch them actually behaving. Think Doc Edgerton's stroboscope. Greatly reduces the time it takes to home in on where exactly things are going wrong. Removes the need for stop, restart, set up the error conditions, wait for breakpoint again; also obviates the need for edit - compile - run cycles changing printfs. And of course, being me, I mapped all my data structure headers into FORTH with a script so I can watch data structures while the system is running. Everything I do is event driven and cooperatively multi-tasking too. Another thing I like about Lua - coroutines rock!

    By now, you would not be surprised to know that I can call FORTH from Lua and Lua from FORTH in my projects. And of course both go to and from C/C++/ObjC. I guess I believe in the best tool for the job vs one hammer to rule all nails, and in the black box bind them.
  • Perhaps the most accessible description I have found is by the esteemed mathematician Mark Chu-Carroll. See here:

  • @dwarman Hi, fellow Forther! I haven't programmed for a couple of decades, but back in the day Forth was my weapon of choice.

    For those who are saying "Huh? Forth?" here's a very brief introduction focused on the one-to-one mapping between Forth code and visual programming (i.e. Audulus)

    Apologies for the hijack, you may now return to on-topic discussion. :-)

    (Sorry for the occasionally rough HTML on the site - I'm amazed it still renders at all after such a long time. Also the email address is dead, so don't use it.)
  • @gordon - hi! And thanks.

    Does nobody sleep any more?

    For those who want to play with a visual FORTH (and Perl) system and have OS X 10.6.8 handy, I might be able to get you a copy of our VNOS. Graphical programming language, data flow connectivity like Audulus, widget behavior over-rideable with scripting code snippets. Sadly I cannot develop it any more, but I still use it.

    (I'll provide URL on specific requests, not openly here)

    Just happens to also include a set of MIDI processing widgets as well. And a VNOS MIDI app I call LiveWire - a looper and mapper for my KX-5. Later OS version will run but without the Perl extension. Perl 5 embedded runtime has hard paths in the binary that changed with each OS and I have yet to figure out the rebuild incantation for Lion.

    And with that perhaps the hijack could continue on another thread? If you want to play you'll need some help from me. And I do not have a forum of my own.
  • dwarman, very interesting stuff! I installed gforth just to mess around a little.

    I'm assuming forth gets its speed because it doesn't do any type checking, right? So how does the forth interpreter protect against trying to apply a function to a value of the wrong type? Is it just "undefined behavior"?

    The goal with the single-sample feedback loops is to come closer to how an analog modular synth would work. Currently, Audulus executes a list of DSP objects each rendering quantum (typically 128 samples per quantum). Since jitting isn't allowed on iOS, it would seem the best approach for single-sample feedback is to reduce the dispatch overhead to each of those objects down to a single function call (and probably use the fastcall attribute), much like a threaded interpreter. Do you think using forth would yield a faster implementation?

    - Taylor
  • Well, like I said, FORTH is the sharpest tool in the box. No type checking. Possibly some pointer validation, but don't bet on it. Think of it as assembler. Which is one of the main reasons I consider it totally inappropriate for large scale program structure / infrastructure, but ideal for code-snippet styles. You have to rely on the upper level code being very certain it is providing valid data. Which may be a deal-killer for many these days. OTOH, it's not uncommon to see #asm { ... } blobs in C DSP code, where the compiler cannot take advantage of special hardware quirks / features of the underlying chip architecture. They are equally un-type-checked.

    But yes, that's one reason it can be so fast. The trade-off is testing time. One tends to develop a pro-active coding style too that helps correctness. For example, FORTH words do not intrinsically have any knowledge of arguments or return values on the stack, and a common bug is to miss a swap or a push or a pop or two. My answer to that is to use the stack comment ( a1 a2 -- r1 r2 ), which is merely a text convention in standard FORTH, to overlay named local variables on the stack a1, a2, r1, r2, and clean up the stack so it has the right count of values on return, including FORTH exception unwindings. And I keep the comments attached to the vocabulary entries so I can search them for API help. Also can define local variables for internal work. Also nested word definitions. Also auto-managed dynamically allocated variables that get properly cleaned up when they go out of scope, primarily for auto-sized string data types.

    [:] gimme ( arg1 arg2 -- result : a simple example of local variables returns 4 * (arg1 + arg2) )
      [:] nestedfunc ( arg1 -- result : example shows locals really are locals )
        arg1 @ result !
      [;]
      local sum
      arg2 @ nestedfunc arg1 @ + sum !
      4 sum @ * result !
    [;]

    [:] locals ( arg -- result : another example )
      arg @ result !
    [;]

    4> help gimme
    gimme: arg1 arg2 -- result : a simple example of local variables returns 4 * (arg1 + arg2)
    4> help example
    locals: arg -- result : another example
    gimme: arg1 arg2 -- result : a simple example of local variables returns 4 * (arg1 + arg2)
    4> 3 nwords
    locals: arg -- result : another example
    gimme: arg1 arg2 -- result : a simple example of local variables returns 4 * (arg1 + arg2)

    The normal colon definitions are not affected, hence the new compiler words [:] ... [;]. Notice that nestedfunc and its variables do not show up in nwords or help - they are pinched off inside the vocabulary definition, also go out of scope outside its own definition so which arg1 is visible is properly managed.

    You won't find this in gforth. ANSI FORTH has something kinda like this, but IMHO mine is better. Mine also preceded theirs, but I did not know they were working on it. And I did know the people, just never thought to mention it.

    It's not enforced, and it does slow things down a little bit (extra @ and ! ops, but those are really cheap), but it really helps code clarity and correctness. You can still do stack dances if you want to get minimal and obfuscated, but ... like my old compatriot Tony Hoare says, get it right, instrument it, then get it fast. Or: avoid premature optimisation (Knuth's version? Dijkstra's? I forget).

    A philosophical aside: I hate edit - compile - load - run - observe - ponder cognitive loops. So VNOS - like Audulus (you obviously get this) - is an always live, behavior-changes-as-you-interact-with-it system. The cognitive loop contracts to edit - observe - ponder. I call it exploratory programming. We won some IBM autonomous computing awards for it. Scriptable compute nodes are a big part of this. It let us deliver some things in days that would take months if done conventionally.

    It takes little time to grok FORTH but much more to become fluent and production-ready (like anything else); it might be better to keep that for times when you need to let your back-brain do its thing and keep out of its way. If you enjoy gforth, I could confuse you nicely with a command line version of mine. Just close enough.

    This is a fun chat. I am getting to worry it might also be a bit of a distraction for you (is ADHD contagious???). Wrapping your quanta in Lua, and maybe a simple expression math node, may be (more than) enough for the moment. Will be useful also for algorithmic MIDI (want! want!). Your call.
  • Forth doesn't type check. If you as the programmer see fit to multiply a character by an address and subtract a Boolean, that's your responsibility. Chances are, that sort of trickery will result in a system crash. Forth is fragile like that. It enforces good programming practices and a thorough understanding of the code. Forth has applications in safety critical systems - situations where dodgy code can result in fatalities.

    It is fast because conceptually it is an assembler for a two stack processor ie one which has the usual return stack, but a second data stack rather than a set of data registers. Stack processors are lean and mean, even in emulation.

    Every Forth programmer invents local variables, mostly because they can. ;-) And control structures. And new ways of making stuff. It can be addictive. As a hobbyist I spent a year of evenings writing an extension set of Forth data flow and control flow words to give it "undo" - optimised rewind - so that I could put the test after the branch (take an umbrella if it is going to rain) by judiciously saving the state of the data stack to a third "history" stack, and then applied these structures to a set of non-deterministic string pattern matching words that worked together to form a task specific language within Forth. 10k of source code. Most inscrutable Forth you have ever seen, and Forth can be pretty damn inscrutable at the best of times!

    (If you're curious, here's a version that was tidied up and ANSIfied by another old-time Forther. )
  • That's one of the things about FORTH and forthers: you start with a FORTH distribution, and you have a problem to solve. The workflow results in effectively building out a custom language that describes your problem, and in so doing describes your solution - a methodology I have taken out into the rest of my world with great results. FORTH is a great tool writing language.

    One result of this is that every forther ends up with a variation of FORTH that is personal. Smalltalk has the same workflow and same results, although from the other end of the abstraction spectrum and much more of a heavyweight. Both differ from almost all other scripting languages - Lua included - in that you can actually modify the language, with new semantics and operators, and not merely add function libraries, in the process of developing your solution. The downside of this is that it is difficult at best to share code in teamwork and community settings; most FORTH work is done as a singleton.

    What a digression this is turning out to be. Almost compulsively so, for me. Sorry, Taylor. Your move.
  • Back on the original subject: it just occurred to me that an algebraic expression evaluator would probably run at least as fast as a net of primitive operators and constants, for anything more than 2 operators in the expression.
  • @dwarman, no worries! Forth seems quite interesting... I'll keep reading more about it. Though I think my mind is melting a little trying to understand everything here. By the way, what's a VNOS?

    About your last comment: I would think an algebraic expression evaluator could always run as fast as a net of primitive operations, assuming by "net of primitive operators" you mean the nodes and connections that make up an Audulus patch. The algebraic expression could just be compiled into a network of nodes. Am I missing something?

    @Gordon, your system with undo sounds like backtracking at the language level, no?

    - Taylor
  • You asked for it. Perhaps you did not anticipate the scale of the answer. Essentially VNOS is closely interwoven with my life story. But for what it's worth, and believe it or not briefly, here it is:

    VNOS - Visual Network Object System - is the graphical programming platform I and my partner developed over the course of 1989 - 2007 (a loooong road, also a looong story!). It is implemented in ANSI C for portability - we had Mac OS 9, Mac OS X, Windows from 3.1 to XP, Amiga, and Linux versions running by the end. It was a tough nut to crack.

    It started out as VirtualStudio, a way to graphically manage connections between MIDI devices over our Lone Wolf MidiTap network (AES Tech Award nominee 1991). Demonstrating drawing a line on the Mac screen between a keyboard icon and a synth module icon and then playing over it drew a standing ovation that year. MidiTaps were popular with the high end pros, went on several high profile tours (INXS last tour, ELP reunion, ...), and went into several composer studios. Many are still in use, including (of course) my own. I remember the pain of managing my studio before them (which was why we did it in the first place), and frankly do not understand why it is still the only one to do it. I was told by the director at CCRMA recently that the MidiTap is still the standard by which they evaluate MIDI connectivity technology. I occasionally see them on e-Bay but from people who do not know what they have. I feel no need to bid on them. My closet is full of them.

    A MidiTap network is a fiber optic linked collection of 4 port (IN and OUT pairs) nodes, but functionally the network operated as a single large distributed box with a fully soft general connection topology - any combination of channels from any combination of ports on any combination of nodes mapped to the similar fuzzy set of outputs. And all channels and ports alphabetically named and ordered in the 16x2 LCD UI so once named you did not have to remember port cables - or change them, or have to do a linear search through unordered lists to find them - to make or break connections. A truly peer to peer multi-player system, intended for jamming, concert performances, and general hardware sharing in large studios. Yet also applicable to smaller ones - at its height, my home studio had an 8 box (32 ports) MIDI network with 3 computers, now I have downsized and only have 3 boxes. And my Mac and iPad.

    One big feature was that only a single duplex fiber is needed to link pairs of boxes together, at up to 2Km separation, with no possibility of ground loops or electrical transients. So you have boxes up under your keyboards stage front, and one skinny and almost invisible cable to the big rack backstage, and another out to the sound boards and more big racks at the back where the engineers can manage the connectivity. ELP's big rack included 32 Korg M1's! Hard to imagine what the cable run would have looked like without our stuff - in fact, Brian Bell said it would have been impossible to do it without the MidiTaps. Our plan - and we did get to a working prototype - was to include audio in the network traffic when data rates got high enough.

    But for strange political reasons we had to abandon it in 1992 and instead retargeted the technology to pro audio in the form of a licensed network control device set and management tool for large audio installs. We had over 40 licensees lined up, starting with QSC (and their brilliant "hear the light" ad campaign), and Rose Garden, Brigham Young, and the GE Palace (BC, ?) were three of the 100+ amplifier stadia we set up. The installers could take a laptop down into the stadium itself and set up the sound from there, a process that they said took only days with VNOS instead of weeks without. Google "From Rock To NOC" for fun.

    The management app from the MidiTap days evolved into VNOS and served the needs of stadium sound engineers - all of whom think their way of viewing things is the only way, so we gave them a tool that let them manage their way. We got Paul Allen interested and moved to Seattle in 1994. I served on the AES SC-10 standards committee for a couple of years while we were implementing the protocol. I also gave enough input to the MIDI Machine Control standards group to warrant a credit mention in the official document.

    Another strange political mess ensued in 1997 and we closed till 2001. That time we refocussed on only the software and a more traditional IT market. VNOS grew again with the addition of scripting and the Widget Factory. Again the tech got high praise and awards but we once again demonstrated a poor ability to choose our strategic teams and closed for good in 2007.

    (Aside: in the 1999-2000 interim I did the design and FPGA prototype for what later became the DICE II FireWire chip from TC Electronic that they and Alesis use in their FW devices. And again in 2008 I did the MIDI DAW protocol parser and surface interface for Alesis's Master Control.)

    I have ended up with several juicy patents we never had the cash to go after licenses for. Like: a graphical control system reflecting and controlling the state of a setting in a hardware device. Everybody does it now, but we got that issued in 1995. And for the deterministic network communications protocol, MediaLink (1993). We've since lost all the trademarks and domain names. Sad.

    On the bright side, all this remarkable activity around the iThing platform is really invigorating. I want to play, bad. I also (a bit of ego here) think I have a lot of crusty experience to contribute in my copious spare time (from Audio engine design at Nintendo :). I see many mistakes from the 80's being repeated (not by all, but frustratingly by enough, including to my surprise Apple), re limited vision of how to use and present MIDI and audio.

    Audulus is really great for the audio part - just want MIDI at the same level. In fact, Audulus is very close to what I wanted to do in the 80's for my Masters Thesis, but could not get any professor to agree to chair it because it was too cross-discipline (CS, EE, and Music). So basically, I abandoned that and essentially built my own business around working vaguely in that direction.

    You can wake up now.
  • Re part 2: actually, I was envisaging the evaluator just operating on the scripts' input and outputs, so as to avoid the inter-node connection mechanism overheads. But the other way has its own appeal too, and doesn't really affect the user experience. The problem without the expression object, and we discovered this in VNOS too, is that the screen gets very messy when you have to use lots of very primitive objects to compose your algorithm. There's a happy medium between inscrutable large monolithic black boxes and large unreadable graphical meshes of interconnected gate level circuitry. The expression node goes a good way to helping find this sweet spot. Opening up the full language runs the danger of creeping up to the monoliths, but also enables some pretty cool things for which the pure primitive network would be too slow.
  • Wow, pretty cool history there. It's interesting how forces independent of the technology itself shaped things. I've got a TC audio interface, so I'm using your DICE II chip :-).

    Also, don't let me repeat those mistakes from the 80's that you mentioned!

    Re MIDI in Audulus: I've hesitated on adding MIDI connections to Audulus mainly for simplicity's sake (both in code and in user interface). But also, the elegance of having a system with only one data type, the signal, appealed to me. I'm open to considering MIDI or some other event-based connections, but I feel like there's still a lot of ground to be covered with the polyphonic signals in Audulus (arpeggiators, granular stuff, etc).

    Re part 2: I agree about the math nodes getting messy without an expression object. Probably I'll just start with a simple expression evaluator that doesn't allow feedback. It's easy for a user to understand and won't run afoul of any of Apple's iOS review restrictions (they've rejected apps which expose forms of textual programming in the past).

    - Taylor
  • Actually I'm with you on the purity of data type issue. But you need to maintain a concept of time in the sense of clock / time code / song position so audio effects can track the rest of a song. I'm thinking of synced arpeggiators here.

    The MIDI bit probably would lend itself to a separate app along the same lines, but one that can do interesting things in conjunction with Audulus.

    Want some comments on the MIDI Keyboard widget then? It is all you need for purity's sake, does the input connection part of it, but does have some room to grow still.

    Codea got past Apple. I think the key was sandboxing the Lua state environment when loading and running user code so it could not access some library primitives and iOS toolboxes directly. They also had to remove project sharing (other than by text copy/paste between apps, or using iExplore).
  • The Time node reports the host's song position. A synced arpeggiator node could have a clock input.

    Feedback is always appreciated! So let me know what you'd like to see in the Keyboard node.

    Re: Codea. That's even more restrictive than I thought. Not being able to share patches in Audulus would really cripple the app. That pretty much rules out exposing any sort of interpreter, don't you think?

    - Taylor
  • @Taylor. Yes, backtracking at the system level. (Although Forth has about as much respect for levels as it does for data types. Meaning is in the eye of the programmer.) Speaking of system levels, look up Open Firmware in Wikipedia. If you had a pre-Intel Mac it had a Forth hidden away at a very deep system level.
  • Hmm. Hadn't thought of that. OK for the expression, I would think, but you're right, may be an issue if scripted in Lua. I think Apple's concern was that one could distribute Apps in a way that bypassed the App store. You could argue that is not the case here, especially if the vocabulary you expose is restricted to Audulus-specific functions and math.

    It is entirely possible some of these are already implemented but I just haven't noticed how to invoke or interpret them yet.

    In any case, these are suggestions, not imperatives.


    1: legato, poly - also want a mono retrigger mode. For glissando. Possibly mono/poly is one selector, retrig / gliss / legato another. With a gliss / legato change rate knob.

    2: input select: need to show a list of available VM / CM sources with the ability to check or uncheck each individually. Look at Little MIDI or NLogPro for examples. (functionality, not style!:).

    Each instance should have its own MIDI source list. Wrap a Class for this, you'll be wanting it everywhere, not just on the keyboard. Possibly even have an App level one and show that by default (inheritance?) here, do copy on write only when the user changes it.

    The default on instantiation would be to have all checked and MIDI channel Omni, which would then behave as you do now. But to get full system level routing capability you need the selectivity so something like midibridge can do its thing without promiscuous listening bypassing it.

    3: I know there is CC learn mode now (thanks!), but it would be nice to have hold, mod and volume outputs from the keyboard too. These are so standard having to learn them every time would be a bit of a chore. Possibly also channel aftertouch. If not already an artefact of Velocity, a trigger output would also be nice.

    CC learning: (this is from how I implemented it in my MIDI interactive WinAmp visualizer)

    1: can't tell when it is learning or what it learned. Be nice if in normal display it showed CC # if one is assigned, say when the connector mode button on the left is pressed to show port names.

    2: Pop up a little dialog showing MIDI source, MIDI channel, and CC # with a learn button, so one can either learn by example or key in the CC # manually; would also show the result of the learn operation.

    3: Highlight the learn button until it acquires a message then drop out of learn mode.

    Patch Change at the App level:
    Preferably selecting against an orderable / assignable map list of patches rather than just the order they fall in alphabetically. That can change and then sequenced patch changes from other Apps or MIDI files get broken.

    If / when you implement MIDI generation, similar things apply, i.e. a list of possible destinations, this time unchecked by default, and a button to enable or hide your own Virtual MIDI source (so other Apps still stuck in promiscuity don't see you). At both the App and widget level.

    Finally, for debug, a MIDI Event monitor added to the Metering set. Checkbox to show events at the App level, input connector for more specific monitoring. Need some thought on how this works exactly, depends on what else you do with MIDI. Our VNOS monitor shows Events in a scrolling text log format. Is very helpful. MIDI is just another Event that happens to come from a MIDI widget.

    Hope this helps and not swamped in detail.


    - David
  • Re Open Firmware: the guy who implemented that is a friend of mine. At one time he had the PPC actually bootable into Windows NT (native, not emulated!), but Apple nixed it at the last moment. If you boot into OF you can inspect the PCI trees and lots of other goodies. And, FORTH being the double-edged no handle sword it is, brick your Mac. But IIRC it was extremely handy for peripheral card debug, also for the folks who did Yellow Dog Linux.
  • Mitch Bradley? I met him at EuroForml in 1990. Great guy. He sang the Open Firmware song. Impressive bass-baritone.
  • I've used Open Firmware actually, but just to do mundane things like booting or maybe clearing some non-volatile memory (I can't remember). I think once I got it to list all the PCI devices in my Mac.

    David, (1) sounds like a neat idea - my nord lead can't do a poly / re-trig mode, which I could see as being useful (one note at a time, but with release). What's the difference between gliss and legato modes? Note that a change rate can be achieved by putting a low-pass filter on the pitch output, though I've wanted to make a Glide node for some time to make that easier.

    (2) Good idea. I'll put that on my to-do list. For now, I'd suggest just using separate MIDI channels.

    (3) I'm trying to keep the keyboard module pretty simple. We could have a more advanced keyboard module with the conveniences you propose. Aftertouch would also be a good one.

    Your MIDI CC learning ideas sound great. I think the first thing I can do easily is highlight the button that is learning and provide better visual feedback as to what's going on. A MIDI event monitor would be great too :-)

    Regarding patch changes: I agree the list of patches should be reorderable -- I've wanted to do that for a while. However, given that loading a patch is a heavy operation (a patch stores its entire history, and assets like samples), it's unlikely that there will be support for sequenced patch changes. Instead, MIDI program changes are mapped to "Presets", which use Audulus's branching undo system to switch between different patch versions.

    - Taylor
    1: Modes. Glissando mode is like legato, except that the note is retriggered in semitone increments until it reaches the target note, like running your finger up the keyboard. Legato slides the pitch smoothly without retriggering. I do not think glissando can be achieved with an LPF; it needs a timer and a counter. The other problem with using an LPF for the change rate: IIRC traditional synths use a different transfer function than an LPF would provide. Possibly just a capacitor charging curve, not a roll-off curve. You'd have to listen to see if it matches.
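    The timer-and-counter idea could look something like this: a step timer fires at the gliss rate, and each tick moves the current note one semitone toward the target, signalling a retrigger. A sketch in C with hypothetical names (legato would instead interpolate the pitch continuously with no retrigger):

```c
/* Hypothetical stepped-glissando state: MIDI note numbers only. */
typedef struct {
    int current_note;
    int target_note;
} gliss;

/* Call on each step-timer tick.
 * Returns 1 when a new semitone is reached (retrigger the envelope),
 * 0 once the target note has been reached. */
static int gliss_tick(gliss *g)
{
    if (g->current_note == g->target_note)
        return 0;
    g->current_note += (g->target_note > g->current_note) ? 1 : -1;
    return 1;
}
```

    The timer period sets the gliss rate, and the counter is just the distance left to the target, so the total time scales with the interval, as on traditional synths.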

    3: Keyboard: a second more featured one, with the extra outputs, works for me. When you can get to it.

    4: Just being able to see it learning, and to see what it has learned. When I come back to a Patch, how do I know which of my surface pots turns which Audulus pot?

    5: Patch Changes: the terminology might need a little adjustment. Your Patches correspond to different synthesizer architectures. This is new, and I can understand them being heavyweight to change between. A conventional synth patch does not involve different architectures, just different settings for the pots and switches. So saving sets of settings for each Audulus Patch as MIDI patches would not be so weighty, and responding to a MIDI Patch Change would just select a settings set within the current Audulus Patch.

    Re MIDI source selections: on reflection that is workable if only done at the App level. At minimum a switch to enable or disable port scan and attachment would suffice since most Apps that provide port management do provide the ability to select (or not) Audulus. And filtering options in say MidiBridge or freeEWI can satisfy the rest, short term.

    Other MIDI / Keyboard issues:

    key follow inputs to the ADSRs and filters. To some extent Audulus can already construct this out of a mapper on the pitch output from the keyboard. But most synth programmers are used to a knob on the envelope or filters. Especially for resonance tracking. Just a thought.

    Trigger out from the keyboard. Currently this seems to be implicit in the Velocity output. Make sure there are velocity output blips in Glissando mode so the oscillator and ADSR sync properly on each attack, since there is only one note input for the full gliss and so no velocity changes are provided. That would then not need a separate trigger connector.

    You are remarkably tolerant of my ramblings!


    - David
  • Another name for Glissando is Portamento. I forgot that. There's already a thread, "Portamento and PWM", on the board about how to construct such a glide. A sub patch would be a nice thing for that.
  • Glissando and portamento are often confused, the same way that guitarists call a whammy bar a trem bar when it does vibrato, not tremolo. Portamento is a continuously variable pitch, and glissando sometimes means that too, and sometimes it means a stepped change - running your fingers across a keyboard.

    Getting a perceptually steady portamento from frequencies would require log and antilog either as nodes or as options in the mapper node. This would allow equal intervals between the pitches of a stepped glissando.
  • I've been fiddling with a way to quantize continuous hertz ranges to programmable hertz values, to do pitch and scale quantizing in Audulus. Yes, Afta's quantize sub patch can do the quantizing, but only to atonal values.
  • I see you are going ahead and implementing expressions. Cool! Thanks for listening! I'd say "can't wait", but it needs to be right first.
  • David, sure thing. I actually ended up writing my own little parser and interpreter for the expression, which was fun. Kinda took me back to my attempts to design a programming language and its interpreter in college.

    Anyway, I've got most of the functions from math.h in there. I still need to implement proper error reporting (for syntax/semantic errors). Let me know if there's anything else I need to do so it's up to snuff when I release it :-)

    - Taylor
  • I too enjoy writing little language parsers. For fun, sometimes even for profit!

    Did you notice how you push parts of the parse onto a stack then pop them as you evaluate them? That is pretty much what FORTH does explicitly. You had to turn the infix expression from the user into an RPN sequence to actually evaluate it. Things are connected more than we know ...
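    For the curious, the infix-to-RPN step described here is Dijkstra's shunting-yard algorithm. A toy C version for single-digit operands and + - * / ( ), plus a FORTH-style stack evaluator (this is not Taylor's actual parser, which handles full expressions and math.h functions):

```c
#include <string.h>

/* Operator precedence: * and / bind tighter than + and -. */
static int prec(char op) { return (op == '+' || op == '-') ? 1 : 2; }

/* Shunting-yard: convert infix ("1+2*3") to RPN ("123*+"). */
static void to_rpn(const char *infix, char *out)
{
    char ops[64];                         /* operator stack */
    int top = 0, n = 0;
    for (const char *p = infix; *p; p++) {
        char c = *p;
        if (c >= '0' && c <= '9') {
            out[n++] = c;                 /* operands go straight out */
        } else if (c == '(') {
            ops[top++] = c;
        } else if (c == ')') {
            while (top && ops[top - 1] != '(')
                out[n++] = ops[--top];    /* flush back to the '(' */
            top--;                        /* discard the '(' */
        } else {                          /* binary operator */
            while (top && ops[top - 1] != '(' && prec(ops[top - 1]) >= prec(c))
                out[n++] = ops[--top];    /* pop higher/equal precedence */
            ops[top++] = c;
        }
    }
    while (top)
        out[n++] = ops[--top];
    out[n] = '\0';
}

/* Evaluate RPN with an explicit value stack -- exactly what FORTH does. */
static int eval_rpn(const char *rpn)
{
    int st[64], sp = 0;
    for (const char *p = rpn; *p; p++) {
        if (*p >= '0' && *p <= '9') { st[sp++] = *p - '0'; continue; }
        int b = st[--sp], a = st[--sp];
        switch (*p) {
        case '+': st[sp++] = a + b; break;
        case '-': st[sp++] = a - b; break;
        case '*': st[sp++] = a * b; break;
        case '/': st[sp++] = a / b; break;
        }
    }
    return st[0];
}
```

    The two stacks make the connection explicit: the parser's operator stack defers work until precedence allows it, and the evaluator's value stack then replays the result in postfix order.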
  • [image attachment: 2433304-just_read_super.jpg]
  • @DCramer, probably what an artist friend of mine calls "dolphin-squeak". I talk about this stuff and all she hears is "squeak, squeak!". OTOH no way can I do what she does.
  • Exactly! And you wouldn't even believe what I do!