#rust-gamedev

      • wrl
        Ralith: i mean if you're not running a thread in a realtime context you don't need a fancy realtime data structure
      • Ralith
        wrl: playing audio usually means you are, though...?
      • that is what the buffer-filling callback is for
      • wrl
        right but that's kinda down to what audio output api you're using
      • easier to wrap a callback-based API and use it in a read/write context than the other way around
      • Ralith
        what does that have to do with the question?
      • wrl
        Ralith: i'm not sure what the question was honestly
      • Ralith
        what data structure do you use to send data to the buffer-filling callback?
      • wrl
        Ralith: depends on the context in which the callback is running
      • atoav
        Does anybody know how to hide the mouse arrow on a window?
      • Ralith
        presume the context is running in a realtime thread, as portable APIs must, and as we've been discussing
      • wrl
        sure, so
      • well, so there seems to be a misunderstanding here
      • you don't "send data" to a buffer filling callback
      • you pass it a buffer to fill
      • depending on the API
      • in VST, your input and output buffers are arrays of pointers to the buffers themselves, and there is additionally the nbytes/nframes argument
      • Ralith
        I am aware of how the buffer filling callback is called
      • wrl
        okay so where's the part where you send the callback data?
      • you just pass in buffers and call the callback
      • you only need something like a ringbuffer if you need to move data from non-RT to RT or vice versa
      • Ralith
        I am asking how your application code, which decides what sounds should be playing, should communicate that information to the buffer filling callback, so that the buffer callback can fill the buffers it is called with with the appropriate data
      • I am not implementing a sound API
      • I am trying to play sounds
      • a lock-free ringbuffer is not an obviously trivial thing
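It isn't trivial, but the core of a single-producer/single-consumer lock-free ring buffer is compact. A sketch of the algorithm in Rust (single-threaded skeleton for clarity; a real implementation splits it into producer/consumer halves with interior mutability, or you just use a crate like `ringbuf`):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Minimal SPSC ring buffer: one side pushes (the game thread), the
// other pops (the audio callback). No locks and no allocation after
// construction, which is what makes the pop side realtime-safe.
struct SpscRing<T> {
    buf: Vec<Option<T>>,
    head: AtomicUsize, // next slot to pop; written only by the consumer
    tail: AtomicUsize, // next slot to push; written only by the producer
}

impl<T> SpscRing<T> {
    fn new(capacity: usize) -> Self {
        SpscRing {
            // One spare slot distinguishes "full" from "empty".
            buf: (0..capacity + 1).map(|_| None).collect(),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    fn push(&mut self, value: T) -> Result<(), T> {
        let tail = self.tail.load(Ordering::Relaxed);
        let next = (tail + 1) % self.buf.len();
        if next == self.head.load(Ordering::Acquire) {
            return Err(value); // full; never block in this path
        }
        self.buf[tail] = Some(value);
        self.tail.store(next, Ordering::Release);
        Ok(())
    }

    fn pop(&mut self) -> Option<T> {
        let head = self.head.load(Ordering::Relaxed);
        if head == self.tail.load(Ordering::Acquire) {
            return None; // empty
        }
        let value = self.buf[head].take();
        self.head.store((head + 1) % self.buf.len(), Ordering::Release);
        value
    }
}
```

The Acquire/Release pairing is the whole trick: the consumer's Acquire load of `tail` synchronizes with the producer's Release store, so the written element is visible before the index says it exists.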
      • wrl
        okay, so that's a more interesting question – and it's interesting to discuss it in the context of rust because we can discuss the notion of state ownership
      • there's a few different ways of modelling it
      • the audio thread can own its state and other threads can mutate the state by sending the audio thread messages
      • which are then processed, generally, at the beginning of process() or at regular intervals therein
      • the more interesting option, for me, is having the rest of the application own the state and the audio thread is just a reader/observer
      • you end up needing even fancier lockfree dancing in that case but it minimises the amount of housekeeping the audio thread has to do (and therein reduces CPU load, even if just a bit)
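A sketch of the first model, audio-thread-owned state mutated via messages drained at the top of the callback. The `Command` variants and field names here are made up for illustration; the pending messages would arrive over a lock-free queue in practice:

```rust
// Hypothetical messages the game thread sends to the audio thread.
enum Command {
    PlaySample { sample_id: usize, gain: f32 },
    StopAll,
}

// State owned exclusively by the audio thread.
struct AudioState {
    playing: Vec<(usize, f32)>, // (sample_id, gain) of active voices
}

impl AudioState {
    // Called at the top of every process() callback: drain whatever
    // commands have queued up, then render audio from `playing`.
    fn drain_commands<I: IntoIterator<Item = Command>>(&mut self, pending: I) {
        for cmd in pending {
            match cmd {
                Command::PlaySample { sample_id, gain } => {
                    self.playing.push((sample_id, gain));
                }
                Command::StopAll => self.playing.clear(),
            }
        }
    }
}
```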
      • atoav has quit
      • Ralith
        what is "the state" here, exactly? an intermediate buffer? some logical representation of streams to compose? something else?
      • iow, where do people do mixing/synthesis?
      • wrl
        just application state. whatever the audio thread needs to actually create sound. if you have samples to play, it's something that keeps track of currently playing samples
      • if you're doing a synth, that's your active voices, their states, whatever
      • Ralith
        so you'd do mixing in the realtime thread, typically?
      • wrl
        absolutely.
      • Ralith
        I suppose that makes sense; if you can't do your mixing/synthesis fast enough for realtime you're screwed no matter where you do it
      • wrl
        unless the algo is really, really complex, or there was a lack of care paid to performance while implementing it, you're not going to saturate your CPU
      • especially not if you're just mixing samples
      • mixing is just addition
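"Mixing is just addition" concretely: sum per-voice buffers into the output, here with a per-source gain (clipping/limiting is a separate concern, not shown):

```rust
// Mix several source buffers into one output buffer.
// Each source is a (samples, gain) pair; mixing is per-sample
// multiply-and-add, nothing more.
fn mix_into(out: &mut [f32], sources: &[(&[f32], f32)]) {
    for sample in out.iter_mut() {
        *sample = 0.0;
    }
    for (src, gain) in sources {
        for (o, s) in out.iter_mut().zip(src.iter()) {
            *o += s * gain;
        }
    }
}
```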
      • Ralith
        last time I tried mucking with audio I saturated my CPU pretty good doing naive convolution >_>
      • wrl
        yeahhhhh it's called naïve convolution for a reason haha
      • long IRs?
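For reference, naive (direct-form) convolution is O(N·M): one multiply-add per input sample per IR tap, which is why a long IR blows the budget. A minimal sketch (assumes non-empty inputs):

```rust
// Direct convolution of a signal with an impulse response: O(N * M).
// With a 2 s IR at 44.1 kHz that's ~88200 multiply-adds per output
// sample, which is what saturates the CPU.
fn convolve_naive(signal: &[f32], ir: &[f32]) -> Vec<f32> {
    let mut out = vec![0.0f32; signal.len() + ir.len() - 1];
    for (i, &x) in signal.iter().enumerate() {
        for (j, &h) in ir.iter().enumerate() {
            out[i + j] += x * h;
        }
    }
    out
}
```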
      • refold joined the channel
      • Ralith
        ignoring performance concerns, though, that's a scary amount of lock-free shared state
      • I guess it is what it is
      • yeah
      • wanted to make allowances for large echoey spaces
      • I still don't have a great sense of whether realtime convolution is sane, even with FFTs
      • wrl
        well, i think it's also important to think about DSP-owned and UI-owned data
      • for example – the DSP (audio) thread will absolutely need to update bits of state. oscillator phase, envelope position, playheads, etc etc
      • DSP-owned state should be read-only from outside the DSP thread
      • vice versa with UI-owned
      • it would be really cool to model this with rust types but i haven't really started experimenting yet
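One way to sketch the "DSP-owned, read-only from outside" idea in Rust types (hypothetical names; a real design additionally needs atomics or a seqlock for safe cross-thread reads — this only encodes the ownership discipline):

```rust
// A wrapper that gives mutable access only to code holding the owner's
// token, and shared read-only access to everyone else.
struct DspOwned<T> {
    inner: T,
}

// Zero-sized token handed to the DSP thread once, at startup.
struct DspThreadToken;

impl<T> DspOwned<T> {
    fn new(inner: T) -> Self {
        DspOwned { inner }
    }

    // Only the DSP thread (which holds the token) may mutate.
    fn get_mut(&mut self, _token: &DspThreadToken) -> &mut T {
        &mut self.inner
    }

    // Any thread may read.
    fn read(&self) -> &T {
        &self.inner
    }
}
```

A `UiOwned<T>` twin with the roles reversed would cover the other direction.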
      • Ralith: partitioned convolution is pretty damn efficient at this point. I use it in tons of my tracks
      • nearly all of them actually
      • Ralith
        as an outsider I have no idea what "pretty damn efficient" means
      • I guess for my gamey purposes, if I can run a single large filter in realtime without unduly grinding up the CPU that's probably fine
      • partitioned convolution looks interesting, hadn't read about that before
      • Ralith
        requiring multiple threads to get the job done does not scream "lightweight"
      • wrl
        Ralith: "pretty damn efficient" means "you can do a lot of it in realtime and still be okay"
      • Ralith
        how much is a lot? what exactly are the criteria for "okay"?
      • wrl
        what kind of measurements do you want from me?
      • we're talking in hypotheticals here
      • i don't know the big-O notation for partitioned convolution off the top of my head
      • Amaranth
        Can I do it in 5ms or less?
      • wrl
        absolutely
      • Amaranth
        If yes, how many CPU cores on my phone will that take? :)
      • wrl
        i mean in general audio doesn't parallelise super well, so one
      • Amaranth
        Ah, I thought that was the main speedup of this technique based on Ralith's comment
      • Ralith
        it seems to be?
      • just from skimming papers
      • wrl: how much of how many cores of a recent intel chip does convolving a single 44kHz stream take?
      • Aceeri_
        has anyone successfully parallelized mp3 decoding?
      • Ralith
        with, say, a 2-second IR
      • ballpark
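A back-of-the-envelope framing of that question (these are rough figures, not a measurement): naive convolution costs one multiply-add per IR tap per output sample, so a 2 s IR at 44.1 kHz works out to:

```rust
// Rough operation count for naively convolving one 44.1 kHz stream
// with an IR of the given length. Ballpark only.
fn naive_macs_per_second(sample_rate: f64, ir_seconds: f64) -> f64 {
    let ir_len = sample_rate * ir_seconds; // taps in the IR
    sample_rate * ir_len // one MAC per tap per output sample
}

// naive_macs_per_second(44_100.0, 2.0) is roughly 3.9e9 MACs/s --
// a large chunk of a core by itself, which is the cost FFT-based
// partitioned convolution avoids.
```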
      • Aceeri_
        I'd assume not, but who knows
      • wrl
        Aceeri_: the problem with parallelising audio is generally the data dependencies. a lot of audio algos are delay-line based or otherwise have data dependencies on past values (sometimes the immediate past) and so the workload doesn't parallelise well
      • generally parallelisation is about working out which things don't have dependencies. for example, two independent sequencer tracks, or several channels of sample playback
      • Aceeri_
        yeah like mp3's main_data
      • Amaranth
        Does MP3 have any kind of index frames? If so you can probably decompress in chunks
      • wrl
        there's the data dependency at the end of the chain (the mixdown) but other than that
      • Aceeri_
        main_data_something?
      • wrl
        Ralith: let me fire up renoise and check
      • Ralith
        it occurs to me that you could make this less latency-sensitive and more batchy in a game by convolving your individual samples with each IR of interest ahead of time; then it's just mixing
      • could do it during loading, even, to reduce combinatorial explosions
      • Amaranth
        Can always count on a Minecraft mod to have an example of stuff like this, even if it is simple and slow https://github.com/sonicether/Sound-Physics
      • I doubt it implements that algorithm, just the problem in general
      • Ralith
        no citations? bah
      • wrl
        Ralith: i see about a 1% CPU load for one core for a 3.5s long IR
      • plugin is klangfalter
      • Amaranth
        Oh that looks like it's just driving OpenAL?
      • Nevermind then
      • wrl
      • now this *does* say "multithreaded convolution engine"
      • so!
      • Ralith
        oh, damn
      • Ralith
        okay, definitely worth further exploring realtime post-mixing convolution for game audio, then
      • wrl
        this seems to be a very coarse partitioned convolution
      • yeah FFT based convolution is super cheap
      • the reason it's hard to answer specifics is because it's so cheap i don't even have numbers
      • i'll have like
      • three or four convolutions on one channel sometimes
      • small space, some weird sound, bigger space
      • Ralith
        yeah, I just totally lacked reference points aside from totally blowing out my budget with the naive approach
      • thanks!
      • will definitely have fun rabbitholing on this once I'm baseline satisfied with my graphics and netcode
      • wrl
        if you want Real state-of-the-art and you can use GPL code, fons adriaensen's jconvolver is top-notch