
Using OpenAL Soft for mixing #1112

Open
minghia opened this issue Feb 16, 2025 · 1 comment

Comments

minghia commented Feb 16, 2025

I want to create an application that takes in audio packets in one format from multiple sources, representing multiple individual streams, where each incoming stream has an identifier that identifies it. What I want to be able to do is take in the multiple streams, mix them into my internal network's format, and redistribute the incoming audio as coherently identified streams. Likewise, in the same program I need to take in my internal format and send it out from this program. I could have multiple internal nodes sending audio to this program, and each audio packet has an identifier which makes each internal audio stream a coherent outgoing stream. In my application I need to do some resampling and format conversion.

I was hoping to use OpenAL, but when I look at the alstream.cpp example, it just takes consecutive audio files and plays them sequentially. I tried to modify the alstream program, but I'm having trouble getting the buffers to play properly. My processed count is either 4 (the number of buffers created) or 0, and sometimes I hear audio, but it doesn't sound like the source, which is an 800 Hz sine wave. I am currently creating one device and one context. Do I need to create a context for each incoming stream? Ideally I just want to use this program to mix the audio from the remote streams and send it to my internal nodes, and vice versa. Is this possible using OpenAL?

Tony

kcat (Owner) commented Feb 17, 2025

You don't need to create more than one context. With alstream as an example, you would create two separate StreamPlayer objects, initialize each with its respective audio input, then play and continually update both objects each time through your loop.
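
A minimal sketch of that structure, with one device and context shared by two independent stream objects. This is not the actual alstream.cpp code: the StreamSource struct, the buffer count, the sample rate, and the sine tones standing in for the real network input are all illustrative assumptions; only the OpenAL calls are the real API.

```cpp
#include <AL/al.h>
#include <AL/alc.h>
#include <array>
#include <chrono>
#include <cmath>
#include <cstdint>
#include <thread>
#include <vector>

constexpr int NumBuffers = 4;        // buffers queued per stream (assumption)
constexpr int SampleRate = 48000;    // assumed sample rate
constexpr int ChunkFrames = 2048;    // frames per buffer (assumption)
constexpr double Pi = 3.141592653589793;

// Hypothetical stand-in for alstream.cpp's StreamPlayer: one source plus its
// own ring of buffers. A sine tone stands in for each stream's network input.
struct StreamSource {
    ALuint source{};
    std::array<ALuint, NumBuffers> buffers{};
    double phase{}, freq{};

    void init(double toneHz)
    {
        freq = toneHz;
        alGenSources(1, &source);
        alGenBuffers(NumBuffers, buffers.data());
        for(ALuint buf : buffers)    // pre-fill and queue every buffer
            fill(buf);
        alSourceQueueBuffers(source, NumBuffers, buffers.data());
        alSourcePlay(source);
    }

    void fill(ALuint buf)
    {
        std::vector<int16_t> samples(ChunkFrames);
        for(auto &s : samples)
        {
            s = static_cast<int16_t>(32000.0 * std::sin(phase));
            phase += 2.0*Pi*freq/SampleRate;
        }
        alBufferData(buf, AL_FORMAT_MONO16, samples.data(),
            static_cast<ALsizei>(samples.size()*sizeof(int16_t)), SampleRate);
    }

    void update()
    {
        ALint processed{};
        alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
        while(processed-- > 0)
        {
            ALuint buf{};
            alSourceUnqueueBuffers(source, 1, &buf);
            fill(buf);               // refill from this stream's own input
            alSourceQueueBuffers(source, 1, &buf);
        }
        ALint state{};
        alGetSourcei(source, AL_SOURCE_STATE, &state);
        if(state != AL_PLAYING)      // restart after an underrun
            alSourcePlay(source);
    }
};

int main()
{
    ALCdevice *device = alcOpenDevice(nullptr);          // default output device
    ALCcontext *context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);                      // one context is enough

    StreamSource a, b;
    a.init(800.0);   // two independent streams; OpenAL mixes them for you
    b.init(440.0);

    for(;;)          // service both streams on every pass
    {
        a.update();
        b.update();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}
```

Each stream gets its own source, and OpenAL handles mixing all playing sources to the single output device, so no extra context is needed per stream.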

Importantly, make sure you have enough audio being buffered in each stream. If the size of each incoming audio packet is small, and you have one buffer per packet, you may need to either increase the number of buffers, or concatenate multiple packets of audio into a larger size for each buffer. You may also need to hold your own queue of audio packets, since there's no guarantee an async audio source will provide packets right when they can be buffered into OpenAL. They may come a little before or after OpenAL would like them (don't try to unqueue/buffer/requeue samples if you don't have enough input audio yet, and hold on to some extra audio packets in case the input becomes available when OpenAL hasn't processed a buffer for more output yet).
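
A sketch of that kind of per-stream holding queue, with hypothetical names (PacketQueue, ChunkSamples) and an assumed mono 16-bit format and sample rate; only the OpenAL calls themselves are the real API.

```cpp
#include <AL/al.h>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

constexpr int SampleRate = 48000;           // assumed stream rate
constexpr std::size_t ChunkSamples = 4096;  // samples per OpenAL buffer (assumption)

// Hypothetical per-stream holding queue: network packets land here as they
// arrive, and OpenAL buffers are only refilled once a full chunk is available.
struct PacketQueue {
    std::deque<int16_t> pending;

    // Called from the receive path whenever a packet for this stream arrives.
    void push(const int16_t *data, std::size_t count)
    {
        pending.insert(pending.end(), data, data+count);
    }

    // Called each update pass: refill processed buffers only while a full
    // chunk's worth of audio has accumulated; otherwise keep the packets for
    // the next pass rather than feeding OpenAL a partial buffer.
    void service(ALuint source)
    {
        ALint processed{};
        alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
        while(processed > 0 && pending.size() >= ChunkSamples)
        {
            // Concatenate queued packets into one larger chunk for this buffer.
            std::vector<int16_t> chunk(pending.begin(),
                                       pending.begin() + ChunkSamples);
            pending.erase(pending.begin(), pending.begin() + ChunkSamples);

            ALuint buf{};
            alSourceUnqueueBuffers(source, 1, &buf);
            alBufferData(buf, AL_FORMAT_MONO16, chunk.data(),
                static_cast<ALsizei>(chunk.size()*sizeof(int16_t)), SampleRate);
            alSourceQueueBuffers(source, 1, &buf);
            --processed;
        }
    }
};
```

push() would be driven from the receive path as packets arrive, and service() from the same loop that updates each stream; if a full chunk isn't available yet, the processed buffer is simply left alone until the next pass, which avoids both underfilled buffers and dropped packets.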
