Project Weevil

Julian Elischer julian at elischer.org
Wed Jun 1 12:23:34 PDT 2005



Steve Roome wrote:

>Julian Elischer wrote:
>  
>
>>* streams to have multiple channels.
>>    
>>
>
>I've been thinking about this sort of thing for a while, in terms of
>wanting something like audio netgraph, (audiograph perhaps?)...
>
>For example...
>
>record /dev/audio_mic_left | stereo-guitar-amp-sim | play 
>  
>

gstreamer allows something like this. It's quite cool.
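A filter stage in that pipe style can be sketched in a few lines. The sketch below is only illustrative: it assumes raw signed 16-bit little-endian mono PCM arriving on stdin, and writes an interleaved stereo version with independent per-channel gain as a toy "amp-sim"; the `record` and `play` tools in the usage comment are hypothetical.

```python
import struct
import sys

def mono_to_stereo(data, gain_l=1.0, gain_r=0.6):
    """Duplicate each 16-bit mono sample into a stereo pair,
    applying an independent gain per channel (a toy 'amp-sim')."""
    n = len(data) // 2
    samples = struct.unpack('<%dh' % n, data[:n * 2])
    out = []
    for s in samples:
        # Clamp to the 16-bit range after applying gain.
        out.append(max(-32768, min(32767, int(s * gain_l))))
        out.append(max(-32768, min(32767, int(s * gain_r))))
    return struct.pack('<%dh' % (2 * n), *out)

if __name__ == '__main__':
    # Hypothetical pipe usage:
    #   record /dev/audio_mic_left | python amp_sim.py | play
    sys.stdout.buffer.write(mono_to_stereo(sys.stdin.buffer.read()))
```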


>So, amp-sim would just be a program that takes in a mono audio stream
>and outputs, with some (hopefully very small) delay, a stereo,
>chorused, grungified, or whatever version of the sound.
>
>So, assuming one could write a program "pitch-up" that takes a
>"stream" on stdin (i.e. multiple multiplexed channels in one
>datastream), I could apply that filter to all the audio channels
>contained in the stream and then output it through some program
>"play" that will sensibly downmix to whatever output format I've
>got set as default; something like:
>
>record /dev/audio_mic_stereo | pitch-up 5semitones | play
>
>But the same program could do the same (in the same invocation) for
>n-channel sound if the right framework was in place. I'd like that,
>but it might be way off where sound is headed.
>
>  
>
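That per-channel idea can be sketched independently of any framework. Assuming interleaved 16-bit PCM and an arbitrary per-channel function (the actual "pitch-up" DSP is well beyond a sketch; the function name and layout here are assumptions, not any project's API), the demultiplex/filter/remultiplex step is just:

```python
import struct

def map_channels(data, nchannels, func):
    """Apply func to each channel of an interleaved 16-bit PCM stream.

    func receives one channel as a list of samples and must return a
    list of the same length.  Works for any channel count, which is
    the point: the same invocation handles n-channel sound.
    """
    n = len(data) // 2
    samples = list(struct.unpack('<%dh' % n, data[:n * 2]))
    for ch in range(nchannels):
        channel = samples[ch::nchannels]        # deinterleave one channel
        samples[ch::nchannels] = func(channel)  # filter and reinterleave
    return struct.pack('<%dh' % n, *samples)
```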
>>* channels might include subchannels
>>    (e.g. an audio channel may have 5.1 subchannels)
>>    
>>
>
>The nomenclature is confusing. I think an audio channel is one mono
>stream of audio, and anything else is a multiplexed "bundle" (?) of
>channels. I don't understand what you mean by subchannels, but I
>might be alone in my confusion on this one.
>
>  
>
>>* sources and sinks to be either sync or async. An async sink (sic)
>> might be recording the data to disk and doesn't care if it gets it
>> in bursts or at twice normal speed. A sync client wants the data
>> delivered at a specific rate, or wants to be able to deliver it at
>> a specific rate. Audio must be keyed to the video by some mechanism
>> to be deliverable in the same way.
>>    
>>
>
>A reliable way of multiplexing audio/video signals together with
>timing pulses included sounds like the first step, if only because
>it would be really handy for the user to have that provided.
>
>I've no idea how much extra work this would require, or how far this
>is from what you envisage, but I've wanted it for a while now!
>  
>

Well, there are two projects I know of that do this in user space:
gstreamer and MAS (you may need to hunt around a bit on Google to find
MAS).

MAS has the aim of doing all multimedia, but the last time I looked
they had really only done audio.

"Jack" is a good tool for audio and allows a lot of flexibility. It
does, however (last I looked), require that you link your app with the
jack library to take part. There is a Linux audio "driver" that
simulates an audio device but diverts the output to jack. Something
like that might be cool for BSD as well.

It may be that it is not worth doing more than gstreamer, MAS and
jack, but I'm trying to get my head around the problem space.

gstreamer and jack are in ports, but MAS doesn't seem to have any BSD
port yet.
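The sync/async sink distinction above can be sketched as two consumption policies on the same buffer queue. This is a toy model to make the distinction concrete, not the API of gstreamer, MAS, or jack; the class and method names are invented for illustration.

```python
import collections

class Sink:
    """Toy model: a sync sink consumes one buffer per clock tick
    (fixed-rate delivery); an async sink drains everything queued
    (bursts and over-speed delivery, e.g. recording to disk)."""

    def __init__(self, sync):
        self.sync = sync
        self.queue = collections.deque()
        self.received = []

    def offer(self, buf):
        # The source may push buffers at any rate.
        self.queue.append(buf)

    def tick(self):
        # Called once per clock period by the framework.
        if self.sync:
            if self.queue:
                self.received.append(self.queue.popleft())
        else:
            while self.queue:
                self.received.append(self.queue.popleft())
```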


>        Steve Roome        
>_______________________________________________
>freebsd-multimedia at freebsd.org mailing list
>http://lists.freebsd.org/mailman/listinfo/freebsd-multimedia
>To unsubscribe, send any mail to "freebsd-multimedia-unsubscribe at freebsd.org"
>  
>

