Project Weevil

Jacob Meuser jakemsr at
Sat Jun 4 02:06:58 PDT 2005

On Wed, Jun 01, 2005 at 12:23:33PM -0700, Julian Elischer wrote:
> Steve Roome wrote:
> >Julian Elischer wrote:
> > 
> >
> >>* streams to have multiple channels.
> >>   
> >>
> >
> >I've been thinking about this sort of thing for a while, in terms of
> >wanting something like audio netgraph, (audiograph perhaps?)...
> >
> >For example...
> >
> >record /dev/audio_mic_left | stereo-guitar-amp-sim | play 
> > 
> >
> gstreamer allows something like this. It's quite cool.

it's a nice concept.  I haven't tried it out yet, just looked through
the docs.  it does look pretty cool.  but, it appears, gstreamer is
good only for immediate use, conversion, playback, etc.  it adds
nothing for storage, AFAICS.  sure, you could probably write a small
program to capture raw data from /dev/dsp and /dev/bktr and save it
to whatever format.  and you could write another program to later
play the file back.  and these programs would have to link with
gstreamer.  but wouldn't it be nice to have something BSD licensed
for this?

the mjpegtools suite is (mostly) a collection of small programs,
each doing (mostly) one thing.  the format parameters are passed
through headers in YUV4MPEG(5) streams; essentially, a few bits of
information are added every so often in the stream.  this is very
useful for preserving information about the stream, should accurate
processing/playback be done at a later time.  however, the YUV4MPEG
format is designed for, well, YUV and MPEG video only, so it is not
really flexible.  it's also not BSD licensed.
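to give an idea of how lightweight those in-band headers are: the
YUV4MPEG(5) stream header is a single text line of space-separated
tags.  a minimal parser for the common W/H/F tags might look like
the sketch below (deliberately incomplete; real streams also carry
I, A and X tags):

```c
/* Minimal parser for a YUV4MPEG2 stream header line, e.g.
 * "YUV4MPEG2 W720 H480 F30000:1001 Ip A0:0\n".
 * Handles only the W/H/F tags; unknown tags are skipped.
 * Returns 0 on success, -1 on a malformed header. */
#include <stdio.h>
#include <string.h>

struct y4m_info {
	int width, height;
	int fps_num, fps_den;
};

static int
y4m_parse_header(const char *line, struct y4m_info *info)
{
	const char *p;

	if (strncmp(line, "YUV4MPEG2", 9) != 0)
		return -1;
	info->width = info->height = 0;
	info->fps_num = info->fps_den = 0;
	for (p = line + 9; *p == ' '; ) {
		p++;			/* step over the separating space */
		switch (*p) {
		case 'W':
			if (sscanf(p + 1, "%d", &info->width) != 1)
				return -1;
			break;
		case 'H':
			if (sscanf(p + 1, "%d", &info->height) != 1)
				return -1;
			break;
		case 'F':
			if (sscanf(p + 1, "%d:%d",
			    &info->fps_num, &info->fps_den) != 2)
				return -1;
			break;
		default:
			break;		/* ignore tags we don't handle */
		}
		while (*p != '\0' && *p != ' ' && *p != '\n')
			p++;		/* skip to the end of this tag */
	}
	if (info->width <= 0 || info->height <= 0 || info->fps_den <= 0)
		return -1;
	return 0;
}
```

each frame in the stream is then just a "FRAME\n" marker followed by
raw planar data, so a pipeline stage can re-derive everything it needs
from the stream itself.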

> >So, amp-sim would just be a program that takes in a mono audio stream
> >and outputs, with some (hopefully very small) delay a stereo,
> >chorused, grungified or whatever version of the sound.
> >
> >So, assuming one could write a program: "pitch-up" that takes a
> >"stream" as stdin (i.e. multiple multiplexed channels in one
> >datastream) and I can apply that filter to all the audio channels
> >contained in the stream and then output it through some program "play"
> >that will sensibly downmix to whatever output format I've got set
> >as default, something like:
> >
> >record /dev/audio_mic_stereo | pitch-up 5semitones | play
> >
> >But the same program could do the same (in the same invocation) for
> >n-channel sound if the right framework was in place. I'd like that,
> >but it might be way off where sound is headed.

I don't know about "same invocation".  wouldn't you want to be able to
tell it exactly what you want?  what if you only want n - 1 of the n
original channels?  also, AFAIK, demuxing channels must be done in
userspace; through OSS it is only possible to get raw (headerless)
mono or raw (headerless) interleaved stereo.
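the userspace demuxing step itself is just index arithmetic: split
the interleaved frames into per-channel buffers, run the effect, and
recombine only the channels you want.  a sketch in C (assuming 16-bit
samples; the function names are mine, not from any existing library):

```c
/* Split an interleaved buffer of 16-bit samples into per-channel
 * buffers -- what a userland filter has to do before applying a
 * per-channel effect.  nframes is the number of sample frames;
 * each chans[c] must hold nframes samples. */
#include <stddef.h>
#include <string.h>

static void
deinterleave16(const short *in, short **chans, int nchans, size_t nframes)
{
	size_t f;
	int c;

	for (f = 0; f < nframes; f++)
		for (c = 0; c < nchans; c++)
			chans[c][f] = in[f * nchans + c];
}

/* Inverse operation: interleave per-channel buffers back into one
 * stream.  Dropping channels (n - 1 of n) is just passing a smaller
 * nchans and chans set here. */
static void
interleave16(short **chans, short *out, int nchans, size_t nframes)
{
	size_t f;
	int c;

	for (f = 0; f < nframes; f++)
		for (c = 0; c < nchans; c++)
			out[f * nchans + c] = chans[c][f];
}
```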

> > 
> >
> >>* channels might include subchannels
> >>   (e.g an audio channel may have 5.1 subchannels)
> >>   
> >>
> >
> >The nomenclature is confusing, I think an audio channel is one mono
> >stream of audio, and anything else is a multiplexed "bundle" (?) of
> >channels, I don't understand what you mean by subchannels, but I
> >might be alone in my confusion on this one.

think of a DVD ...

------  video  (a single mpeg stream)

------  audio  (a single AC3 stream, that is 5.1 surround)

or a more complex DVD

------  video  (a single mpeg stream)

------  audio  (a single AC3 stream, that is 5.1 surround, english version)

------  audio  (a single MP2 stream, that is stereo, english version)

------  audio  (a single MP2 stream, that is stereo, french version)

------  subtitles  (a single overlay video stream, english version)

------  subtitles  (a single overlay video stream, french version)

now consider an MPEG Program Stream or MPEG Transport Stream, which
multiplexes exactly this kind of bundle into one byte stream.
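the core multiplexing idea is simple enough to sketch: tag each chunk
with a stream id and a length, so several elementary streams can share
one byte stream and a demuxer can hand each chunk to the right decoder.
the field layout below is made up purely for illustration; real MPEG
PS/TS adds timestamps, fixed packet sizes, continuity counters, and
much more:

```c
/* Toy mux format: [1-byte stream id][2-byte big-endian length][payload].
 * Not any real container format -- just the tagging idea. */
#include <stddef.h>
#include <string.h>

/* Append one chunk; returns bytes written, or 0 if it won't fit. */
static size_t
mux_chunk(unsigned char *out, size_t outlen,
    unsigned char stream_id, const void *data, size_t len)
{
	if (len > 0xffff || outlen < len + 3)
		return 0;
	out[0] = stream_id;
	out[1] = (len >> 8) & 0xff;
	out[2] = len & 0xff;
	memcpy(out + 3, data, len);
	return len + 3;
}

/* Read back the chunk at 'in'; fills *stream_id and *payload and
 * returns the payload length, or 0 if the buffer is truncated. */
static size_t
demux_chunk(const unsigned char *in, size_t inlen,
    unsigned char *stream_id, const unsigned char **payload)
{
	size_t len;

	if (inlen < 3)
		return 0;
	len = ((size_t)in[1] << 8) | in[2];
	if (inlen < len + 3)
		return 0;
	*stream_id = in[0];
	*payload = in + 3;
	return len;
}
```

a DVD-style bundle is then just video, audio and subtitle chunks
interleaved in one stream, each carrying its own id.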

> >
> >>* sources and sinks to be either sync or async.  An async sink (sic)
> >>might be recording the data to disk and doesn't care if it gets it
> >>in bursts or if it gets it at twice normal speed.  A sync client
> >>wants the data delivered at a specific rate, or wants to be able to
> >>deliver it at a specific rate.  Audio must be keyed to the video by
> >>some mechanism to be deliverable in the same way.
> >>   
> >>
> >
> >A reliable way of multiplexing audio/video signals together with
> >timing pulses included sounds like the first step, but only in that
> >it would be really handy for the user to have that provided.

there is the complexity of MPEG, but of course it was designed to
handle variable bit rates.  as an industry "standard", it would
presumably qualify as "reliable".  it is _definitely_ patent
encumbered, though.  while MPEG LA is unlikely to care about free
projects implementing MPEG "IP", commercial users of _any_
implementation of MPEG are still required to pay royalties.  are
there even any BSD licensed MPEG (de)muxing implementations?

there is the simplicity of bsdav(5), but it was designed for raw
data, and hence constant bit rates, although it could theoretically
handle VBR as well.  it is an implementation of the simplest usable
muxing format I could imagine, and I can only say that it fulfills
my needs.  I would be interested in feedback as well ;)
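one reason raw/CBR data keeps a format that simple: a block's
presentation time can be recomputed from its byte offset alone, so no
per-block timestamps are needed (VBR streams, by contrast, must carry
explicit timestamps).  a generic illustration for PCM audio, not the
actual bsdav layout:

```c
/* For constant-bit-rate PCM, byte offset maps directly to time:
 * time_ms = (offset / frame_size) * 1000 / sample_rate.
 * This is the property that lets a raw-data container omit
 * per-block timestamps entirely. */
static long long
pcm_offset_to_msec(long long byte_offset, int rate, int channels,
    int bytes_per_sample)
{
	long long frame_size = (long long)channels * bytes_per_sample;

	return byte_offset / frame_size * 1000 / rate;
}
```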

<jakemsr at>

More information about the freebsd-multimedia mailing list