Project Weevil

Julian Elischer julian at elischer.org
Tue May 31 12:35:21 PDT 2005


I'd like to throw another thought around here..
{skip to end for comments}

4Front Technologies wrote:

> Mathew Kanner wrote:
>
>> As for realistic plans, I think some of what we have is
>> excellent, and we should try to keep it and modernize the sound
>> infrastructure in stages.  The first stage would be to redo the
>> middle-layer buffering, then the kernel-userland front end, and
>> try to factor out OSS support so we aren't so bound to it.
>
> Hi Matt/FreeBSD Audio developers,
>
> As 4Front Technologies gets ready to announce OSS v4.0 later this year
> (the 10th anniversary of OSS), we'd like to offer our assistance.
>
> Matt, I've sent you a number of emails offering help to get FreeBSD's
> audio migrated to the OSS 4.0 API, which offers 100% backward
> compatibility along with some new audio/mixer features.  The new API
> will really help FreeBSD developers write drivers for new devices like
> Intel HDA and USB/Firewire devices.  Our extensions are much more
> flexible and fall more in line with the audio/mixers found in modern
> USB/Firewire/onboard devices.
>
> In addition, we're working on a totally new sequencer core that should
> be ready by the end of this summer.
>
> Scott Long said:
> > ALSA has been the 'next big thing' for the past 5 years, but really
> > doesn't seem to be living up to the promises.
>
> This is true.  In fact, what we're seeing is that the majority of Linux
> audio app developers are actually using the Jack API
> (http://jackit.sf.net), and now that Jack runs on FreeBSD/OSS, ALSA
> hasn't gained any net advantage over OSS in terms of apps.
>
> ALSA is way too complex at the API layer but still very similar to OSS
> at the driver level (naturally, since they started from OSS!).  We have
> developed an ALSA<->OSS library called SALSA (for Simple ALSA) that
> gives you some level of translation between the few ALSA-only apps and
> OSS-compatible drivers.  It's under the LGPL, and we can talk about
> BSD-licensing it if you find it useful.  See:
> http://www.4front-tech.com/forum/viewtopic.php?t=296
>
> What we have found is that the OSS API is still the most widely
> deployed audio API, since it's easy to understand and easy to write
> applications to.
>
> Another benefit, of course, is that closed-source apps like Skype or
> RealPlayer for Linux, and Linux games like DoomIII, use the OSS API;
> having the OSS API on FreeBSD helps FreeBSD users run such apps via
> Linux emulation.
>
> For more information on the upcoming OSS v4.0 API:
> http://manuals.opensound.com/developer/
>
>
>
> Best regards
> Dev Mazumdar
> -----------------------------------------------------------
> 4Front Technologies
> 4035 Lafayette Place, Unit F, Culver City, CA 90232, USA.
> Tel: (310) 202 8530        URL: www.opensound.com
> Fax: (310) 202 0496         Email: info at opensound.com
> -----------------------------------------------------------


Firstly, thank you Dev for your offer!  (what an apt name!)

I would like to take this opportunity to mention that I'm kicking around
a basic idea for an "in-kernel" video framework: a bit like "jack" or
"netgraph", but for multimedia streams including video.

Multimedia includes audio, and since "silent movies" have not been the
standard for some 80 years or so, such multimedia stream support would
have to have a good interface to the audio world.  Any thoughts anyone
has as to what features are important to keep in mind when implementing
this framework and its audio interface would be well received.
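
To give a feel for the audio side this would have to mesh with, the
existing OSS write path that Dev describes above looks roughly like the
following from an application's point of view (only a sketch: error
checking is mostly omitted, and the device path and format values are
just examples):

#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        int fd, fmt, channels, speed;
        char buf[4096];

        fd = open("/dev/dsp", O_WRONLY);        /* default OSS device */
        if (fd < 0)
                return (1);

        fmt = AFMT_S16_LE;                      /* 16-bit little-endian PCM */
        channels = 2;                           /* stereo */
        speed = 44100;                          /* 44.1 kHz */
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);     /* driver may adjust these */
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &speed);

        memset(buf, 0, sizeof(buf));            /* silence, as a placeholder */
        write(fd, buf, sizeof(buf));            /* push one block of samples */

        close(fd);
        return (0);
}

Whatever audio interface the framework grows shouldn't end up much more
complicated than that for a simple client.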

The basic features I'm looking at at the moment are (a very rough sketch
of what the objects might look like follows this list):
* streams to have multiple channels.
* channels might include subchannels
     (e.g. an audio channel may have 5.1 subchannels)
* Different encodings of the same data may be present at the same
  time in different channels.
     (e.g. an MPEG channel and a raw DV channel,
      or a raw audio channel and an mp3-compressed channel)
* "clients" to attach to the framework as suppliers or sinks.
* Ability to have multiple sinks (e.g. tapping the same video/audio
  stream to multiple places).
* Ability for a client to tap only a subset of available channels
  (e.g. only the compressed channels)
* sources and sinks to be either sync or async.  An async sink
  might be recording the data to disk and doesn't care if it gets it
  in bursts or at twice normal speed.  A sync client wants the data
  delivered at a specific rate, or wants to be able to deliver it at
  a specific rate.  Audio must be keyed to the video by some mechanism
  to be deliverable in the same way.
* The framework will allow streams to be 'mixed' by arbitrary modules
  that abide by some yet-to-be-defined ABI, so that things such as
  "picture-in-picture" or sound dubbing can be achieved.  Plumbing in
  these ways needs to be able to include userland components.
  (this is where the "netgraph" reference comes in.)  (man 4 netgraph)
* The framework will have at least some ability to present a V4L(2)
  interface to some devices and applications for porting reasons.
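
None of this exists yet, but to make the list above a bit more concrete,
here is the rough shape of the objects I have in mind.  All of the names
are hypothetical (there is no such header or API today); think of a
netgraph-ish node graph carrying typed media channels:

#include <sys/types.h>

/* Hypothetical sketch only -- nothing like this is implemented. */

enum mm_chan_type {
        MM_CHAN_VIDEO_RAW,              /* e.g. raw DV frames */
        MM_CHAN_VIDEO_MPEG,             /* same content, compressed */
        MM_CHAN_AUDIO_RAW,              /* PCM, possibly 5.1 subchannels */
        MM_CHAN_AUDIO_MP3,
        MM_CHAN_AUX                     /* e.g. whiteboard transcript data */
};

struct mm_channel {
        enum mm_chan_type type;
        int               nsubchans;    /* e.g. 6 for 5.1 audio */
        /* per-buffer timestamps key audio/aux data to the video */
};

struct mm_stream {
        struct mm_channel *chans;       /* several encodings may coexist */
        int                nchans;
};

/*
 * A client is a source or sink attached to a stream.  It may tap only
 * a subset of the channels, and it may be sync (rate-driven) or async
 * (take the data as fast as it comes, e.g. when recording to disk).
 */
struct mm_client {
        struct mm_stream *stream;
        uint32_t          chan_mask;    /* which channels this client taps */
        int               flags;        /* MM_SOURCE/MM_SINK, MM_SYNC/MM_ASYNC */
        /*
         * A mixing module (picture-in-picture, dubbing) would be a
         * client with both a source and a sink side, much like a
         * netgraph node with two hooks; it could live in the kernel
         * or be a userland process.
         */
};

The sync/async and keying rules would then be a property of how the
framework schedules delivery to each client, not of the data itself.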

Audio, video and auxiliary data (such as whiteboard transcripting) need
to be designed in from the start.

Julian




