BSD video capture emulation question

Sean Welch welchsm at earthlink.net
Fri Jul 11 11:35:04 PDT 2003


Thanks for the definitions.

So let me see if I can get a hypothetical structure lined up
for visualization.  Please correct me as necessary.
(That document looks like a good start, by the way)

At the lowest layer (closest to the hardware) we have the
driver interfacing with the kernel.  This handles all the
messy grunt work of twiddling registers, initializing
hardware, and low-level direction of data flow.

The next layer up (toward userland application level) is
a library layer that interfaces with the other side (so
to speak) of the kernel.  This is a tad more abstracted,
but it knows how data handling differs among USB, FireWire,
tuner, and other devices.  Its function is to present a
standardized interface to the common types of hardware
incorporated in a video capture device, such as separate
video "sources" and "sinks" as well as possibly clocks
and framebuffers.

The top layer is the actual userland application, which
interfaces with the library layer and only knows how to
request data (say, a video feed) from the (main?) framebuffer
for display.  It can send basic commands such as start/stop
the video feed, switch the video source, and possibly request
filtering of the feeds.
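
If it helps to make that concrete, here's roughly how I'd imagine the
library layer looking from the application side.  Every name below is
invented purely for visualization -- none of this is a real API:

/*
 * Hypothetical capture-library interface -- all names invented
 * for visualization only, not any real API.
 */
#include <stddef.h>

typedef struct vc_device vc_device;  /* opaque handle from the library */

/* Open a capture device by logical name; the library worries about
 * whether it's a bt8x8 board, USB webcam, FireWire camera, etc. */
vc_device *vc_open(const char *name);
void       vc_close(vc_device *dev);

/* Enumerate and select video sources (tuner, composite, s-video...). */
int vc_source_count(vc_device *dev);
int vc_source_select(vc_device *dev, int index);

/* Start/stop the feed and pull frames from the (main) framebuffer. */
int vc_start(vc_device *dev);
int vc_stop(vc_device *dev);
int vc_read_frame(vc_device *dev, void *buf, size_t len);

The application at the top would only ever talk to calls like these,
never to the device nodes directly.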

Am I close?

                                              Sean

-------Original Message-------
From: John-Mark Gurney <gurney_j at efn.org>
Sent: 07/11/03 12:28 PM
To: Sean_Welch at alum.wofford.org
Subject: Re: BSD video capture emulation question

> 
> Sean Welch wrote this message on Fri, Jul 11, 2003 at 07:11 -0500:
> I'm glad to see some serious discussion on this.
> 
> That's a lot of acronyms -- anyone care to explain to me what
> IOMMU, PIP, DTRT, and ISTM are?  And what is a video sink?

IO Memory Management Unit.  It basically remaps between the physical
memory space and the PCI memory space.  On sparc64, the physical
address space is larger than the 32-bit PCI address space.  The IOMMU
creates a mapping between the 32-bit PCI address space and the
kernel's much larger address space.  This saves you from having to
copy data you want to DMA down into the first 4 gigs of memory, as
you have to do on i386 PAE systems.
(PAE is an extension to i386 to support a 36-bit physical address space.)
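
To illustrate the difference in code -- note that iommu_map() and
bounce_copy() below are made-up names standing in for the
machine-dependent parts, not real kernel interfaces:

#include <stdint.h>

uint32_t iommu_map(uint64_t host_paddr, uint32_t len);   /* sparc64-ish */
uint32_t bounce_copy(uint64_t host_paddr, uint32_t len); /* i386 PAE-ish */

/* Return a PCI bus address the device can DMA to/from. */
uint32_t
dma_prepare(uint64_t host_paddr, uint32_t len, int has_iommu)
{
	if (has_iommu) {
		/* sparc64: program the IOMMU so a 32bit PCI address
		 * points at the buffer wherever it lives in memory. */
		return (iommu_map(host_paddr, len));
	}
	/* i386 PAE: a buffer above 4 gigs must first be copied
	 * ("bounced") below 4 gigs so a 32bit PCI device can see it. */
	if (host_paddr + len > 0xffffffffULL)
		return (bounce_copy(host_paddr, len));
	return ((uint32_t)host_paddr);
}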

DTRT - Do The Right Thing.

PIP and ISTM I'm unfamiliar with.

> John-Mark, could you clarify your concept of the kernel/
> userland split for a new video API?  More particularly, what
> parts would be handled by the kernel and how do you envision
> the userland interacting with that part?  Are we talking about
> creating a new device node (a la v4l) or a new way of interacting
> with existing device nodes?

For the most part, the kernel would just export many different device
nodes, one for each part of the card.  I've started work on a design
document.  It is VERY rough and incomplete, but I'll put it up.

http://people.FreeBSD.org/~jmg/videobsd.html
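
As an example of what userland might see, one card could export
something like the nodes below.  The names are pure placeholders --
the design doc doesn't fix any of this yet:

#include <fcntl.h>

static void
open_card_parts(void)
{
	int tuner   = open("/dev/video0.tuner",   O_RDWR);   /* tuner chip    */
	int decoder = open("/dev/video0.decoder", O_RDWR);   /* video decoder */
	int capture = open("/dev/video0.capture", O_RDONLY); /* frame data    */

	/* ... each node controls exactly one part of the card ... */
	(void)tuner; (void)decoder; (void)capture;
}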

I'm still debating how much smarts should be put into the kernel.
Part of me wants to do a good portion of it there to prevent the user
from doing something stupid and damaging hardware (like setting two
sources to drive the clocks of the video bus at the same time).  But
the more I think about it, the more I want to do most if not all of it
in userland.  This would make it easier to support USB webcams and
FireWire devices w/o the user of the library even knowing there was
a difference.

So, on a webcam, you have a decoder chip controlling the CCD (or CMOS
sensor) and a controller chip.  You could/would write a userland
driver to interface both of these to the library, and the library
would dynamically load the module per a config file.  Any user
application would then be able to see the webcam, control the various
settings on the decoder chip, and handle the codec output by the
controller.
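
The loading itself could be as simple as dlopen(3) plus one well-known
entry symbol.  A rough sketch -- dlopen()/dlsym() are real, but the
vc_driver struct and the symbol name are invented:

#include <dlfcn.h>
#include <stdio.h>

struct vc_driver {
	const char *name;
	int (*attach)(const char *devpath);  /* talk to decoder/controller */
	int (*read_frame)(void *buf, int len);
};

static struct vc_driver *
load_driver(const char *sofile)
{
	void *h = dlopen(sofile, RTLD_NOW);

	if (h == NULL) {
		fprintf(stderr, "dlopen: %s\n", dlerror());
		return (NULL);
	}
	/* Each driver module exports one well-known symbol. */
	return ((struct vc_driver *)dlsym(h, "vc_driver_entry"));
}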

I'm not sure if I'll go as far as Windows does with being able to
stick n filters between the device and your output.  Adding this
support shouldn't be hard since it's a library, and we can add
additional functions at a later date.
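
If/when we do add it, the hook could be a simple chain walked by the
library between capture and delivery -- again, invented names, just
to show the shape:

#include <stddef.h>

/* Each filter transforms a frame in place; the library runs them
 * in order. */
struct vc_filter {
	int (*process)(void *frame, int len);
	struct vc_filter *next;           /* chain of n filters */
};

static int
run_filters(struct vc_filter *chain, void *frame, int len)
{
	for (; chain != NULL; chain = chain->next)
		if (chain->process(frame, len) != 0)
			return (-1);      /* a filter may reject a frame */
	return (0);
}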

> -------Original Message-------
> From: John-Mark Gurney <gurney_j at efn.org>
> Sent: 07/11/03 01:50 AM
> To: Steve O'Hara-Smith <steve at sohara.org>
> Subject: Re: BSD video capture emulation question
> 
> > 
> > Steve O'Hara-Smith wrote this message on Fri, Jul 11, 2003 at 07:09 +0200:
> > On Thu, 10 Jul 2003 13:40:47 -0700
> > John-Mark Gurney <gurney_j at efn.org> wrote:
> > 
> > JMG> Yes, I followed the bktr interface, but the bktr interface needs to
> > JMG> disappear ASAP!  The bktr interface is very bad as we make FreeBSD
> > JMG> multiplatform.  It lets the user supply the physical address when
> > JMG> doing video overlay to the video card.  This should be handled by the
> > JMG> driver, not the userland app.
> > 
> >       	Hmm, that implies that the driver must know how to find the
> > overlay area that the userland app wants it to use - irrespective of the
> > video out driver in use.
> 
> Actually, this is very easy.  I was first using svgalib to write my
> overlay program which gave me a userland buffer but NOT a physical
> address.. it was easy to pass this buffer to the driver and have it
> do the right thing.
> 
> This is more complex on sparc64 machines that don't do a direct
> physical to PCI mapping but have an IOMMU, since you need to know
> more about the machine.
> 
> >       	Conclusion: the overlay areas have to be entities of some kind
> > in the multimedia infrastructure.
> 
> To a certain extent yes...  but there is work underway to properly
> handle device to device dma..
> 
> > JMG> >       	AFAICS what's needed is someone with some insight into what makes
> > JMG> > a good video API if FreeBSD is ever going to get one. The innards
> > JMG> > of things like ffmpeg and transcode are probably worth looking at
> > JMG> > as models.
> > JMG> 
> > JMG> Hmmm.  I'll have to look at that.
> > JMG> 
> > JMG> But there is more than just codec handling.  One of the features that
> > 
> >       	Oh sure - but seamless plumbing of codecs, sources and sinks is
> > a very desirable feature IMHO. This is the bit that these apps seem to
> > manage.
> 
> Then let them manage that.. :)
> 
> > JMG> the Zoran card supports is the ability to have two sources (such as
> > JMG> external video and MJPEG playback) one in a window of the other.  But
> > JMG> you need to only have one video clock running the output.  This
> > JMG> should be handled by the video api so the drivers just write the raw
> > JMG> interface and the api does the manipulation of the driver.
> > 
> >       	Yep seamless plumbing - so somehow the PIP has to present as a video
> > sink and DTRT when you plumb the other video source into it - even if that
> > turns out to be the output of mplayer playing a VCD and not the expected
> > other bit from the card. If it can't do it the plumbing must fail.
> 
> Ummm..  you are talking about something completely different.  I'm
> talking about the hardware aspect of it, not the software aspect..
> 
> >       	ISTM the plumbing actions either require a smart plumber or a
> > dialogue between the interfaces being plumbed. The latter seems to fit
> > the UNIX device model better.
> > 
> >       	Are we thinking in similar terms?
> 
> Nope, you are thinking of pure software, and I have been talking about
> pure hardware wrt the windowing scheme mentioned above.

-- 
  John-Mark Gurney   	   	   	   	Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."