Tech Insider					     Technology and Trends


		   Video for Linux Mailing List Archives

List:       linux-video
Subject:    [video4linux] API spec
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-23 22:52:53

Hi,

How final is the spec? What are the chances that new proposals to
change the existing ioctls will be accepted?

My opinion is that the API shouldn't really contain things like
VIDIOCSFBUF. As far as a capture driver is concerned, it doesn't care
whether the video data goes into the frame buffer or into main memory to
be saved off to disk. All a capture driver cares about is pixel format,
image dimensions, buffer stride and starting address. Clip-capable
drivers may accept clip lists too. The same idea applies to overlay and
chroma keying. A capture driver shouldn't touch display-related devices.

In this light there is no difference between starting a VBI capture and
a video capture, or between capturing to the gfx frame buffer and to
main memory (one exception may be an indication of which field should be
captured). The ioctl is the same in all cases, and the structure
describing the image is the same too.

From reading the API published on
http://roadrunner.swansea.linux.org.uk/v4lapi.shtml

it's not clear how streaming capture is supported. In Win land it was
the app that was responsible for the stream of buffers passed to the
driver.

Comments?

Alex
------------
To unsubscribe from this list send mail to majordomo@phunk.org with the
line "unsubscribe video4linux" without the quotes in the body of the
message.

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-24 0:03:14

Hello, Alex. I am much like you. I just arrived from the Windows world.
I used to work for Winnov, where I did VfW drivers for their Videum line
of products for a number of years. Recently I decided it would be a fun
project to write a Linux driver for the Videum. I started coding the
driver about 3 1/2 weeks ago. This is my first experience with Linux.

Alex Bakaev wrote:
> How final is the spec ? What are the chances that new propositions
> will be accepted to change the existing ioctls ?

It's not final. In fact Alan Cox is soliciting proposals. I see several
things I would add or do differently, so I intend to write a new spec
proposal over the next few days (I'm not qualified to comment on
everything in the spec, but I have ideas about some capture-related
things). People have been posting suggestions over the past few days.
Please add your opinions.

> My opinion is that API set shouldn't really contain things like
> VIDIOCSFBUF. As far as capture driver is concerned, it doesn't care if
> the video data goes into frame buffer or main memory to be saved off to
> the disk. All capture driver cares about is pixel format, image
> dimensions, buffer stride and starting address. Clip-capable drivers may
> accept clip lists too. The same idea applies to the overlay and chroma
> keying. Capture driver shouldn't touch display-related devices.

A video4linux driver is a 'char' type device. From an app it's accessed
much like a file or other character-stream device. Capturing is
accomplished through the read() function call:
	char *buffer = malloc(size);
	int f = open("/dev/video0", O_RDONLY);
	read(f, buffer, size);
This captures a frame into 'buffer'. Read() is a lot like the DVM_FRAME
message in VfW.
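Fleshed out a bit (error handling added; the helper name grab_frame is
just for illustration, not part of any API), that looks like:

```c
#include <fcntl.h>
#include <unistd.h>

/* Grab one frame from a capture device into a caller-supplied buffer
 * via plain read().  Returns the number of bytes read, or -1 if the
 * device can't be opened.  (grab_frame is a made-up name.) */
long grab_frame(const char *dev, void *buf, size_t size)
{
    int fd = open(dev, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buf, size);
    close(fd);
    return (long)n;
}
```

With this interface the app owns the buffer, and any copying happens
inside the read() call.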

[My driver only does capture through read(), so I'm reaching beyond my
experience here.] The SFBUF ioctl is intended for the entirely different
case where the app just wants to put the incoming video on the screen.
In this case the app does not and cannot retrieve the video data.

> In this light there is no difference in staring a VBI capture or video
> capture, to gfx frame buffer or to main memory. ( one exception may be
> indication what field should be captured ). Ioctl is the same in all
> cases, the structure describing the image is the same too.

My understanding is that there is no equivalent to PageLock. It's not
possible to DMA into a user-allocated buffer. All buffers that will be
DMA targets must be allocated internally by the kernel driver. Not all
cards use DMA, but if we are to handle all cases in a consistent manner
we are constrained by this.

> >From reading the API published on
> it's not clear how the streaming capture is supported. In Win land it
> was the app that was responsible for the stream of buffers passed to
> the driver.

I just learned there are a couple (experimental?) ioctls not listed in
the spec for something *vaguely* like VfW streaming. The driver
allocates two buffers. The app tells the driver via ioctl() to capture
to one or other of the buffers. The app then calls another ioctl() that
blocks until the buffer is full. There is something called mmap() that
lets the user read the data somehow. I don't know how that works
exactly. [-> Somebody out there fill us in?] Of course, this has
problems, like, for starters, two buffers isn't enough.

Bill.
------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-24 1:09:51

Bill, thanks for the response. My comments are below.

Bill Dirks wrote:
> 
> Hello, Alex. I am much like you. I just arrived from the Windows world.
> I used to work for Winnov, where I did VfW drivers for their Videum line
> of products for a number of years. Recently I decided it would be a fun
> project to write a Linux driver for the Videum. I started coding the
> driver about 3 1/2 weeks ago. This is my first experience with Linux.
> 
That's 3 1/2 weeks more than I have :)

> 
> A video4linux driver is a 'char' type device. From an app it's accessed
> much like a file or other character stream type device. Capturing is
> accomplished through the read() function call:
>         buffer = malloc(size);
>         int f = open("/dev/video0", O_RDONLY);
>         read(f, buffer, size);
> Captures a frame into 'buffer'. Read() is a lot like the DVM_FRAME
> message in VfW.
> 

So is read() a blocking call? For true streaming something non-blocking
is needed.

> [My driver only does capture throught read(), so I'm reaching beyond my
> experience here.] The SFBUF is intended for the entirely different case
> where the app just wants to put the incoming video on the screen. In
> this case the app does not and cannot retrieve the video data.
> 

I see what you are saying. There probably needs to be a way to have a
'free-running' mode. In my Win95 driver I did have something like this,
but had to unify operations for the WDM driver. I'd say just add a flag
to some struct instead of a separate ioctl.

> > In this light there is no difference in staring a VBI capture or video
> > capture, to gfx frame buffer or to main memory. ( one exception may be
> > indication what field should be captured ). Ioctl is the same in all
> > cases, the structure describing the image is the same too.
> 
> My understanding is that there is no equivalent to PageLock. 

What?!? That's a big bummer. There has to be a way to map user memory
into the global context, page-lock it and just DMA video data in. That's
probably why I saw people remarking that they cannot get 640x480 at
30 fps, which I had routinely.

> It's not
> possible to DMA into a user-allocated buffer. All buffers that will be
> DMA targets must be allocated internally by the kernel driver. Not all
> cards use DMA, but if we are to handle all cases in a consistent manner
> we are constrained by this.
> 

Being able to DMA into some memory doesn't mean non-DMA cards cannot use
memcpy().

> > >From reading the API published on
> > it's not clear how the streaming capture is supported. In Win land it
> > was the app that was responsible for the stream of buffers passed to
> > the driver.
> 
> I just learned there are a couple (experimental?) ioctls not listed in
> the spec for something *vaguely* like VfW streaming. The driver
> allocates two buffers. The app tells the driver via ioctl() to capture
> to one or other of the buffers. The app then calls another ioctl() that
> blocks until the buffer is full. There is something called mmap() that
> lets the user read the data somehow. I don't know how that works
> exactly. [-> Somebody out there fill us in?] Of course, this has
> problems, like, for starters, two buffers isn't enough.
> 

This is too complicated. Ability to DMA into user buffers and
non-blocking read would do wonders... Of course, I have no idea what it
would take to be able to map user memory in...

Alex
------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-24 2:52:48

Alex Bakaev wrote:
> Bill Dirks wrote:
> >  I started coding the driver about 3 1/2 weeks ago. This is my
> > first experience with Linux.
> That's 3 1/2 weeks more than I have :)

Hey, I'm no longer the freshest one on this list! :)

> >         buffer = malloc(size);
> >         int f = open("/dev/video0", O_RDONLY);
> >         read(f, buffer, size);
> > Captures a frame into 'buffer'. Read() is a lot like the DVM_FRAME
> > message in VfW.
> So is read a blocking call ? For true streaming something non-blocking
> is needed.

It works both ways. The above will block.
	open("/dev/video0", O_RDONLY | O_NONBLOCK);
or something to that effect gives non-blocking mode. Read() will then
fail with EAGAIN if there is no data ready yet.
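The pattern is generic POSIX, so here's a sketch using a pipe as a
stand-in for /dev/video0 (the helper names are made up):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Switch a descriptor to non-blocking mode. */
int make_nonblocking(int fd)
{
    return fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
}

/* Try to read; returns the byte count, 0 if no data is ready yet
 * (the EAGAIN case), or -1 on a real error. */
int try_read(int fd, void *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n < 0 && errno == EAGAIN)
        return 0;
    return (int)n;
}
```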

Actually, read() is quite effective. The only potential inefficiency is
that a memcpy() is required for DMA devices. An app with a thread
running blocking read()s in a loop can implement something very much
like VfW streaming. All that's missing is frame rate control: you'd
rather not memcpy() any more frames than you need.

An ioctl() that blocks until a frame is ready to be read() would be
enough, but it would be more convenient for the app developer if the
driver handled it transparently. Read() would only return frames at the
requested rate. E.g. if the app wants 5 fps, then the loop:
	for (;;) {
		read(f, buf, len);
	}
would iterate five times per second. (Or a non-blocking read() would
fail with EAGAIN for 1/5th of a second between frames.)
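The rate control itself is just bookkeeping; something like this (a pure
sketch with invented names) on either the driver or app side would do:

```c
/* Deliver a frame only if at least period_ms has passed since the
 * last delivered one; otherwise skip it.  (Invented names.) */
typedef struct {
    long last_ms;    /* time of last delivered frame */
    long period_ms;  /* e.g. 200 ms for 5 fps */
} rate_gate;

int rate_gate_pass(rate_gate *g, long now_ms)
{
    if (now_ms - g->last_ms < g->period_ms)
        return 0;    /* too soon: skip (or fail with EAGAIN) */
    g->last_ms = now_ms;
    return 1;        /* deliver this frame */
}
```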

> > The SFBUF is intended for the entirely different case
> > where the app just wants to put the incoming video on the screen. In
> > this case the app does not and cannot retrieve the video data.
> I see what you are saying. There probably need to be a way for a 'free
> running' mode. IN my Win95 driver I did have something like this, but
> had to unify operations for the WDM driver. I'd say just add a flag to
> some struct instead of a separate Ioctl.

Well, you still have to give the driver the address of the frame buffer,
stride, etc. And you would need a way to start and stop it, because
you're not using read().

And what happens when the user flips to another virtual desktop in fvwm?
IMHO this sort of thing should always be done to a separate mini frame
buffer (on the VGA) that is overlaid or blitted onto the primary
graphics surface. Actually you probably want to add double or triple
buffering to that to remove tearing.

I would think this would be difficult to do right, and would require
special attention from the X server. But, like I said, this is out of my
experience.

> > My understanding is that there is no equivalent to PageLock.
> What ?!?!?. That's a big bummer. There has to be a way to map user
> memory into global context, pagelock it and just DMA video data in.

I know. All versions of Windows did this easily. PageLock was even a
user-mode call!

> Being able to DMA into some memory doesn't mean non-DMA cards cannot
> use memcpy().

Exactly. Non-DMA cards don't need locked memory, but DMA cards do, so
DMA is the more restrictive case. So we need to design a system that
_can_ work for locked memory. And it will automatically be suitable for
non-DMA cards too.

> > > >From reading the API published on
> > > it's not clear how the streaming capture is supported.
> > I just learned there are a couple ioctls not listed in
> > the spec for something *vaguely* like VfW streaming. The driver
> This is too complicated. Ability to DMA into user buffers and
> non-blocking read would do wonders... Of course, I have no idea
> what it would take to be able to map user memory in...

Well, I can give you the non-blocking read...

Keep the comments coming. You are really churning ideas in my head!

Bill.
------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Alan Cox <alan () cymru ! net>
Date:       1998-07-24 10:57:04

> My opinion is that API set shouldn't really contain things like
> VIDIOCSFBUF. As far as capture driver is concerned, it doesn't care if
> the video data goes into frame buffer or main memory to be saved off to
> the disk. All capture driver cares about is pixel format, image
> dimensions, buffer stride and starting address. Clip-capable drivers may
> accept clip lists too. The same idea applies to the overlay and chroma
> keying. Capture driver shouldn't touch display-related devices.

The VIDIOCSFBUF ioctl exists primarily as a security boundary. It is
superuser-only, so it allows the system (or eventually the X server) to
specify the video space. It's not used for non-overlay capture (an
mmap()- or read()-based capture pulls the data to user space).

> it's not clear how the streaming capture is supported. In Win land it
> was the app that was responsible for the stream of buffers passed to the
> driver.

Right now the kernel provides a pair of buffers mapped into user space
and some synchronization primitives. Post-2.2, when we get DMA to
lockable user pages, we can enhance that a bit - it still has
complications, as it allows a user to lock down a lot of system memory,
which is a resource/security issue.

------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Alan Cox <alan () cymru ! net>
Date:       1998-07-24 11:02:21

> My understanding is that there is no equivalent to PageLock. It's not
> possible to DMA into a user-allocated buffer. All buffers that will be

Yes. That's on the 2.3 mm development task list, funnily enough - it's
actually non-trivial to do right under Linux.

> DMA targets must be allocated internally by the kernel driver. Not all
> cards use DMA, but if we are to handle all cases in a consistent manner
> we are constrained by this.

We support mmap(). mmap() asks the kernel to map a system object into
our user space; in the case of the bttv driver this happens to be the
capture buffers. That avoids copying data.
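The effect is easy to see with any file descriptor: map the object, then
touch the bytes directly, with no read() and no copy (peek_mapped is an
invented name; with bttv the fd would be /dev/video and the mapping
would cover the capture buffers):

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map len bytes of the object behind fd into our address space and
 * return its first byte, or -1 if the mapping fails. */
int peek_mapped(int fd, size_t len)
{
    void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return -1;
    int c = ((unsigned char *)p)[0];
    munmap(p, len);
    return c;
}
```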

This I will document (including adding an ioctl so you can ask how many
buffers there are, and the like) for the next API release; right now
it's a bt848-specific interface.

Alan
------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Alan Cox <alan () cymru ! net>
Date:       1998-07-24 11:06:04

> So is read a blocking call ? For true streaming something non-blocking
> is needed.

Read can block or not block, as you wish. There is a generic file flag
for the open/close/read/write POSIX API, O_NDELAY, which means "return
the try-again error if I can't do things right now".

> This is too complicated. Ability to DMA into user buffers and
> non-blocking read would do wonders... Of course, I have no idea what it
> would take to be able to map user memory in...

The basic interface is

	mmap the device
			[video buffer is now in my address space]
	set the video parameters

	loop through the buffers using the sync operations

I assume "free running mode" in Windows is a single capture buffer, or a
ring of them, and you just get whatever is current?

Alan
------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Alan Cox <alan () cymru ! net>
Date:       1998-07-24 11:10:40

> And what happens when the user flips to another virtual desktop in fvwm?

Virtual desktops are fine.

> IMHO this sort of thing should always be done to a separate mini frame
> buffer (on the VGA) that is overlayed or blted on primary graphics
> surface. Actually you probably want to add to that double buffering or
> triple buffering to remove tearing.

Yep. That's something the XFree people have to deal with (and AFAIK
are). The ideal case is YUV422 data to a capture-sized buffer on the
card, overlaid. For this case the X server passes the small overlay
buffer, not the main frame buffer, via the VIDIOCSFBUF ioctl.

Alan
------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Gerd Knorr <kraxel () goldbach ! isdn ! cs ! tu-berlin ! de>
Date:       1998-07-24 18:46:25

In lists.linux.video4linux you write:

>> > >From reading the API published on
>> > it's not clear how the streaming capture is supported. In Win land it
>> > was the app that was responsible for the stream of buffers passed to
>> > the driver.
>> 
>> I just learned there are a couple (experimental?) ioctls not listed in
>> the spec for something *vaguely* like VfW streaming. The driver
>> allocates two buffers. The app tells the driver via ioctl() to capture
>> to one or other of the buffers. The app then calls another ioctl() that
>> blocks until the buffer is full. There is something called mmap() that
>> lets the user read the data somehow. I don't know how that works
>> exactly. [-> Somebody out there fill us in?] Of course, this has
>> problems, like, for starters, two buffers isn't enough.
>> 

>This is too complicated. Ability to DMA into user buffers and
>non-blocking read would do wonders... Of course, I have no idea what it
>would take to be able to map user memory in...

Once again the FAQ...

Currently the bttv driver allocates (non-swappable) kernel memory and
allows the user process to mmap() it.  That way the user application can
access the buffer directly; no copying is required.  DMA to user memory
isn't available yet for 2.1/2.2.  There are a few ioctls to control
grabbing, see below.

  Gerd

-------------------------------------------------------------------------

Initialisation
==============

Grabbing doesn't work if the bt848 chip can't sync; you'll get errno
== EAGAIN then (overlay does work, and gives snow).  You have to make
sure:

  * The driver uses the correct video input (VIDIOCSCHAN)
  * The driver uses the correct TV norm (VIDIOCSCHAN, VIDIOCSTUNER)
  * For TV input: there is some station tuned in.

With VIDIOCGCHAN you can ask for the available input channels and
information about them.


Simple grabbing with mmap()
===========================

With bttv you can mmap() the bttv memory.  There is room for two
frames, therefore you can get 2*BTTV_MAX_FBUF bytes mapped.

	fd = open("/dev/video", ...);
	/* ... initialisation ... */
	map = mmap(0,BTTV_MAX_FBUF*2,PROT_READ|PROT_WRITE,MAP_SHARED,fd,0);

Frame 0 starts at map, frame 1 at map+BTTV_MAX_FBUF.

Ok, that's the preparation; now let's start grabbing.  You have to
fill the parameters (size, format, frame) into a struct video_mmap,
and then do

	ioctl(fd,VIDIOCMCAPTURE,&video_mmap);

This instructs the driver to capture a frame.  The ioctl will return
immediately; the driver will process your request asynchronously
(interrupt driven).  If you want to get the result, you have to wait
for it using

	ioctl(fd,VIDIOCSYNC,&video_mmap.frame);

If your request is still in progress, the ioctl will block until it is
done.  Otherwise it will return immediately.  That's all; now you have
the result in the mmap()ed area.


Advanced grabbing
=================

The scheme outlined above works fine for single frames.  If you want
to do continuous grabbing and keep up with the full frame rate (25 fps
for PAL), it isn't that simple.  As mentioned above, the driver has
room for two frames.  There is room for two grabbing requests too.

The basic idea for handling full speed is to let the bttv driver and
the application work in parallel.  The application processes the
picture in one of the frames, while the driver captures the next
picture to the other one.  It works this way:

	/* ... initialisation ... */

	ioctl(capture frame 0)

loop:
	ioctl(capture frame 1)
	ioctl(sync    frame 0)
	/*
         * this sync returns if the first request (for frame 0) is done
         * the driver will continue with the next one (for frame 1),
         * while the application can process frame 0.  If the
         * application is done, we reuse frame 0 for the next request ...
         */
	ioctl(capture frame 0)
	ioctl(sync    frame 1)
	/*
	 * same procedure, but the other way around: driver captures
         * to frame 0, application processes frame 1
	 */
	goto loop;
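The alternation is easy to get wrong, so here is the same ping-pong
logic as a self-contained model: a stub "driver" that queues at most two
requests and completes them in order, standing in for VIDIOCMCAPTURE and
VIDIOCSYNC (everything below is a simulation, not driver code):

```c
/* FIFO of at most two outstanding capture requests, mirroring
 * bttv's two frame slots. */
static int pending[2], head = 0, count = 0;

static int capture(int frame)      /* VIDIOCMCAPTURE stand-in */
{
    if (count == 2)
        return -1;                 /* only two requests fit */
    pending[(head + count) % 2] = frame;
    count++;
    return 0;
}

static int sync_frame(int frame)   /* VIDIOCSYNC stand-in */
{
    if (count == 0 || pending[head] != frame)
        return -1;                 /* must sync the oldest request */
    head = (head + 1) % 2;
    count--;
    return 0;
}
```

Driving capture(0) once, then capture(1)/sync(0)/capture(0)/sync(1) in a
loop, keeps exactly one request queued while the other frame is being
processed.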


Pitfalls
========

video4linux is work in progress, and there are still interface
changes from time to time due to design bugs.

One problem is that the TV norm (PAL/NTSC/...) is in struct
video_tuner, not struct video_channel.  That's bad if you have a board
without a tuner at all, with a PAL video recorder connected to
Composite1 and an NTSC camera to Composite2...
Fixing this required changes in both structs and the VIDIOCSCHAN
ioctl.

Another one is that the VIDIOCSYNC ioctl used to take no argument at
all; newer versions take the frame number as argument.  The new scheme
is more stable.


Happy hacking,

   Gerd

--
Gerd Knorr <kraxel@cs.tu-berlin.de>
------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-24 19:36:21

Gerd,

thanks for the clarifications.

It seems that the ability to DMA into user buffers will be added to the
kernel at some point. This leads to the necessity of having 2 ways of
allocating/mapping buffers:

1. driver allocated
2. user allocated.

Today method 1 is used, and mmap() is used by the user to get the
buffer pointer. (I don't know how mmap() works - does it result in
calls to the driver? It seems it should; the descriptions of mmap() I
read on the Web don't go into implementation details.)

Arguably, today's implementation is far from optimal, as big chunks of
locked memory are allocated by the driver (upon startup?) that may not
be used at all.

But even with today's memory management, I think streaming should be
done by the user explicitly passing buffers to the driver. So I'm
proposing the following:

1. Allocate data buffers (today it's mmap(); in the future, malloc()
with page lock);
2. Pass the buffers to the driver;
3. Start capture (set the video parameters either prior to, or in, this
ioctl);
4. Start looping like this:

while ( capturing ) {
   while ( BufferNotDone() )
      Sleep();
   ProcessBuffer();
   SendBuffer2Driver(); // read() call ?
   Advance2NextBuffer();
}
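As a sanity check on the proposal, the loop can be modelled in C with a
stub driver that sets the done flag (all names here - buf_hdr,
driver_tick, and so on - are invented for the sketch, not existing API):

```c
#define NBUF 4

struct buf_hdr { int done; int seq; };   /* proposed buffer header */
static struct buf_hdr ring[NBUF];

/* Stub driver: "captures" into the next queued buffer in order. */
static int next_fill = 0, frame_no = 0;
static void driver_tick(void)
{
    if (!ring[next_fill].done) {
        ring[next_fill].seq = frame_no++;
        ring[next_fill].done = 1;
        next_fill = (next_fill + 1) % NBUF;
    }
}

/* App side: one pass of the proposed loop body. */
static int cur = 0, processed = 0;
static int process_one(void)
{
    if (!ring[cur].done)
        return 0;                 /* BufferNotDone(): would Sleep() */
    processed++;                  /* ProcessBuffer() */
    ring[cur].done = 0;           /* SendBuffer2Driver(): requeue */
    cur = (cur + 1) % NBUF;       /* Advance2NextBuffer() */
    return 1;
}
```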

BufferNotDone() can be implemented with the help of the driver, which
sets a flag in a buffer header (whatever the header turns out to be).

I think this approach is quite simple and allows for efficient use of
the capture HW. Also, the user can throttle the capture stream,
preventing buffer overwrites (today that's a reality; the HW keeps
writing to frame 0 or frame 1, even if the app is busy compressing it,
unless the data is copied off first).

An ioctl is needed to query the driver whether any frames/fields were
lost due to missing buffers.

A time-stamping scheme is necessary (the time stamp will go in the
buffer header). And, juiciest of all, an audio synchronisation scheme
is needed.

I think it makes sense to have a middle-tier layer between the capture
device and the app that implements at least the buffer-management
portion of the interface. The user app is called back when a buffer is
done.


Any comments?

------------

List:       linux-video
Subject:    Re: [video4linux] API spec
From:       Gerd Knorr <kraxel () goldbach ! isdn ! cs ! tu-berlin ! de>
Date:       1998-07-24 20:51:58

In lists.linux.video4linux you write:

>Gerd,

>thanx for the clarifications.

>It seems that the ability to DMA into user buffers will be added to the
>kernel at some point. This leads to the necessity of having 2 ways of
>allocating/mapping buffers: 

>1. driver allocated 
>2. user allocated. 

>Today method 1. is used and mmap() is used by the user to get buffer
>pointer ( I don't know how mmap works - does it result in calls to the
> driver ?

Yes.  bttv just maps the driver-allocated pages to user address space.

>Arguably, today's implementaion is far from optimal as big chunks of
>locked memory are allocated by the driver ( upon startup ? ) that may
>not be used at all.

It is allocated at open() and freed at close().

>But even with today's memory management, I think streaming should be
>done by user explicitly passing buffers to the driver. So I'm proposing
>the following:

>1. Allocate ( today it's mmap(); in a future malloc() with page lock )
>data buffers;
>2. Pass buffers to the driver
>3. Start capture ( set the video parameters either prior or in this
>ioctl )
>4. Start looping like this:

>while ( capturing ) {
>   while ( BufferNotDone() )
>      Sleep();
>   ProcessBuffer();
>   SendBuffer2Driver(); // read() call ?
>   Advance2NextBuffer();
>}

>BufferNotDone() can be implemented with the help of the driver that sets
>a flag in a buffer header ( whatever header is ).

The SYNC ioctl does this.  Even better (more unix-ish) would be select()
support.
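For the record, a select()-based wait would look something like this
(demonstrated on a pipe; the mechanism is identical for a device node,
and the helper name is invented):

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Block until fd is readable (a frame is ready) or timeout_ms
 * expires.  Returns 1 if ready, 0 on timeout, -1 on error. */
int wait_readable(int fd, long timeout_ms)
{
    fd_set rfds;
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}
```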

>I think this approach is quite simple and allows for efficient use of
>capture HW. Also, user can throttle the capture stream, preventing
>buffer overwrites ( today it's a reality; HW keeps writing to frame o or
>frame 1, even if the app is busy compressing it, unless data is copied
>off first ).

Wrong.  The driver will not reuse a frame unless you call MCAPTURE 
on that frame again.  If the application can't keep up, the driver
will skip frames.


>An ioctl is needed to query the driver is any frames/fields were lost
>due to missing buffers.

>A scheme of time stamping is nessesary ( time stamp will go in the
>buffer header ). And the juiciest of all is audio syncrhronisation
>scheme is needed.

One of these two is sufficient IMHO.


>I think it makes sense to have a middle-tier layer between the capture
>device and app that implements at least buffer management portion of the
>interface. User app is called back when a buffer is done.

A library for this would be useful.  Color-conversion and other useful
functions could be placed there too...

  Gerd

-- 
Gerd Knorr 
------------
