			   USENET Archives

List:       linux-video
Subject:    [video4linux] [Fwd: New spec proposals #2]
From:       Christopher Blizzard <blizzard () appliedtheory ! com>
Date:       1998-07-27 4:23:24

I'm re-forwarding this because it was too big and bounced.  I've also
updated the web page with the new proposal.

--Chris

ps.  Don't call me Mr...I work for a living.  :)

-------- Original Message --------
Subject: New spec proposals #2
Date: Sun, 26 Jul 1998 15:16:59 -0700
From: "Bill Dirks" <dirks@rendition.com>
To: <video4linux@phunk.org>
CC: <blizzard@appliedtheory.com>

I filled in a little more. The video_format structure is moved out of
the VIDIOC_S_FMT ioctl section and given its own section because it's
used in several ioctls. There is now a more-or-less complete streaming
capture proposal. The video standard (NTSC / PAL / SECAM) is separated
out to its own ioctl.

Mr. Blizzard, thanks for posting the last proposals! Here's an update...

Bill.
["v4lprop.htm" (text/html)]


List:       linux-video
Subject:    Re: [video4linux] [Fwd: New spec proposals #2]
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-27 18:08:32

> Christopher Blizzard wrote:
> 
> I'm re-forwarding this because it was too big and bounced.  I've also
> updated the web page with the new proposal.

Oops. Chris, if it's not too much trouble, next time I'll just mail it
to you, and you put it on your website, and post a message on the list
that it's been updated.

> ps.  Don't call me Mr...I work for a living.  :)

OK, _Chris_  :-)

Bill.
------------
To unsubscribe from this list send mail to majordomo@phunk.org with the
line "unsubscribe video4linux" without the quotes in the body of the
message.

List:       linux-video
Subject:    Re: [video4linux] [Fwd: New spec proposals #2]
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-27 21:44:35

This is a big post, sorry :)

[snip]

My opinion is that any type of video capture should be made more or less
uniform and orthogonal. The fewer destination- and data-type-specific
ioctls that exist, the better and simpler the API will be.

I think it is useful to introduce the notion of a video/data channel.
After the device is opened, a separate ioctl is issued to open a
channel. A channel can be designated for streaming capture or for
overlay. At least 2 more channels are defined for the VBI data, but it
may be practical to combine them into one channel.

It's logical to assume that each video channel is assigned to a specific
video field. But it's possible that field assignments may change. E.g.
when capturing at CIF and below, a single field is used. When a larger
image is used, a channel may need to grab the other field, thus making
the other channel unavailable. Note that this change can occur only in
non-running mode.

Thus a video channel must have the following capability flags: channel
can capture odd field/even field/interlaced fields/alternating fields

VIDCHAN_ODD          video channel can capture odd fields only
VIDCHAN_EVEN         video channel can capture even fields only
VIDCHAN_INTERLACED   video channel can capture odd and even fields,
                     interlacing them automatically
VIDCHAN_ALTERNATING  video channel can capture odd and even fields,
                     alternating them

Any sort of video buffer structure must have a field describing what
kind of video field it is. I think this approach is extendible to the
digital TV realm too.
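The channel flags above could be sketched in C like this (a sketch only: the names come from the list above, but the bitmask encoding and the CIF-size check are my assumptions):

```c
#include <assert.h>

/* Per-channel field capabilities, following the proposed flag names. */
enum {
    VIDCHAN_ODD         = 1 << 0,  /* captures odd fields only */
    VIDCHAN_EVEN        = 1 << 1,  /* captures even fields only */
    VIDCHAN_INTERLACED  = 1 << 2,  /* interlaces both fields automatically */
    VIDCHAN_ALTERNATING = 1 << 3,  /* delivers odd and even fields alternately */
};

/* A CIF-or-smaller capture needs only a single field; anything larger
 * needs a channel that can deliver both fields. */
static int channel_can_capture(int caps, int width, int height)
{
    if (width <= 352 && height <= 288)   /* CIF or below: any field mode */
        return caps != 0;
    return (caps & (VIDCHAN_INTERLACED | VIDCHAN_ALTERNATING)) != 0;
}
```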


My other comments are inline.

> 
>     ---------------------------------------------------------------
> 
> Video for Linux API Proposal
> 
> These proposals do not cover the entire Video for Linux spec. The rest
> of the spec I consider OK as is, or I have no opinion on it.
> 
> Multiple Devices per System
> 
> Drivers should be able to support multiple devices, as long as the
> hardware can do it. It's trivial if the driver writer keeps all global
> variables in a device structure that begins with a struct
> video_device. All entry points into the driver pass in a pointer to
> this structure.
> 
> Multiple Opens per Device
> 
> Supporting multiple simultaneous capture operations on the same device
> is not practical because there is no open handle which can be used to
> differentiate the different capture contexts, and because streaming or
> frame buffer capture is impractical for more than one instance.
> 
> However, it would be really good to support two opens on a device for
> the purpose of having one open be for capturing, and the other for a
> GUI control panel application that can change brightness, select
> inputs, etc. along side the capturing application. A standard video
> control panel that works with all Video for Linux devices and that can
> run concurrently with any capturing application would be very cool,
> and would relieve all the application developers from each having to
> incorporate their own control panel.

Agree.

> 
> [We could also support a scheme where there is one open controlling
> the capture, but other opens have access to the mmapped buffers and
> can select() on the driver.]
> 

That's why I'm proposing the idea of channels.

> 
> 
> Query Capabilities - VIDIOC_G_CAP
> 
> This ioctl call is used to obtain the capability information for a
> video device. The driver will fill in a struct video_capability
> object.
> 
>                       struct video_capability
> char name[32]     Friendly name for this device
> int type          Device type and capability flags (see below)
> int inputs        Number of video inputs that can be selected
> int audios        Number of audio inputs that can be selected
> int maxwidth      Best case maximum image capture width in pixels
> int maxheight     Best case maximum image capture height in pixels
> int minwidth      Minimum capture width in pixels
> int minheight     Minimum capture height in pixels
> int maxframerate  Maximum capture frame rate
> int reserved[4]   reserved for future capabilities
> 
>                Capability flags used in the type field:
> VID_TYPE_CAPTURE     Can capture frames via the read() call
> 
> VID_TYPE_STREAMING   Can capture frames asynchronously into
>                      pre-allocated buffers
> 
> VID_TYPE_FRAMEBUF    Can capture directly into compatible graphics

This shouldn't be here. A capture driver shouldn't concern itself with
where the data is going. Even if capture HW doesn't support capturing
'directly' ( DMA ) into a gfx device, passing a non-DMA capture driver
the address of the frame buffer would work just fine. At least it would
be faster than capturing a frame into mmap()ed memory and then copying
it to overlay memory.

>                      frame buffers
> VID_TYPE_SELECT      Supports asynchronous I/O via the select() call

Don't know exactly how select() works, but is this needed here ? Won't
an async select just turn into sync call if device doesn't support it ?

> VID_TYPE_TUNER       Has a tuner of some form
> VID_TYPE_MONOCHROME  Image capture is grey scale only
> 

This flag is redundant, as color format capabilities will report this
nicely.

> VID_TYPE_CODEC       Can compress/decompress images separately from
>                      capturing
> 
> VID_TYPE_FX          Can do special effects on images separately from
>                      capturing
> 


Where is the video formats capability reported ? Do all the sizes
reported in this struct reflect all possible max sizes ( that vary with
video format ) ?


> Note that the minimum and maximum image capture dimensions are for
> comparison purposes only. The actual maximum size you can capture may
> depend on the capture parameters, including the pixel format,
> compression (if any), the video standard (PAL is higher resolution
> than NTSC), and possibly other parameters. Same applies to maximum
> frame rate. The minimum and maximum sizes do not imply that all
> combinations of height/width within the range are possible. For
> example, the Quickcam has three settings.
> 
> Capture to a frame buffer might not work depending on the capabilities
> of the graphics card, the graphics mode, the X Windows server, etc.
> 
> 
> 
> The Video Image Format Structure - struct video_format
> 
> The video image format structure is used in several ioctls. This
> structure completely defines the layout and format of an image or
> image buffer, including width, height, depth, pixel format, stride,
> and total size.
> 
>                           struct video_format
> int width         Width in pixels
> int height        Height in pixels
> 
> int depth         Average number of bits allocated per pixel. Does
>                   not apply to compressed images.
> int pixelformat   The pixel format or type of compression
> int flags         Format flags
> 
> int bytesperline  Stride from one line to the next. Only applies if
>                   the FMT_FLAG_BYTESPERLINE flag is set.
> 
> int sizeimage     Total size of the buffer to hold a complete image,
>                   in bytes
> 
> The depth is the amount of space in the buffer per pixel, in bits. The
> pixel information may not fill all bits allocated, e.g. RGB555 and
> RGB32. Leftover bits are undefined. For planar YUV formats the depth
> is the average number of bits per pixel. For example, YUV420 is eight
> bits per component, but the U and V planes are 1/4 the size of the Y
> plane so the average bits per pixel is 12. The pixelformat values and
> flags values are defined in the tables below.
> 
> Bytesperline is the number of bytes of memory between two adjacent
> lines. Since most of the time it's not needed, bytesperline only
> applies if the FMT_FLAG_BYTESPERLINE flag is set. Otherwise the field
> is undefined and must be ignored. For YUV planar formats, it's the
> stride of the Y plane.
> 
> Sizeimage is usually either width*height*depth /8 for uncompressed
> images, but it's different if bytesperline is used since there could
> be some padding between lines.
> 
>               Values for the pixelformat and depth fields
> PIX_FMT_RGB555    16  RGB-5-5-5 packed RGB format. High bit undefined
> PIX_FMT_RGB565    16  RGB-5-6-5 packed RGB format
> 
> PIX_FMT_RGB24     24  RGB-8-8-8 packed into 24-bit words. B is at
>                       byte address 0.
> 
> PIX_FMT_RGB32     32  RGB-8-8-8 into 32-bit words. B is at byte
>                       address 0. Top 8 bits are undefined.
> PIX_FMT_GREY      8   Linear grey scale. Greater values are brighter.
>                       YUV, planar, 8 bits/component. Y plane,
> PIX_FMT_YVU9      9   1/16-size V plane, 1/16-size U plane. (Note: V
>                       before U)
>                       YUV 4:2:0, planar, 8-bits per component. Y
> PIX_FMT_YUV420    12  plane, 1/4-size U plane, 1/4-size V plane.
>                       (Note: U before V)
> 
> PIX_FMT_YUYV      16  YUV 4:2:2, 8 bits/component. Byte0 = Y0, Byte1
>                       = U01, Byte2 = Y1, Byte3 = V01, etc.
> PIX_FMT_UYVY      16  Same as YUYV, except U-Y-V-Y byte order
> PIX_FMT_HI240     8   Bt848 8-bit color format
> PIX_FMT_YUV422P8  8   8 bits packed as Y:4 bits, U:2 bits, V:2 bits
> 

How exactly will the depth and pixel format be set for the UYVY and
YUYV, for example ? It seems to be ambiguous right now. I think a single
field that completely describes the pixel format is sufficient.
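The sizeimage rule quoted above can be made concrete (a sketch; the struct layout follows the field table in the proposal, and the helper function is illustrative, not part of the spec):

```c
#include <assert.h>

#define FMT_FLAG_BYTESPERLINE 1   /* value assumed for illustration */

/* Fields as listed in the proposed video_format table. */
struct video_format {
    int width, height;
    int depth;           /* average bits allocated per pixel */
    int pixelformat;
    int flags;
    int bytesperline;    /* valid only with FMT_FLAG_BYTESPERLINE */
    int sizeimage;
};

/* sizeimage = width*height*depth/8 with no line padding, otherwise
 * bytesperline*height to account for padding between lines. */
static int compute_sizeimage(const struct video_format *f)
{
    if (f->flags & FMT_FLAG_BYTESPERLINE)
        return f->bytesperline * f->height;
    return f->width * f->height * f->depth / 8;
}
```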


>             Flags defined for the video_format flags field
> FMT_FLAG_BYTESPERLINE  The bytesperline field is valid
> 

I'd say not needed. Always use stride ( that's always the case for the
overlay and primary surface modes anyway; also, results in smaller and
faster code ).
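In code terms, the always-use-stride argument is that one addressing rule covers both a tightly packed mmap()ed buffer (stride == width * bytes per pixel) and a padded overlay surface (a sketch):

```c
#include <assert.h>
#include <stddef.h>

/* Byte offset of pixel (x, y) in a buffer addressed by stride.
 * Packed buffers are just the special case where the stride equals
 * width * bytes_per_pixel, so overlay and memory capture share one
 * code path. */
static size_t pixel_offset(size_t stride, size_t bytes_per_pixel,
                           size_t x, size_t y)
{
    return y * stride + x * bytes_per_pixel;
}
```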

> FMT_FLAG_COMPRESSED    The image is compressed. The depth and
>                        bytesperline fields do not apply.
> FMT_FLAG_INTERLACED    The image consists of two interlaced fields
> 

See my proposal at the top of the message.

> Reading Captured Images - read()
> 
> This capture mode is supported if the VID_TYPE_CAPTURE flag is set in
> the struct video_capabilities. Each call to read() will fill the
> buffer with a new frame. The driver may fail the read() if the length
> parameter is less than the required buffer size specified by the
> VIDIOC_G_FMT ioctl. This is reasonable since each call to read()
> starts over with a new frame, and a partial frame may be nonsense
> (e.g. for a compressed image) or impractical or inefficient to
> implement in the driver.
> 
> Non-blocking read() mode is supported in the usual way. Read() does
> not work if either streaming capture or hardware frame buffer capture
> is active.
> 

Don't see why capturing to frame buffer should cause read() to fail (
unless both fields are used for overlay ). Not sure how to reconcile
semantics of read() with the channels idea.

> 
> 
> Capturing to a Hardware Frame Buffer - VIDIOC_G_FBUF, VIDIOC_S_FBUF,
> VIDIOC_G_WIN, VIDIOC_S_WIN, VIDIOC_CAPTURE
> 
> This capture mode is supported if the VID_TYPE_FRAMEBUF flag is set in
> the struct video_capabilities. [This is very much like the current
> spec. We might add some get-capture-card-capabilities thing. For
> example the card I have can only DMA YUV4:2:2 data.]
> 

Again, I think capture driver shouldn't concern itself with where data
is going to...
Driver may advise the app that DMA is going to be used or not, for a
particular capture operation; don't know if it's going to be that
useful.

> VIDIOC_S_FBUF sets the frame buffer parameters. VIDIOC_G_FBUF returns
> the current parameters. The structure used by these ioctls is a struct
> video_buffer. Ideally the frame buffer would be a YUV 4:2:2 buffer the
> exact size (or possibly with some line padding) of the capture. It
> could also be the primary graphics surface, though. You must also use
> VIDIOC_S_WIN to set up the placement of the video window.
> 
>                         struct video_buffer
> void *base               Physical base address of the frame buffer.
> struct video_format fmt  Physical layout of the frame buffer
> int flags                Additional frame buffer type flags
> 

Again, not sure I agree with separate approaches to overlay capture and
mem capture.

>              Flags for the struct video_buffer flags field
> FBUF_FLAG_PRIMARY  The frame buffer is the primary graphics surface
> 
> FBUF_FLAG_OVERLAY  The frame buffer is an overlay surface the same
>                    size as the capture
> 

How is a capture device going to use these flags ?

> Note that the buffer is often larger than the visible area, and so the
> fmt.bytesperline field is most likely valid. XFree86 DGA can provide
> the parameters required to set up this ioctl.
> 

As I suggested above, stride is always used.

> VIDIOC_G_WIN and VIDIOC_S_WIN work just like the existing VIDIOCGWIN
> and VIDIOCSWIN ioctls. Except:
> 
>   1. The width and height fields of the struct video_window reflect
>      the width and height of the image on the screen, not the width
>      and height of the capture. In other words, the captured image may
>      appear stretched on screen.
>   2. If the buffer is an overlay surface, the video data is always
>      written into the buffer at coordinate 0,0 at the capture
>      dimensions. (And it is up to X Windows and the application to
>      place the overlay on the screen.)
>   3. These ioctls only apply to frame buffer capture. The capture
>      dimensions are set with the VIDIOC_S_FMT ioctl.
> 

Don't agree these are needed. Width and height are always of the
*capture image*. Stride defines the sub-rect. The image is always
written at 0, 0. It's up to the app ( via X Server ) to change the
starting address if the window is moved.


> VIDIOC_CAPTURE is the same as the existing VIDIOCCAPTURE ioctl.
> 
> 
> 
> Capturing Continuously to Pre-Allocated Buffers - VIDIOC_STREAMBUFS,
> VIDIOC_QUERYBUF, VIDIOC_STREAM, VIDIOC_QBUF, VIDIOC_NEXTBUF,
> VIDIOC_DQBUF
> 

Ah, a juicy one :)

> This capture mode is supported if the VID_TYPE_STREAM flag is set in
> the struct capture_capabilities.
> 
> First, the application must call VIDIOC_STREAMBUFS with the number and
> type of buffers that it wants. 

I say just let the app call mmap() as many times as it wants, or until
it fails.
This is especially true as I don't see how the driver is going to
determine how many buffers it will have to let allocate. The number of
buffers the driver will be able to allocate depends on the data format
and size, so the app must make sure those are set up beforehand.

> Upon return the driver will fill in how
> many buffers it will allow to be allocated. This ioctl takes a struct
> video_streambuffers object, see below. The only flag that's valid on
> VIDIOC_STREAMBUFS is BUF_FLAG_DEVICEMEM. To allocate the buffers call
> VIDIOC_QUERYBUF for each buffer to get the details about the buffer,
> and call mmap() to allocate and map the buffer. 

Too complicated. Just start calling mmap() in a loop.
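The mmap()-in-a-loop idea can be sketched with the mapping call abstracted behind a callback, so the loop's policy is visible (the callback stands in for mmap() on the video device; all names here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

typedef void *(*try_map_fn)(size_t size, int index);

/* Keep mapping fixed-size buffers until the driver refuses or the
 * app's table is full; returns the number actually mapped. */
static int map_buffers(void **bufs, int max, size_t size, try_map_fn try_map)
{
    int n = 0;
    while (n < max) {
        void *p = try_map(size, n);
        if (p == NULL)
            break;              /* driver refused further buffers */
        bufs[n++] = p;
    }
    return n;
}

/* Stub driver for illustration: allows at most 4 buffers. */
static char fake_pool[4][16];
static void *fake_map(size_t size, int index)
{
    (void)size;
    return index < 4 ? fake_pool[index] : NULL;
}
```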

> VIDIOC_QUERYBUF takes
> a struct video_buffer object with the index field filled in to
> indicate which buffer is being queried.
> 
Are you saying that buffers can be different ? Is that what the index
is for ?

> To do the capturing, call VIDIOC_QBUF to enqueue the buffers you want
> to be filled. This ioctl takes a struct video_buffer with the index
> field filled in to indicate which buffer to queue. 

I think the index just complicates things...

> The driver will
> internally queue the buffers in a capture queue. Then call
> VIDIOC_STREAM with the value of 1 to commence the capturing process.

[snip]

> 
> The driver will begin filling the buffers with frame data. Only
> buffers that have been queued will be filled. 

There shouldn't be any buffers but queued ones. Internally the driver
may have some housekeeping structures for proper clean-up ( for bad apps
), but a non-queued buffer is a non-existent buffer as far as the
capture driver is concerned.

> Once a buffer is filled,
> it will not be filled again until it has been explicitly dequeued and
> requeued by the application. 

I say a filled-up buffer is dequeued implicitly.

> The application can sleep until the next
> frame is done by calling VIDIOC_NEXTBUF, or select(). The two are
> equivalent. VIDIOC_NEXTBUF has no parameter. If no buffers are done
> then VIDIOC_NEXTBUF/select() will block until a buffer is done. If
> there is(are) already a buffer(s) done, then VIDIOC_NEXTBUF/select()
> will return immediately. It is not possible to wait on a specific
> buffer if there is more than one buffer queued. Call VIDIOC_DQBUF to
> dequeue the next ready buffer.

Extra work. All this should be implicit. After all, the buffer number
isn't important.

> VIDIOC_DQBUF takes a struct
> video_buffer object. The driver will fill in all the fields. It is not
> possible to dequeue a specific buffer; buffers are always dequeued in
> the order in which they were captured. 

One more reason for the implicit dequeueing.

> The bytesused field indicates
> how much data is in the buffer. After the data has been read out, the
> buffer should be queued up again to keep the frames flowing
> continuously. 

Agreed.

> 
> An application can call VIDIOC_QUERYBUF at any time for any buffer,
> and the driver will return the current status of the buffer. 

Not sure what this is for. You always know that at least one buffer is
being captured to. Others are waiting in line.

> You can
> dynamically throttle the capture frame rate by only queueing buffers
> at the rate you want to capture.
> 

yep.

> Call VIDIOC_STREAM with the value of 0 to turn off streaming. If any
> buffers are queued for capture when streaming is turned off, they
> remain in the queue. 

It might be cleaner/easier for the driver to unqueue all outstanding
buffers.

> Use munmap() to free the buffers.
> 
> There are certain things you can't do when streaming is active, for
> example changing the capture format, reading data through the read()
> call, or munmap()ing buffers.
> 

Yep.

> 
[snip]

> 
> Waiting for Frames Using select()
> 
> The driver supports the select() call on its file descriptors if the
> VID_TYPE_SELECT flag is set in the struct capture_capabilities. If
> neither streaming nor frame buffer capture is active, select() returns
> when there is data ready to be read with the read() call. 

This means the driver has to have an internal buffer for temp storage,
right ? I'd say don't use select() in single-frame capture, just
read(), which will block.

Again, I'm not so sure how to reconcile the idea of video channels with
select() and read().

> If streaming
> capture is running, select() returns when the next buffer is filled.
> The caller should be sure there is a buffer in the queue first. If
> frame buffer capture is running select() returns when the next frame
> has been written to the frame buffer.
> 
> 
> 
> Capture Parms - VIDIOC_G_PARM, VIDIOC_S_PARM
> 
> This is to control various parameters related to video capture. These
> ioctls use struct video_parm objects. The microsecperframe field only
> applies to read() and streaming capture. Capture to frame buffer
> always runs at the natural frame rate of the video.
> 

This may not always be desirable. Rephrase that - sometimes it's good
to have the same frame rate on-screen as in the capture. This will
provide real-time visual feedback on how frames are captured. So there
is no surprise when the file has only half the frames.
[snip]


Hope no-one got bored to death,

Alex

List:       linux-video
Subject:    Re: [video4linux] Driver capabilities
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-28 1:46:28

Alex Bakaev wrote:
> > Query Capabilities - VIDIOC_G_CAP
> >
> >                Capability flags used in the type field:
> > VID_TYPE_CAPTURE     Can capture frames via the read() call
> > VID_TYPE_STREAMING   Can capture frames asynchronously into
> >                      pre-allocated buffers
> > VID_TYPE_FRAMEBUF    Can capture directly into compatible graphics
> 
> This shouldn't be here. Capture driver shouldn't concern itself where
> data is going.

The driver concerns itself with where the data is going for many
reasons:
1. The driver may or may not have to allocate the destination memory
area itself.
2. The final destination could be in a user virtual memory space that
only exists when a particular process is the running process.
3. The final destination may not be known before the capture is
initiated.
4. The destination could be specified by a physical (bus) address
instead of a virtual address.

Each of the above capture modes are distinctly different.


> Even if a capture HW doesn't support 'directly' ( DMA )
> capturing into gfx device, passing a non-DMA capture driver address of
> the frame buffer would work just fine.

No. The frame buffer address is a bus address, not a virtual address,
and may not be accessible to the driver software. Not even with
bus_to_virt().

> > VID_TYPE_SELECT      Supports asynchronous I/O via the select() call
> 
> Don't know exactly how select() works, but is this needed here ? Won't
> an async select just turn into sync call if device doesn't support it?

Either select() support is required or we need this flag.

> > VID_TYPE_MONOCHROME  Image capture is grey scale only
> 
> This flag is redundant, as color format capabilities will report this
> nicely.

Maybe, but one of the purposes of this ioctl is for the case where an
app is scanning all the dev/videoN devices looking for certain
capabilities. It's just convenient to have some flags like this.

And I don't have any color format caps yet.

> Where is the video formats capability reported?

Undefined. It would be nice if there were something, but usually the app
knows what it wants, or has a short list of formats it can use, and can
just try them one by one until it gets a 'hit'. Reporting all the
possibilities and the characteristics and limits of each one in a useful
way is very complicated.
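The try-until-hit negotiation described here can be sketched as a loop over the app's preference list (the try_format callback stands in for a set-format attempt against the driver; the function names and stub are hypothetical):

```c
#include <assert.h>

/* Returns the first format the driver accepts, or -1 if none. */
static int first_accepted_format(const int *prefs, int n,
                                 int (*try_format)(int pixelformat))
{
    for (int i = 0; i < n; i++)
        if (try_format(prefs[i]) == 0)   /* 0 == driver accepted it */
            return prefs[i];
    return -1;
}

/* Stub driver for illustration: accepts only format number 3. */
static int fake_try_format(int pixelformat)
{
    return pixelformat == 3 ? 0 : -1;
}
```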

> Do all the sizes reported in this struct reflect all possible max
> sizes ( that vary with video format ) ?

Nope. It's up to the driver writer how to advertise. It's not worth that
much I agree, especially for devices that can do compressed capture.

Bill.

List:       linux-video
Subject:    Re: [video4linux] streaming capture
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-28 4:39:19

Best for last. :-)

Alex Bakaev wrote:
> > Capturing Continuously to Pre-Allocated Buffers - VIDIOC_STREAMBUFS,
> > VIDIOC_QUERYBUF, VIDIOC_STREAM, VIDIOC_QBUF, VIDIOC_NEXTBUF,
> > VIDIOC_DQBUF
> 
> Ah, a juicy one :)
:)

> > First, the application must call VIDIOC_STREAMBUFS with the number and
> > type of buffers that it wants.
> 
> I say just let app call mmap() as many times as it wants, or until it
> fails.

1. You can't call mmap() yet because you don't know what size to ask
for.
2. You need to allocate the array of buffer pointers so you need to know
how many. The app could also fail if it didn't get what it thought it
needed.
3. I also wanted to have buffer attribute flags. I want to have the
flexibility of asking for buffer types because hardware could have
special memory on board, or semantics or something.

> This is especially true as I don't see how the driver is going to
> determine how many buffers it will have to let allocate.

Well, this is the same problem as determining when to fail mmap(). There
has to be a check of some kind otherwise an app could allocate locked
memory until the whole system freezes or crashes.

> Number of buffers that driver will be able to allocate depend on the data
> format and size, so app must make sure those are setup beforehand.

Yes. I should have stated that.

> > To allocate the buffers call
> > VIDIOC_QUERYBUF for each buffer to get the details about the buffer,
> > and call mmap() to allocate and map the buffer.
> 
> Too complicated. Just start calling mmap() in a loop.

VIDIOC_QUERYBUF gives you the parameters you need to pass to mmap().

> > VIDIOC_QUERYBUF takes
> > a struct video_buffer object with the index field filled in to
> > indicate which buffer is being queried.
> Are you saying that buffers can be different?
> Is that what the index is for ?

I'm saying there's more than one. What's relevant here is the 'offset'
field which will be different on each one, and which you need to pass to
mmap().

> > To do the capturing, call VIDIOC_QBUF to enqueue the buffers you want
> > to be filled. This ioctl takes a struct video_buffer with the index
> > field filled in to indicate which buffer to queue.
> 
> I think the index just complicates things...

We *could* queue buffers in implicit numerical order. Yes, that's not a
bad idea! (I was thinking of the existing VIDIOCMCAPTURE ioctl which
takes a buffer number, and the VfW ADDBUFFER message which also
explicitly indicates the buffer by a buffer header pointer, but it's not
needed for a queue that's entirely driver-managed!)

> > The driver will begin filling the buffers with frame data. Only
> > buffers that have been queued will be filled.
> 
> There shouldn't be any other buffers, but queued ones.

Short answer: A buffer is non-queued while the app is reading it.

Let me tell you what I was thinking when I wrote that. There are three
kinds of buffers. 
1. Buffers in the capture queue waiting to be filled with data.
2. Buffers in the done queue waiting for the app to read the data out. 
3. Buffers that have been removed from the done queue, but have not been
placed back on the capture queue.
Initially all buffers are the third kind. _QBUF puts buffers in the
capture queue. The driver/interrupt moves them from the capture queue to
the done queue. _DQBUF removes them from the done queue. After _QBUF and
before _DQBUF is when the driver can read out the data.

> > Once a buffer is filled,
> > it will not be filled again until it has been explicitly dequeued and
> > requeued by the application.
> 
> I say filled up buffer is dequeued implicitly.

Huh? Then it's non-queued.

> > The application can sleep until the next
> > frame is done by calling VIDIOC_NEXTBUF, or select(). The two are
> > equivalent. VIDIOC_NEXTBUF has no parameter. If no buffers are done
> > then VIDIOC_NEXTBUF/select() will block until a buffer is done. If
> > there is(are) already a buffer(s) done, then VIDIOC_NEXTBUF/select()
> > will return immediately. It is not possible to wait on a specific
> > buffer if there is more than one buffer queued. Call VIDIOC_DQBUF to
> > dequeue the next ready buffer.
> 
> Extra work. All this should be implicit.

I'm not sure what you're saying should be implicit. The app should sleep
until a buffer is ready.

> After all, buffer number isn't important.

The buffer number is needed to read out the data. You have to know which
buffer to read from. Each call to mmap() returns a pointer. The app
would store those in an array of some sort. The index is the index into
that array.

> > VIDIOC_DQBUF takes a struct
> > video_buffer object. The driver will fill in all the fields. It is not
> > possible to dequeue a specific buffer; buffers are always dequeued in
> > the order in which they were captured.
> 
> One more reason for the implicit dequeueing.

I don't understand what you mean by implicit dequeueing.

> > An application can call VIDIOC_QUERYBUF at any time for any buffer,
> > and the driver will return the current status of the buffer.
> 
> Not sure what this is for. You always know that at least one buffer is
> being captured to. Others are waiting in line.

This already exists for when you are mmap()ing buffers. I was just
mentioning that the ioctl always works. You probably don't need it
during capture. Maybe for debugging? There's no additional code in the
driver for this.

> > You can dynamically throttle the capture frame rate by only queueing
> > buffers at the rate you want to capture.
> 
> yep.

Clearly, explicit queueing is needed for that.

> > Call VIDIOC_STREAM with the value of 0 to turn off streaming. If any
> > buffers are queued for capture when streaming is turned off, they
> > remain in the queue.
> 
> It might be cleaner/easier for the driver to unqueue all outstanding
> buffers.

There will be unread buffers that need to be read. The driver must allow
that. 
You're right that there needs to be a way to return the system to a
known state. We could do it explicitly with a yet another ioctl, but I
think it's sufficient to just say "all buffers get flushed (non-queued)
when capture is turned on by VIDIOC_STREAM,1."

Ok, that and VIDIOC_QBUF takes no parameter.

Bill.

List:       linux-video
Subject:    Re: [video4linux] Driver capabilities
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-28 18:40:38

Bill Dirks wrote:
> 
> Alex Bakaev wrote:
> > > Query Capabilities - VIDIOC_G_CAP
> > >
> > >                Capability flags used in the type field:
> > > VID_TYPE_CAPTURE     Can capture frames via the read() call
> > > VID_TYPE_STREAMING   Can capture frames asynchronously into
> > >                      pre-allocated buffers
> > > VID_TYPE_FRAMEBUF    Can capture directly into compatible graphics
> >
> > This shouldn't be here. Capture driver shouldn't concern itself where
> > data is going.
> 
> The driver concerns itself with where the data is going for many
> reasons:

That's what I'm trying to get rid of. There are no reasons why driver
*must* concern itself with that.

> 1. The driver may or may not have to allocate the destination memory
> area itself.

Irrelevant. Buffer allocation functionality is separate from capture
functionality. It's perfectly O.K. for the driver to allocate a buffer
and then have user pass that buffer to the driver.

> 2. The final destination could be in a user virtual memory space that
> only exists when a particular process is the running process.

So page locking/global mapping is required. This doesn't mean driver
should care where the memory lives.

> 3. The final destination may not be known before the capture is
> initiated.

? You are not suggesting we start capture with no buffers ? Even if you
do, what does it change as far as driver is concerned ? Driver just
needs a virtual address.

> 4. The destination could be specified by a physical (bus) address
> instead of a virtual address.
> 
It doesn't have to. I'd say address *always* has to be virtual. It's up
to the driver to obtain physical address if it needs it.

> Each of the above capture modes are distinctly different.
> 

Not really. At least I didn't see any difference developing Windows
bt848 drivers.

> > Even if a capture HW doesn't support 'directly' ( DMA )
> > capturing into gfx device, passing a non-DMA capture driver address of
> > the frame buffer would work just fine.
> 
> No. The frame buffer address is a bus address, not a virtual address,
> and may not be accessible to the driver software. Not even with
> bus_to_virt().
> 

Not sure why ( what is a *bus* address anyway ? ). There may not be a way
to convert a virtual address to physical address ? Then OS must be
updated.

> > > VID_TYPE_SELECT      Supports asynchronous I/O via the select() call
> >
> > Don't know exactly how select() works, but is this needed here ? Won't
> > an async select just turn into sync call if device doesn't support it?
> 
> Either select() support is required or we need this flag.
> 
> > > VID_TYPE_MONOCHROME  Image capture is grey scale only
> >
> > This flag is redundant, as color format capabilities will report this
> > nicely.
> 
> Maybe, but one of the purposes of this ioctl is for the case where an
> app is scanning all the dev/videoN devices looking for certain
> capabilities. It's just convenient to have some flags like this.
> 
> And I don't have any color format caps yet.
> 
> > Where is the video formats capability reported?
> 
> Undefined. It would be nice if there were something, but usually the app
> knows what it wants, or has a short list of formats it can use, and can
> just try them one by one until it gets a 'hit'. 

Or driver reports its color formats to the app and app displays them for
a user to select one.

Reporting all the
> possibilities and the characteristics and limits of each one in a useful
> way is very complicated.
> 

I'd say all that has to be reported is a list of color formats. Then app
must know size restrictions, etc.


Alex

List:       linux-video
Subject:    Re: [video4linux] streaming capture
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-28 18:58:38

Bill Dirks wrote:
[snip]
> >
> > I say just let app call mmap() as many times as it wants, or until it
> > fails.
> 
> 1. You can't call mmap() yet because you don't know what size to ask
> for.

Size is defined by the image dimensions and color format.

> 2. You need to allocate the array of buffer pointers so you need to know
> how many. The app could also fail if didn't get what it thought it
> needed.

You can have an upper limit in the app, like 100 buffers. It's not that
big of an overhead.

> 3. I also wanted to have buffer attribute flags. I want to have the
> flexibility of asking for buffer types because hardware could have
> special memory on board, or semantics or something.
> 

How do you use this special memory ?

> > This is especially true as I don't see how the driver is going to
> > determine how many buffers it will have to let allocate.
> 
> Well, this is the same problem as determining when to fail mmap(). There
> has to be a check of some kind otherwise an app could allocate locked
> memory until the whole system freezes or crashes.
> 

Yeah, driver has to fail mmap() after, let's say, 50 buffers were
allocated.

> > Number of buffers that driver will be able to allocate depend on the data
> > format and size, so app must make sure those are setup beforehand.
> 
> Yes. I should have stated that.
> 
> > > To allocate the buffers call
> > > VIDIOC_QUERYBUF for each buffer to get the details about the buffer,
> > > and call mmap() to allocate and map the buffer.
> >
> > Too complicated. Just start calling mmap() in a loop.
> 
> VIDIOC_QUERYBUF gives you the parameters you need to pass to mmap().
> 

This should be implicit. Each call to mmap() allocates a separate
buffer, so the length always equals the buffer size and the offset is
always zero.

> > > VIDIOC_QUERYBUF takes
> > > a struct video_buffer object with the index field filled in to
> > > indicate which buffer is being queried.
> > Are you saying that buffers can be different?
> > Is that's what index is for ?
> 
> I'm saying there's more than one. What's relevant here is the 'offset'
> field which will be different on each one, and which you need to pass to
> mmap().
> 

See above.

> > > To do the capturing, call VIDIOC_QBUF to enqueue the buffers you want
> > > to be filled. This ioctl takes a struct video_buffer with the index
> > > field filled in to indicate which buffer to queue.
> >
> > I think the index just complicates things...
> 
> We *could* queue buffers in implicit numerical order. Yes, that's not a
> bad idea! (I was thinking of the existing VIDIOCMCAPTURE ioctl which
> takes a buffer number, and the VfW ADDBUFFER message which also
> explicitly indicates the buffer by a buffer header pointer, but it's not
> needed for a queue that's entirely driver-managed!)
> 

Order of buffers is defined by the order in which app sends them down to
the driver.

> > > The driver will begin filling the buffers with frame data. Only
> > > buffers that have been queued will be filled.
> >
> > There shouldn't be any other buffers, but queued ones.
> 
> Short answer: A buffer is non-queued while the app is reading it.
> 
My take: buffer is removed from internal driver queue when interrupt for
that buffer comes.

> Let me tell you what I was thinking when I wrote that. There are three
> kinds of buffers.
> 1. Buffers in the capture queue waiting to be filled with data.
> 2. Buffers in the done queue waiting for the app to read the data out.
> 3. Buffers that have been removed from the done queue, but have not been
> placed back on the capture queue.

> Initially all buffers are the third kind. _QBUF puts buffers in the
> capture queue. The driver/interrupt moves them from the capture queue to
> the done queue. _DQBUF removes them from the done queue. After _QBUF and
> before _DQBUF is when the driver can read out the data.
> 
> > Once a buffer is filled,
> > > it will not be filled again until it has been explicitly dequeued and
> > > requeued by the application.
> >
> > I say filled up buffer is dequeued implicitly.
> 
> Huh? Then it's non-queued.
> 

Maybe I'm thinking in terms of Windows here, but what I mean is that
when a buffer is filled with data, driver signals to app somehow that a
buffer is done. App knows what buffer it is, because it knows the order
in which it sent buffers to the driver. So there is no need to have a
'done' queue anywhere, it's implicit.
As far as the driver is concerned, it only has a queue of buffers
waiting to be captured into.

> > The application can sleep until the next
> > > frame is done by calling VIDIOC_NEXTBUF, or select(). The two are
> > > equivalent. VIDIOC_NEXTBUF has no parameter. If no buffers are done
> > > then VIDIOC_NEXTBUF/select() will block until a buffer is done. If
> > > there is(are) already a buffer(s) done, then VIDIOC_NEXTBUF/select()
> > > will return immediately. It is not possible to wait on a specific
> > > buffer if there is more than one buffer queued. Call VIDIOC_DQBUF to
> > > dequeue the next ready buffer.
> >
> > Extra work. All this should be implicit.
> 
> I'm not sure what you're saying should be implicit. The app should sleep
> until a buffer is ready.
> 

yes, that's what it is. That comment was meant for some other place,
probably :)

> > After all, buffer number isn't important.
> 
> The buffer number is needed to read out the data. You have to know which
> buffer to read from. 

But you do know that implicitly; the order in which you read out the
data is defined by the order in which buffers were sent to the driver.

Each call to mmap() returns a pointer. The app
> would store those in an array or some sort. 

Exactly.

The index is the index into
> that array.
> 

Not needed. Always start from zero, increment each time buffer is read,
then modulo number of buffers.

> >  VIDIOC_DQBUF takes a struct
> > > video_buffer object. The driver will fill in all the fields. It is not
> > > possible to dequeue a specific buffer; buffers are always dequeued in
> > > the order in which they were captured.
> >
> > One more reason for the implicit dequeueing.
> 
> I don't understand what you mean by implicit dequeueing.
> 

Hope I explained myself by now :)

> > > An application can call VIDIOC_QUERYBUF at any time for any buffer,
> > > and the driver will return the current status of the buffer.
> >
> > Not sure what this is for. You always know that at least one buffer is
> > being captured to. Others are waiting in line.
> 
> This already exists for when you are mmap()ing buffers. I was just
> mentioning that the ioctl always works. You probably don't need it
> during capture. Maybe for debugging? There's no additional code in the
> driver for this.
> 
> > > You can dynamically throttle the capture frame rate by only queueing
> > > buffers at the rate you want to capture.
> >
> > yep.
> 
> Clearly, explicit queueing is needed for that.
> 

Yes, but *de*queueing is implicit.

> > > Call VIDIOC_STREAM with the value of 0 to turn off streaming. If any
> > > buffers are queued for capture when streaming is turned off, they
> > > remain in the queue.
> >
> > It might be cleaner/easier for the driver to unqueue all outstanding
> > buffers.
> 
> There will be unread buffers that need to be read. The driver must allow
> that.

So the driver can signal all the outstanding buffers by putting zero in
the data size field, so the app knows the buffers are empty.


Alex

List:       linux-video
Subject:    Re: [video4linux] streaming capture
From:       Gerd Knorr < kraxel () goldbach ! isdn ! cs ! tu-berlin ! de>
Date:       1998-07-28 19:51:58

In lists.linux.video4linux you write:

>> Short answer: A buffer is non-queued while the app is reading it.
>> 
>My take: buffer is removed from internal driver queue when interrupt for
>that buffer comes.

It isn't that simple.  You still need some bookkeeping for select() etc.

>Maybe I'm thinking in terms of Windows here, but what I mean is that
>when a buffer is filled with data, driver signals to app somehow that a
>buffer is done.

It wouldn't work, unless the app already waits for the buffer in a
blocking system call.  There are no kernel-to-userspace callbacks (or
whatever windows uses to signal the application).

   Gerd

-- 
Gerd Knorr < kraxel@cs.tu-berlin.de>

List:       linux-video
Subject:    Re: [video4linux] Driver capabilities
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-29 3:23:41

Alex Bakaev wrote:
> Bill Dirks wrote:
> > Alex Bakaev wrote:
> > > > VID_TYPE_CAPTURE     Can capture frames via the read() call
> > > > VID_TYPE_STREAMING   Can capture frames asynchronously into
> > > >                      pre-allocated buffers
> > > > VID_TYPE_FRAMEBUF    Can capture directly into compatible graphics
> > > This shouldn't be here. Capture driver shouldn't concern itself where
> > > data is going.
> > The driver concerns itself with where the data is going for many
> > reasons:
> [snip]
> > 3. The final destination may not be known before the capture is
> > initiated.
> ? You are not suggesting we start capture with no buffers ? Even if
> you do, what does it change as far as driver is concerned ? Driver
> just needs a virtual address.

And where do you suppose that address comes from?

The driver will capture to a temp buffer to support non-blocking read()
and select() with read() (and optionally to reduce the amount of time
spent blocked in read() for normal blocking read()s without select()).
For DMA devices this will typically be a system memory buffer allocated
by the driver. For non-DMA devices this will typically be on-board frame
buffer memory.

> > Each of the above capture modes are distinctly different.
> 
> Not really.

(sigh)

1. VID_TYPE_CAPTURE
The capture is to a user-space buffer that is passed in to the read()
function. The data must be written to the user-space buffer before the
function can return (or return an error code). The buffer address is not
known to the driver before the call to read(). The next call to read()
will have a different buffer address (in general).

2. VID_TYPE_STREAMING
The capture is to driver-allocated buffers in locked kernel space, or
maybe onboard memory that is mapped to the processor space. There are
several buffers, pre-allocated, and mmap()ed to the client address
space. The buffers are cycled in driver-managed queues. The client reads
the buffers through pointers it got when they were mmap()ed. There is a
handshaking protocol with the client so the client can control the
buffer queuing, sync with the capture, find out when buffers are filled,
and read out the data in a way that's guaranteed to be safe.

3. VID_TYPE_FRAMEBUF
This is automatic preview. Can potentially run concurrently with the
above capture modes, but if so, at lower priority. Capture is to a
single locked buffer that stays the same as long as frame buffer preview
is on. Capture is to a bus address [or virtual address that can be
converted to a bus address, if it's possible] that is really on a
graphics card. The address comes from the X Windows server (done right).
The image format matches the graphics frame buffer format, which is a
different format from the capture format used by the other modes. The
app never sees the data.


There's no way to put all that under one umbrella.

Inside the driver, at some lower level, they all might (depends on the
hardware!) be implemented in pretty much the same way, use pretty much
the same microcode, etc. From the outside the semantics are very
different.

> At least I didn't see any difference developing Windows
> bt848 drivers.

All that means is the Bt848 is a one-trick pony. Other hardware might
use different tricks for different things.

> > > Even if a capture HW doesn't support 'directly' ( DMA )
> > > capturing into gfx device, passing a non-DMA capture driver
> > > address of the frame buffer would work just fine.
> Not sure why ( what is *bus* address anyway ? ). There may not be a way
> to convert a virtual address to physical address ? Then OS must be
> updated.

I'm not so sure bus addresses can always be converted to virtual
addresses. Can a Linux expert comment?

> > > Where is the video formats capability reported?
> > Undefined. It would be nice if there were something, but usually
> > the app knows what it wants, or has a short list of formats
> 
> Or driver reports its color formats to the app and app displays
> them for a user to select one.

Yes. That is a very good reason to have format enumeration. For this you
would want for each format:
1. PIX_FMT_* code.
2. Short name (for drop-down list box)
3. Description (for a helpful description line)
4. Whether it's a compressed format or not (to have a Quality Factor
slider)
[5. Maybe min and max dimensions.]


Bill.

List:       linux-video
Subject:    Re: [video4linux] streaming capture
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-29 4:38:59

Alex Bakaev wrote:
> Bill Dirks wrote:
> > > I say just let app call mmap() as many times as it wants, or until it
> > > fails.
> > 1. You can't call mmap() yet because you don't know what size to ask
> > for.
> Size is defined by the image dimensions and color format.

No. These are driver-managed buffers. The size is determined by the
driver. The size of a capture buffer will most likely need to be rounded
up to the nearest multiple of the system page size. Plus I want to have
an extensible, general-purpose mmap()ed memory area allocation protocol
that can be used for any type of mapped memory area that the driver or
device might want to support (and I'm thinking all types of video
devices). It makes sense to use it for the memory-mapped capture
buffers.

> > 3. I also wanted to have buffer attribute flags. I want to have the
> > flexibility of asking for buffer types because hardware could have
> > special memory on board, or semantics or something.
> 
> How do you use this special memory ?

I just want something that's extensible. I imagine video compressing or
video processing cards could have all kinds of mappable memory on them.
I hope I'm laying down a foundation that can be built upon by many
future devices.

> Yeah, driver has to fail mmap() after, let say 50 buffers were
> allocated.

This is definitely an open question. I hope a Linux expert can help.
Anyway it's a driver implementation issue, and not spec'ed.

> > > > To allocate the buffers call
> > > > VIDIOC_QUERYBUF for each buffer to get the details about the buffer,
> > > > and call mmap() to allocate and map the buffer.
> > >
> > > Too complicated. Just start calling mmap() in a loop.

Only two ioctls: a request and a return-the-info. You do need the
feedback.

> > VIDIOC_QUERYBUF gives you the parameters you need to pass to mmap().
> This should be implicit. Each call to mmap() allocates a separate
> buffer, so the length always equals the buffer size and the offset is
> always zero.

I want to do it with a general-purpose mappable-memory request thingee.
Length has to come from the driver. Offset has to be different for each
one otherwise you're allocating the same buffer over and over!

> > I'm saying there's more than one. What's relevant here is the 'offset'
> > field which will be different on each one, and which you need to pass to
> > mmap().
> See above.

See above. :-)

> [bunch of confusion about streaming deleted]
> 
> Maybe I'm thinking in terms of Windows here,

Ah! Of course! I don't know why I didn't see it before. You're
describing the VfW streaming paradigm. We can't do it that way
here. I'll explain. -->

> but what I mean is that when a buffer is filled with data, driver
> signals to app somehow that a buffer is done.

There are two things we can't do in Linux we could in Windows. The above
is one. The other is that we can't DMA to user-allocated memory (I want
something to use with 2.2).

We can't DMA to user memory, so the driver has to allocate and keep all
buffers. To access the data, the client mmap()s the buffers to its
memory space. In VfW the client sends buffers to the driver to be
queued. Here the buffer is always in the driver, so there is no sending
of buffers. The client indicates when a buffer should be queued with
VIDIOC_QBUF, which takes no parameter, and the driver queues one of the
buffers. The buffers are all alike, and it doesn't matter which buffer
is queued, so the driver decides which buffer to queue. Queue order is
not spec'ed. It doesn't need to be, and actually *shouldn't* be, read
on....

Capturing is like VfW. Data goes into the buffer at the head of the
capture queue. When done the buffer is automatically removed from the
capture queue.

Now here's where the no-signal-to-the-client issue comes in. In VfW the
driver makes a callback which effectively interrupts the client app with
the new frame. There's nothing like that here, so the client must
explicitly call the driver to fetch the buffer. But the client may be
stuck doing something else, and the driver can't interrupt it, and it
could be any amount of time (multiple frame-times) before the client
checks for full buffers, so the filled buffers have to go into a done
queue inside the driver. When the client is finally ready to get some
more data, it calls VIDIOC_DQBUF which dequeues a buffer off the done
queue.

The client now reads the data from the buffer. The client takes care not
to call VIDIOC_QBUF until it has finished reading. That guarantees that
the buffer will not be overwritten before the client is done reading it.


> App knows what buffer it is, because it knows the order
> in which it sent buffers to the driver.
> > The buffer number is needed to read out the data. You have to
> > know which buffer to read from.
> But you do know that implicitly; the order in which you read out the
> data is defined by the order in which buffers were sent to the driver.
> > Each call to mmap() returns a pointer. The app
> > would store those in an array or some sort.
> Exactly.
> > The index is the index into that array.
> Not needed. Always start from zero, increment each time buffer
> is read, then modulo number of buffers.

No! You would have the driver with its buffer index counter, and the
client also trying to recreate the same counter. That makes two counters
doing the same thing with no feedback between them, which if they got
out of sync you'd be screwed and have no way to recover, or even know
about it. It's bad design, and I refuse to spec that. There must be one
master and the other a slave, always guaranteed synced.

In VfW the driver passes the LPVIDEOHDR back to the client to explicitly
indicate which buffer has been dequeued.

We will also do it explicitly. VIDIOC_DQBUF will fill in a struct
video_buffer, which is a lot like the VfW VIDEOHDR structure. Serves the
same purpose. VIDEOHDR has a pointer to the actual buffer data, struct
video_buffer has an index to indirectly indicate that pointer. (In other
words the buffer index counter is passed from the driver to the client
on each frame. Much better.)

It also simplifies the spec because I don't have to spec the queue
order, and simplifies the app design because it doesn't have to try to 
recreate internal driver variables.


> Hope I explained myself by now :)

Finally figured out what you were saying! I hope *I*'ve explained
*myself*! :)

> > There will be unread buffers that need to be read. The driver
> > must allow that.
> So the driver can signal all the outstanding buffers putting zero in
> data size field, so app knows buffers are empty.

Misunderstanding here, my fault. I want the app to be able to read the
remaining filled buffers. Otherwise there are frames missing from the
end of the movie. I've modified the spec for this already.

Bill.

List:       linux-video
Subject:    Re: [video4linux] Driver capabilities
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-29 16:36:09

Bill,

I agree that from outside these 3 modes look different. The biggest
difference is with read(), I suppose; that may change somewhat when the
capability to DMA into user buffers is added.

Let me ask again: does it really make sense to treat a video capture
device like a file ? Certainly read() can be treated as a special case
of streaming capture. What would happen if read() wasn't supported ?

As for frame buffer capture, I think a single flag to say 'don't unqueue
buffers' will be enough to make it look exactly like streaming capture.
In my Win95 driver overlay 'capture' didn't even generate interrupts.
Why so much opposition to getting rid of VID_TYPE_FRAMEBUF ?

Alex


List:       linux-video
Subject:    Re: [video4linux] streaming capture
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-29 17:40:03

Bill,

my comments inline

Bill Dirks wrote:
> 
> Alex Bakaev wrote:
> > Bill Dirks wrote:
> > > > I say just let app call mmap() as many times as it wants, or until it
> > > > fails.
> > > 1. You can't call mmap() yet because you don't know what size to ask
> > > for.
> > Size is defined by the image dimensions and color format.
> 
> No. These are driver-managed buffers. The size is determined by the
> driver. The size of a capture buffer will most likely need to be rounded
> up to the nearest multiple of the system page size. 

This is fine. User may not know that actual buffer is larger.

Plus I want to have
> an extensible, general-purpose mmap()ed memory area allocation protocol
> that can be used for any type of mapped memory area that the driver or
> device might want to support (and I'm thinking all types of video
> devices). It makes sense to use it for the memory-mapped capture
> buffers.
> 

What can be more general than an app just calling mmap() any time it
wants. Keep in mind that separate chunk of memory is allocated by the
driver for each mmap() call. There is no one big pool from which mmap()
gets its pointers.

> > > 3. I also wanted to have buffer attribute flags. I want to have the
> > > flexibility of asking for buffer types because hardware could have
> > > special memory on board, or semantics or something.
> >
> > How do you use this special memory ?
> 
> I just want something that's extensible. I imagine video compressing or
> video processing cards could have all kinds of mappable memory on them.
> I hope I'm laying down a foundation that can be built upon by many
> future devices.
> 

O.K.

> > Yeah, driver has to fail mmap() after, let say 50 buffers were
> > allocated.
> 
> This is definately an open question. I hope a Linux expert can help.
> Anyway it's a driver implementation issue, and not spec'ed.
> 

Fine.

> > > > > To allocate the buffers call
> > > > > VIDIOC_QUERYBUF for each buffer to get the details about the buffer,
> > > > > and call mmap() to allocate and map the buffer.
> > > >
> > > > Too complicated. Just start calling mmap() in a loop.
> 
> Only two ioctls: a request and a return-the-info. You do need the
> feedback.
> 

But app already set the image parameters. It knows everything about data
buffers ( except maybe the fact that the actual buffer size was rounded up ).

> > > VIDIOC_QUERYBUF gives you the parameters you need to pass to mmap().
> > This should be implicit. Each call to mmap() allocates a separate
> > buffer, so the length always equals the buffer size and the offset is
> > always zero.
> 
> I want to do it with a general-purpose mappable-memory request thingee.
> Length has to come from the driver. Offset has to be different for each
> one otherwise you're allocating the same buffer over and over!
> 
No. Each mmap() causes a new kernel buffer to be allocated. Besides
being simpler, I think this may be better for the system, as no single
big chunk is allocated (but this is just my guess at how mm works on
Linux).

> > > I'm saying there's more than one. What's relevant here is the 'offset'
> > > field which will be different on each one, and which you need to pass to
> > > mmap().
> > See above.
> 
> See above. :-)
> 

See above. :-)

> > [bunch of confusion about streaming deleted]
> >
> > Maybe I'm thinking in terms of Windows here,
> 
> Ah! Of course! I don't know why I didn't see it before. You're
> describing the VfW streaming paradigm. We can't do it that way
> here. I'll explain. -->
> 

I think we can :)

> > but what I mean is that when a buffer is filled with data, driver
> > signals to app somehow that a buffer is done.
> 
> There are two things we can't do in Linux we could in Windows. The above
> is one. The other is that we can't DMA to user-allocated memory (I want
> something to use with 2.2).
> 

These two don't prevent us from doing what I'm describing. The first is
solved by the driver keeping a 'done' queue. The second is solved by the
driver allocating the buffers (much like the VfW mode where the driver
allocates the buffer; in the early stages of my Win95 driver I used this
method to allocate physically contiguous memory).

> We can't DMA to user memory, so the driver has to allocate and keep all
> buffers. 

The key here is allocate. It certainly doesn't have to keep them. The
maximum that's required (depending on how mm works on Linux) is an array
of pointers to the allocated *kernel* (not mmap()ed) buffers. If it is
possible to obtain the kernel address from an mmap()ed address, then
even that array is not needed.


> To access the data, the client mmap()s the buffers to its
> memory space. In VfW the client sends buffers to the driver to be
> queued. Here the buffer is always in the driver, so there is no sending
> of buffers. 

My approach is to use the driver to allocate buffers, but not to manage
them. Queueing is done by the app sending the buffer address. You want
to let the driver decide which buffer to queue and pass its address to
the app. I think both approaches are fine; it's just somewhat unusual
for the driver to supply buffers to the app.

> The client indicates when a buffer should be queued with
> VIDIOC_QBUF, which takes no parameter, and the driver queues one of the
> buffers. The buffers are all alike, and it doesn't matter which buffer
> is queued, so the driver decides which buffer to queue. Queue order is
> not spec'ed. It doesn't need to be, and actually *shouldn't* be, read
> on....
> 

> Capturing is like VfW. Data goes into the buffer at the head of the
> capture queue. When done the buffer is automatically removed from the
> capture queue.
> 
> Now here's where the no-signal-to-the-client issue comes in. In VfW the
> driver makes a callback which effectively interrupts the client app with
> the new frame. There's nothing like that here, so the client must
> explicitly call the driver to fetch the buffer. 

Can't an app sit in a thread in signal() and wake up when a buffer is
done? Or sit in a thread watching the buffer header until the 'done'
flag is set? That's how I'd do it. This works especially well with my
scheme because the driver doesn't have to pass the buffer address to the
app. Buffer order is implicit, defined by the order in which the app
sends buffers to the driver.

> But the client may be
> stuck doing something else, and the driver can't interrupt it, and it
> could be any amount of time (multiple frame-times) before the client
> checks for full buffers, so the filled buffers have to go into a done
> queue inside the driver. 

In my case there is no need for the 'done' queue. The driver sets the
'done' flag, and the app will see it.


> When the client is finally ready to get some
> more data, it calls VIDIOC_DQBUF which dequeues a buffer off the done
> queue.
> 

No need for that in my case.

> The client now reads the data from the buffer. The client takes care not
> to call VIDIOC_QBUF until it has finished reading. That guarantees that
> the buffer will not be overwritten before the client is done reading it.
> 
Same applies to my case.

> > App knows what buffer it is, because it knows the order
> > in which it sent buffers to the driver.
> > > The buffer number is needed to read out the data. You have to
> > > know which buffer to read from.
> > But you do know that implicitly; the order in which you read out the
> > data is defined by the order in which buffers were sent to the driver.
> > > Each call to mmap() returns a pointer. The app
> > > would store those in an array of some sort.
> > Exactly.
> > > The index is the index into that array.
> > Not needed. Always start from zero, increment each time a buffer
> > is read, then take it modulo the number of buffers.
> 
> No! You would have the driver with its buffer index counter, and the
> client also trying to recreate the same counter. 

I never said the driver has to have an index. I was saying the opposite:
only the app should have one. The driver has a queue of buffers that is
filled by buffers passed in by the app. I can see that in your case that
queue may be the array of allocated kernel buffers. This can save some
memory, perhaps.

That makes two counters
> doing the same thing with no feedback between them, which if they got
> out of sync you'd be screwed and have no way to recover, or even know
> about it. It's bad design, and I refuse to spec that. There must be one
> master and the other a slave, always guaranteed synced.
> 

Again, I never meant to have two indexes. All I meant was that the order
in which buffers are sent by the app, and the order in which the app
must read 'done' buffers, is 100% defined.

> In VfW the driver passes the LPVIDEOHDR back to the client to explicitly
> indicate which buffer has been dequeued.
> 

Internally in my drivers (16-bit and a VxD) I relied on implicit buffer
ordering when sending data pointers to the VxD. The VxD just called back
into the 16-bit driver to say 'buffer is ready'. The 16-bit driver knew
which buffer was ready. As a matter of fact, due to latencies, more than
one buffer could be ready.

> We will also do it explicitly. VIDIOC_DQBUF will fill in a struct
> video_buffer, which is a lot like the VfW VIDEOHDR structure. Serves the
> same purpose. VIDEOHDR has a pointer to the actual buffer data, struct
> video_buffer has an index to indirectly indicate that pointer. (In other
> words the buffer index counter is passed from the driver to the client
> on each frame. Much better.)
> 

In my case the app just advances the index into its array of pointers
obtained from mmap().

> It also simplifies the spec because I don't have to spec the queue
> order, and simplifies the app design because it doesn't have to try to
> recreate internal driver variables.
> 

In my case there are no internal driver variables. The app controls the
order in which buffers are queued up by the driver.

> > Hope I explained myself by now :)
> 
> Finally figured out what you were saying! I hope *I*'ve explained
> *myself*! :)
> 

Do you still feel the same way? :)

[snip]
> Misunderstanding here, my fault. I want the app to be able to read the
> remaining filled buffers. Otherwise there are frames missing from the
> end of the movie. I've modified the spec for this already.
> 

I think this can be done with my approach (by looking at the 'done'
flag).

Alex
------------
To unsubscribe from this list send mail to majordomo@phunk.org with the
line "unsubscribe video4linux" without the quotes in the body of the
message.

List:       linux-video
Subject:    Re: [video4linux] streaming capture
From:       Aaron Colwell <colwaar () charlie ! cns ! iit ! edu>
Date:       1998-07-29 19:15:27


Alex, Bill & Gerd,

 What are your backgrounds in Linux programming? It seems like you are
arguing about how stuff works in Windows, and it seems like you don't
know much about how Linux drivers do things. I suggest that you look at
the Kernel Hackers' Guide
(http://www.linuxhq.com/guides/KHG/HyperNews/get/khg.html). Perhaps
that will help answer some of the questions about what you can and
cannot do through device drivers in Linux. The book "Linux Device
Drivers" is also a great source of information. I don't mean to say
that you don't know what you are talking about, but perhaps it will
clear up a bunch of the questions you seem to have about read, select,
mmap, unique "handles" for opening the device, and opening a device
multiple times. If you have any questions about Linux stuff I can try
to answer them, but I am sort of tired of reading these "my Windoz
driver is better" messages. I welcome all of the input that is being
put on the list, but if there are questions about how things work, they
need to be asked and not relegated to "well, this is how it is done in
Windoz." This is just my opinion.

Aaron Colwell


List:       linux-video
Subject:    Re: [video4linux] streaming capture
From:       alexb () jetsuite ! com (Alex Bakaev)
Date:       1998-07-29 19:32:18

Aaron,

I appreciate the pointer. I think it's going to be quite useful to me as
I'm just starting with Linux.

I'm sorry you got the impression the discussion was about whose Windows
driver is better. That certainly wasn't my intention. Maybe it's just
that it doesn't hurt to look at what's been done elsewhere? Certainly
video capture in Windows has been around much longer.

Regards,
Alex


List:       linux-video
Subject:    [video4linux] End of Bill/Alex discussion(?)
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-30 2:26:46

Apologies to those annoyed by the recent overabundance of messages.
I'll bring the last bits of the discussion together here.

Bill.

---  [CAPTURE, STREAM, FRAMEBUF]
Alex Bakaev wrote:
> I agree that from outside these 3 modes look different.

That's sufficient reason to keep them separate.

> The biggest difference is with read(),

The biggest difference is with framebuf, since it's a display operation,
and the others are capture operations. It can be implemented with
overlay or genlock devices that don't have capture.

> Let me ask again: does it really make sense to treat a video capture
> device like a file ?

That's the Unix design philosophy. Everything's a file. That's how Unix
works, that's how it operates. It's neat. :-)
# cat /dev/video0 | filter | showimages

---  [Memory mapped buffers]
>What can be more general than an app just calling mmap() any time it
>wants. Keep in mind that separate chunk of memory is allocated by the
>driver for each mmap() call.

Buffers can exist ahead of time, might be on the device, etc. There is
also the possibility that the buffer is allocated at mmap() time. :-)

Bill> 'Offset' has to be different for each one otherwise you're
Bill> allocating the same buffer over and over
> No. Each mmap() causes a new kernel buffer to be allocated.

For our purpose, a mmap() call will need to specify which buffer is
to be mapped. The offset parameter indicates that. The driver will fill
in offset in VIDIOC_QUERYBUF. The app passes that value back in mmap().
The driver will know what the "offset" value means and map the
appropriate buffer.

---  [queuing stream capture buffers]
>And queueing is done by the app sending the buffer address.

Sending a user-space address to a driver to be queued is nonsensical.

Don't try to put Windows' design on Linux. It's the wrong approach.
Would you have suggested passing pointers like that if you had never
seen VfW?

I tried to start from how Linux works and design something that fits. I
am not a Linux expert. I've written one video4linux driver--big deal. I
wish more experienced Linux programmers would comment. Please?

Bill.

List:       linux-video
Subject:    [video4linux] Aaron says 'enough'
From:       Bill Dirks <dirks () rendition ! com>
Date:       1998-07-30 4:28:50

Aaron Colwell wrote:
> Alex, Bill & Gerd,
>  What are you backgrounds in Linux programming?

Since I've been proposing dramatic spec changes, I think I should answer
that.

I have written a video4linux driver for the Winnov Videum video capture
card, both PCI and ISA versions. On the PCI card the driver supports
shared interrupts and bus-mastering. The buffer for bus-mastering is
vmalloc()ed, and the driver traverses the page tables to get the bus
addresses of the buffer's pages and build the scatter list. The ISA card
has no interrupt, so I use the timer to poll it. Capture is through the
read() routine only. Before read() returns a frame, it initiates the
capture of the next frame, so when read() is called again the next frame
is ready or almost ready. That reduces or eliminates time spent blocked
in read(), and greatly enhances application performance. I wrote a
simple X Windows test app that captures and paints the video on the
screen using XPutImage calls. I have both an ISA Videum and a PCI Videum
in my machine, and have been running both of them simultaneously and
continuously for days. The driver appears perfectly solid.

I have worked with Video for Windows for four years, and written or
worked on drivers, codecs, "ActiveX controls" and applications for video
capture. I have some exposure to the Windows kernel streaming
architecture introduced in Win98/NT5.

> It seems like you are arguing about how stuff works in Windows

Sometimes I explain things to Alex in terms of similar Windows
calls/structures because he understands them.

> and it seems like you [don't] know much
> about how linux drivers do things.

No, I don't. I said from the beginning that I'm new to Linux and will
need help. I don't know why I haven't gotten any more feedback from the
experienced Linux programmers, and I'm distressed about it. Is everybody
just ignoring or dismissing this effort? 

> I suggest that you look at the Kernel Hackers Guide

Thanks.

> The book " Linux Device Drivers" is also a great source

I have this. I wrote my driver with it. It has a few small errors and
obsolete bits in it, but overall it's fantastic.

> questions you seem to have about read, select, mmap,
> unique "handles" for opening the device,
> opening a device multiple times.

The first three I think I understand, but have not used select or mmap
before. For the last I was hoping Alan Cox would make the appropriate
changes to videodev.c and explain to me how it worked. :-) :-)

Bill.
