List: linux-video
Subject: [video4linux] Starting a v4l driver
From: Bill Dirks <dirks () rendition ! com>
Date: 1998-06-23 18:39:42

[This is a repost. Sorry if this message appears twice. This is my first post to a mailing list, and I am trying to figure out how to do it. The Welcome message you get when you subscribe doesn't tell you how to post. I tried sending the message to video4linux@phunk.org, but it never appeared. I just re-subscribed, and the Welcome message has different email addresses in it than it did in May. Now I will try sending to video4linux@odin.appliedtheory.com...]

Hello, Video for Linux people, Mr. Cox.

I intend to write a v4l driver for the Winnov Videum AV/VO PCI capture card, and I have a few questions. I am a newcomer to Linux, so please bear with me if some of these questions have 'obvious' answers.

First, a little info: This card consists of an analog video decoder chip connected to a proprietary Winnov chip which can do image scaling, tone control, compression, etc. There is an SRAM frame buffer big enough to hold the whole image. Finally, there is a PCI bus interface chip. All hardware registers are accessible through I/O ports. The PCI chip can copy data from the on-board SRAM to system RAM via PCI bus mastering. You can also read the SRAM in slave mode from the processor via I/O ports.

I have access to all the technical specs for this card, and I have Winnov's permission to release the source for the completed Linux driver. I have worked on Windows drivers for this card. I am doing this project at home in my spare time for fun.

My equipment:
System: 166MHz K6, 32MB RAM, S3 ViRGE w/4MB, IDE CD-ROM, SB 16
Primary hard drive: 6GB IDE, FAT32 Windows OSR2
Secondary HD: 800MB IDE, ext2 Linux + a swap partition

I bought the RedHat 5.0 Boxed Set and patched the kernel up to 2.0.34 with patches I got from www.kernel.org. I understand that is the latest 'release' version of Linux. (It also gives me access to the OSR2 drive, which is *really* convenient.)

I also have:
The _Linux_Device_Drivers_ book from O'Reilly (the one with the bronco on the cover) (I'm on Chapter 2 so far.)
The draft Video for Linux spec from http://roadrunner.swansea.uk.linux.org/v4l.shtml (which I can't access right now, BTW!)

Ok, the questions:
1. I am hoping the above is a suitable platform for my development work. Is it?
2. In what directory should I put the driver source? I figure either /usr/src/linux/drivers/misc or /usr/src/linux/drivers/v4l?
3. Where can I find the v4l header files that define the structures and symbols mentioned in the spec?
4. How do I register the driver? In other words, how does an app find it?
5. Where can I find a simple app to test the driver?
6. I wonder if I can get some advice about which ioctls are the critical ones to implement first?

I figure I'll start with a slave-mode implementation first, in which case the card is conceptually easy to operate:
0. Configure the card (I/O range, IRQ) on the PCI bus.
1. Program all the capture settings into it.
2. Give it the command to capture. It captures an image into the on-board RAM and produces an interrupt when it's done (or you can leave interrupts off and just poll the ISR register).
3. Read out the image through the I/O ports and put it in a buffer.
4. Convert the data to RGB. (This card always gives you YUV data.)

All of that I understand how to do in principle, but doing it in the context of a Linux driver is where I need help. I guess I'm just asking for any advice or guidance anyone can offer to help me get started.
Thanks much.
Bill.
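For step 4 of the capture sequence above, a conversion routine along these lines could live in the driver or in a small helper. This is only a sketch: the BT.601-style coefficients and the packed Y0-U-Y1-V byte order are assumptions, since the message does not give the Winnov hardware's exact colourspace.

/*
 * yuyv2rgb.c - hypothetical sketch of step 4: convert packed YUYV
 * (YUV 4:2:2, Y0-U-Y1-V byte order) into RGB24, 3 bytes per pixel.
 * Integer approximations of the usual BT.601 conversion are assumed.
 */
static unsigned char clamp(int v)
{
        return v < 0 ? 0 : (v > 255 ? 255 : (unsigned char)v);
}

void yuyv_to_rgb24(const unsigned char *src, unsigned char *dst,
                   int width, int height)
{
        int i, j;

        /* Each 4-byte YUYV group carries two pixels sharing one U and V. */
        for (i = 0; i < width * height / 2; i++) {
                int y0 = src[0], u = src[1] - 128;
                int y1 = src[2], v = src[3] - 128;

                for (j = 0; j < 2; j++) {
                        int y = (j == 0) ? y0 : y1;

                        *dst++ = clamp(y + ((359 * v) >> 8));            /* R */
                        *dst++ = clamp(y - ((88 * u + 183 * v) >> 8));   /* G */
                        *dst++ = clamp(y + ((454 * u) >> 8));            /* B */
                }
                src += 4;
        }
}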
List: linux-video
Subject: [video4linux] Re: Starting a v4l driver
From: Bill Dirks <dirks () rendition ! com>
Date: 1998-06-25 3:11:40

First off, thanks for the info, Alan.

> No problem - it's worth my time answering them if you then write the
> driver not me ;)

I'll be happy to write the driver, sounds like a fun project. You just help me with the spec. :)

> > proprietary Winnov chip which can do image scaling, tone control,
> > compression, etc. There is an SRAM frame buffer big enough to hold
> Some of that may need the API extending. We haven't really addressed
> chips that do clever things with scaling internally yet

Well, all that means is the hardware will reduce the image to the pixel dimensions (width and height) the app wants.

> Right now they live in /usr/src/linux/drivers/char - as they are
> "character devices" in Unix terms even though maybe not in the sense of
> being terminal interfaces.

It's a character device? So the pixels just come out one by one like you were reading a serial port or something? Is there any sense of frame identity, e.g. one frame per buffer, or does the first pixel of the next frame just follow the last pixel of the previous frame, with no indication that one frame has ended and another begun besides the count of the bytes that have been read?

> You really need to grab 2.1.106 and read one of the other drivers. The
> Pro Movie Studio driver may be the best place to start from - also
> because it's all GPL - making a copy of pms.c and changing it to drive
> your card is encouraged rather than bad ;)

I will get these.

> For basic testing things like "cat" are great since a read from the
> device gives you a frame to look at.

Cat? That cracks me up. :) Won't cat just run and run, or will it take one frame and stop?

> > 0. Configure the card (I/O range, IRQ) on the PCI bus.
> The PCI bios will have done that on a PC. On non-PC machines the boot
> code will do it for you.

I noticed the Winnov card listed in /proc/pci. So I just parse out the IRQ and I/O port and go with it? I don't have to set anything?

> > 4. Convert the data to RGB. (This card always gives you YUV data.)
> Nothing requires you do that, you can simply support YUV422 capture
> only if you wish. That will confuse some of the existing tv apps
> but that needs fixing anyway...

Well, I will support the common RGB formats so that as many apps as possible can use it. I think you should add:

VIDEO_PALETTE_YUYV    YUV 4:2:2, 8 bits per component, Y0-U-Y1-V byte order
VIDEO_PALETTE_YUV420  YUV 4:2:0 planar, Y-plane U-plane V-plane; the U and V planes are 1/2 the width and 1/2 the height of the image

The first is what the Winnov board supports natively. Also, most overlay graphics cards support YUYV as an overlay format. The second is the input to most common video compression standards, including the H.320, H.323, and H.324 video conferencing standards.

> I'll email you the pms driver to look at now (but not to the list 8))

I still haven't received any messages from the list since June 20, but I discovered I can read them from the archive site. Not ideal, but it will have to do. Maybe the problem is at my end.

Bill.
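Alan's "cat the device" tip works because the driver hands back one frame per read(). A minimal user-space test along those lines might look like the sketch below; the /dev/video0 node name and the 320x240 YUYV frame size are assumptions for illustration only (a real test would first ask the driver for the current capture format).

/* grabone.c - read one frame from the video character device and save it. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define WIDTH       320
#define HEIGHT      240
#define FRAME_BYTES (WIDTH * HEIGHT * 2)   /* packed YUYV: 2 bytes/pixel */

int main(void)
{
        unsigned char *frame = malloc(FRAME_BYTES);
        int fd = open("/dev/video0", O_RDONLY);  /* device name assumed */
        FILE *out;
        ssize_t n;

        if (!frame || fd < 0) {
                perror("setup");
                return 1;
        }

        /* One read() of a frame's worth of bytes returns one captured frame. */
        n = read(fd, frame, FRAME_BYTES);
        if (n < 0) {
                perror("read");
                return 1;
        }

        out = fopen("frame.raw", "wb");
        if (!out) {
                perror("frame.raw");
                return 1;
        }
        fwrite(frame, 1, (size_t)n, out);
        fclose(out);
        fprintf(stderr, "captured %ld bytes to frame.raw\n", (long)n);

        close(fd);
        free(frame);
        return 0;
}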
List: linux-video
Subject: Re: [video4linux] Re: Starting a v4l driver
From: Alan Cox <alan () cymru ! net>
Date: 1998-06-25 10:51:35

> Well, all that means is the hardware will reduce the image to the pixel
> dimensions (width and height) the app wants.

That is covered.

> It's a character device? So the pixels just come out one by one like you
> were reading a serial port or something? Is there any sense of frame
> identity, e.g. one frame per buffer, or does the first pixel of the next
> frame just follow the last pixel of the previous frame, with no
> indication that one frame has ended and another begun besides the count
> of the bytes that have been read?

Character doesn't mean "one char at a time". In the Unix world it means anything which isn't structured block storage (i.e. a disk).

> one frame and stop?

It'll do several until stopped, but the viewers ignore "trailing garbage" ;)

> I noticed the Winnov card listed in /proc/pci. So I just parse out the
> IRQ and I/O port and go with it? I don't have to set anything?

You can just use the PCI BIOS functions the kernel provides. They'll also work on the Alpha, Sparc64, etc. There are lots of examples of this in the kernel.

> I think you should add:
> VIDEO_PALETTE_YUYV    YUV 4:2:2, 8 bits per component, Y0-U-Y1-V byte order
> VIDEO_PALETTE_YUV420  YUV 4:2:0 planar, Y-plane U-plane V-plane; the U and V
> planes are 1/2 the width and 1/2 the height of the image

By all means.

> I still haven't received any messages from the list since June 20, but I
> discovered I can read them from the archive site. Not ideal, but it will
> have to do. Maybe the problem is at my end.

Alan
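For reference, "use the PCI BIOS functions" on a 2.0/2.1-era kernel would look roughly like the sketch below: find the device, then read the I/O base and IRQ back out of configuration space instead of parsing /proc/pci. The vendor and device IDs are placeholders (the real values come from Winnov's specs), and the function name is made up for illustration.

/* Sketch only: locate the card via the kernel's PCI BIOS interface. */
#include <linux/pci.h>
#include <linux/bios32.h>

#define WINNOV_VENDOR_ID  0x0000   /* placeholder, not the real ID */
#define WINNOV_DEVICE_ID  0x0000   /* placeholder, not the real ID */

static int videum_find_card(unsigned int *io_base, unsigned char *irq)
{
        unsigned char bus, devfn;
        unsigned int addr;

        if (!pcibios_present())
                return -1;
        if (pcibios_find_device(WINNOV_VENDOR_ID, WINNOV_DEVICE_ID, 0,
                                &bus, &devfn) != 0)
                return -1;

        /* The BIOS has already assigned these; the driver only reads them. */
        pcibios_read_config_dword(bus, devfn, PCI_BASE_ADDRESS_0, &addr);
        *io_base = addr & ~3;   /* low bits of an I/O BAR are flag bits */
        pcibios_read_config_byte(bus, devfn, PCI_INTERRUPT_LINE, irq);
        return 0;
}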
List: linux-video
Subject: Re: [video4linux] Re: Starting a v4l driver
From: Alan Cox <alan () cymru ! net>
Date: 1998-06-25 11:15:50

> BTW when you have more than one processor you need more than two mmap
> buffers - with a threaded app you can have both processors consuming a
> full buffer each while the bt848 is producing the third. That way you
> have 1/15 second to compress each frame instead of 1/30. Maybe even
> generalize this to (#processors + 1) mmap buffers?

I'm still trying to work out the "right" way to handle all of this. My guess is the number of buffers should depend on the frame size, not be fixed.

Alan
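One way to read "depend on the frame size" is to carve a fixed capture area into however many whole frames fit, with a floor of two so capture and the application can overlap. A rough sketch of that tradeoff follows; the 2 MB area size is purely an assumption for illustration, not anything from the thread.

#define CAPTURE_AREA_BYTES (2 * 1024 * 1024)   /* assumed mmap window size */
#define MIN_FRAMES 2                           /* always allow double-buffering */

static int frames_for(unsigned long frame_bytes)
{
        unsigned long n = CAPTURE_AREA_BYTES / frame_bytes;

        return n < MIN_FRAMES ? MIN_FRAMES : (int)n;
}
/* e.g. 640x480 YUYV (614400 bytes) -> 3 buffers; 160x120 YUYV -> dozens. */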