
5. Having Problems?

5.1 How do I allow remote connections?

Bugs #97893 (specific to Linux) and #97889 (for Solaris, considered closed) affect sqlexecd, the daemon that listens for remote clients and fires off an sqlexec process for each connection: when the sqlexec session terminates, it leaves a zombie in your process table (see What are these zombies in my process table? and How do I prevent them?). This caveat aside, if you still insist, do the following:
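A minimal sketch of the listener setup looks like this. The service name sqlexec and port 1536 are the conventional choices rather than requirements (see The port I'm supposed to use is already being used!), and servername stands in for your own server name, as in the LD_PRELOAD example later in this section:

```shell
# Register a TCP service for the listener in /etc/services.
# "sqlexec" and port 1536 are the conventional choices; any free
# port works.
echo 'sqlexec        1536/tcp' >> /etc/services

# Start the daemon on the machine that holds the database; it forks
# an sqlexec process for each incoming client connection.
$INFORMIXDIR/lib/sqlexecd servername &
```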

5.2 What are these zombies in my process table?

They are Informix's way of saying, "We take a licking and keep on ticking!" ;-) Seriously, they are the manifestation of Bug #97893. It seems to occur most often when using sockets (sesoctcp) to provide remote access to a database. There are also reports of it occurring when using unnamed pipes (seipcpip) on local connections.

In an interesting twist, Bug #97893 has been fixed in the glibc release, but a new bug, #101155, has been entered: the SEIPCPIP connection protocol (pipes) does not work on the RedHat 5.1 platform.

5.3 How do I prevent them?

Jonathan Leffler ( jleffler@informix.com) posted a work-around, nozombie.c, that is used in the same manner as nohup. Jonathan's code and his remarks follow. Note that this is not an official (approved by Informix) fix, and there are also reports that it does not work in every case. YMMV.

The explanation is fairly simple -- if the process is ignoring the signal
SIGCHLD, then it doesn't accumulate zombie children.  The program sets the
signal handling mode for SIGCHLD to SIG_IGN and then runs whatever it was 
given as arguments.  If this happens to be sqlexecd, it seems to ignore the
SIGCHLD signals, thereby leaving no zombies around.

/*
@(#)File:            $RCSfile: nozombie.c,v $
@(#)Version:         $Revision: 1.1 $
@(#)Last changed:    $Date: 1998/08/20 21:24:40 $
@(#)Purpose:         Prevent process from accidentally creating zombies
@(#)Author:          J Leffler
@(#)Copyright:       (C) JLSS 1998
@(#)Product:         :PRODUCT:
*/
 
/*TABSTOP=4*/
 
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
 
#ifndef lint
static const char rcs[] = "@(#)$Id: nozombie.c,v 1.1 1998/08/20 21:24:40 jleffler Exp $";
#endif
 
/*
** Exec program specified by arguments with SIGCHLD signals ignored.
** This ensures that unless the program re-enables the SIGCHLD signal
** handling, it does not leave zombies around, even if it doesn't
** clean up behind its children.  This works on POSIX.1 systems (such
** as Solaris 2.6 and Linux) pretty straight-forwardly.
**
** Motivation: the initial version of sqlexecd 7.24.UC1 on Linux
** caused problems with lots of zombies.
**
**      nozombie $INFORMIXDIR/lib/sqlexecd [service]
*/
 
int main(int argc, char **argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "Usage: %s command [argument ...]\n", argv[0]);
        return EXIT_FAILURE;
    }
    signal(SIGCHLD, SIG_IGN);   /* exited children are reaped automatically */
    execv(argv[1], &argv[1]);   /* replace this process with the command */
    fprintf(stderr, "Failed to execv() %s\n", argv[1]);
    return EXIT_FAILURE;
}
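Assuming the code is saved as nozombie.c, building and using the wrapper might look like this (the service name sqlexec is an assumption; use whatever name you registered in /etc/services):

```shell
# build the wrapper
gcc -o nozombie nozombie.c

# launch the daemon through it, so SIGCHLD is ignored before the exec
./nozombie $INFORMIXDIR/lib/sqlexecd sqlexec
```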

If Jonathan's code doesn't solve it, try the following, which appeared recently in informix.idn.linux. It fixes the zombie problem by overriding the signal function with a version built on sigaction.

First, create a signalfix.c as follows:


#include "signal.h"  /* the local copy, with the signal() declaration
                        commented out (see below) */

void (*signal(int signum, void (*handler)(int)))(int)
{
  struct sigaction sa, old;

  sa.sa_handler = handler;
  sigemptyset(&sa.sa_mask);   /* block no additional signals in the handler */
  sa.sa_flags = SA_RESTART;   /* restart interrupted system calls */
  if (sigaction(signum, &sa, &old) < 0)
    return SIG_ERR;
  return old.sa_handler;      /* previous handler, as signal() should return */
}

Next, make a local copy of /usr/include/signal.h and comment out the declaration of the signal function in it. Then compile signalfix.c as follows:

$ gcc -fpic -shared signalfix.c -o libsig.so

Finally, run sqlexecd with:

$ LD_PRELOAD=/root/sqlexecfix/libsig.so $INFORMIXDIR/lib/sqlexecd servername

5.4 DBACCESS seg faults in my xterm!

The immediate problem is that dbaccess apparently allocates a static buffer to hold the termcap/terminfo entry, and the xterm entry is longer than this buffer can hold. One day Real Soon Now (c), I'll report this to Informix, hopefully in time to get it fixed in the next release cycle.

In the meantime, work-arounds include:

  1. Change the $TERM environment variable to something other than xterm, such as linux, vt220, or vt100.
  2. Modify the relevant entry in termcap or terminfo (after making a backup copy, of course). dbaccess does not use the "ti" or "te" capabilities, so they can be deleted. This works, but it will affect all xterm sessions, including those that actually use the ti/te entries.
  3. Clone the xterm entry as, say, xterm-dbaccess, delete the "ti" and "te" entries from the clone, and set the $TERM environment variable to xterm-dbaccess when you want to run dbaccess in an xterm window.
  4. Use the replacement termcap/terminfo file available at http://www.informix.com/idn-secure/Linux/WebPages/termcap.html. Note the warnings and suggestions listed there.
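On a terminfo system, the third option can be sketched with the standard infocmp and tic tools (the entry name xterm-dbaccess is just a suggestion; smcup and rmcup are the terminfo names for termcap's ti and te capabilities):

```shell
# dump the xterm entry, rename it, strip the smcup/rmcup (ti/te)
# capabilities, and compile the result as a new entry
infocmp xterm \
  | sed -e 's/^xterm|/xterm-dbaccess|/' \
        -e 's/smcup=[^,]*, //' -e 's/rmcup=[^,]*, //' \
  | tic -

# then run dbaccess under the new entry
TERM=xterm-dbaccess dbaccess
```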

Roger Allen ( rja@sis.rpslmc.edu) recommends the third option, explaining:

Either make a copy of the xterm entry in /etc/termcap with a different
name and use the new name as your TERM setting or change the current
entry.  Somewhere in an appendix of some of the Informix manuals is a
list of the fields that Informix uses.  I usually delete the ti and te
fields.  You also may be able to add the special Informix entries that
will enable color, line drawing characters, and more function keys, but
that is more for the Informix tools than DB-Access.

5.5 The port I'm supposed to use is already being used!

Use another one. The canonical Informix port is 1536, but sqlexecd really doesn't care which port it listens on, so long as the port is not already in use by another service.
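A quick way to check whether a port is free before registering it (the port numbers here are only illustrative):

```shell
# is anything registered on, or listening at, 1536?
grep -w 1536 /etc/services
netstat -an | grep ':1536 '

# if so, register a different free port for the sqlexec service
echo 'sqlexec        1537/tcp' >> /etc/services
```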

5.6 Can I use an NFS-mounted filesystem?

No. Rather than mounting the filesystem that holds your database via NFS:

  1. The remote host must be running sqlexecd, and
  2. you must access the remote database over the network.

Non-Linux versions of SE enforce this in the binaries, but this may not be the case with the Linux product. As Jonathan Leffler advises, "expect problems if you try to cheat -- data corruption problems."

Whether this is enforced by the binaries or not, mounting your database on an NFS mount is a bad idea for (at least) two reasons:

  1. Access will be slow, because NFS is slow.
  2. If you lose your NFS mounts, you're screwed, and who knows what happens to your database files.

