Subject: compress/out of memory
Date: Tue, 17 Dec 91 13:13:35 CST
From: johnsonm@stolaf.edu (Michael K. Johnson)
To: Linux-activists@joker.cs.hut.fi


This may have already been discussed, but I can't find it:
uncompress gives me "out of memory" "segmentation violation"
errors on some files, for instance uniq.Z as found on tsx-11.
I am running .11 on a 386sx-16.  Anyone have any ideas?  I had
no problems with compress under .10...

Also, just a note to whoever wants to know -- I made a new filesystem
with the mkfs distributed with .11, and used the -c flag, which was
supposeed to eliminate all my hd i/o errors when I was through.  Now,
it may well be my hard drive, but I don't know.  If anyone working on
filesystem stuff wants any details, I will be glad to provide them.

michaelkjohnson
johnsonm@stolaf.edu
I don't do .sig's

Subject: Re: compress/out of memory
Date: Tue, 17 Dec 91 14:30:10 -0500
From: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
To: johnsonm@stolaf.edu
Cc: Linux-activists@joker.cs.hut.fi
In-Reply-To: Michael K. Johnson's message of Tue, 17 Dec 91 13:13:35 CST,
Reply-To: tytso@athena.mit.edu

   Date: Tue, 17 Dec 91 13:13:35 CST
   From: johnsonm@stolaf.edu (Michael K. Johnson)

   This may have already been discussed, but I can't find it:
   uncompress gives me "out of memory" "segmentation violation"
   errors on some files, for instance uniq.Z as found on tsx-11.

One of the things that will cause that is a corrupted .Z file.  Are you
sure you FTP'ed uniq.Z with the binary mode set?  It is possible that
uniq.Z on tsx-11 is corrupted, but I would rather doubt it, since I've
been able to uncompress it using a unix zcat.  

							- Ted

Subject: kermit
Date: Tue, 17 Dec 91 15:13:15 CST
From: johnsonm@stolaf.edu (Michael K. Johnson)
To: Linux-activists@joker.cs.hut.fi


Regarding my mailing a few hours ago, apparently my "uniq.Z" was corrupted.
However, this is worth noting, because it is not corrupted on tsx-11, where
I got it, and it is not corrupted on the unix system I ftp'd to either.  I
transfered it to my system using kermit, and it was apparently then that it
was corrupted.  I used
kermit -i -s uniq.Z
to send the file, and thought that image or ascii is determined by the
sender.  I *was* using the new kermit.  Is this a misunderstanding, a bug,
or a feature?

thanks much!

michaelkjohnson
johnsonm@stolaf.edu
I don't do .sig's

Subject: C++?
Date: Tue, 17 Dec 91 16:50:49 -0500
From: raeburn@ATHENA.MIT.EDU (Ken Raeburn)
To: linux-activists@joker.cs.hut.fi

Has anyone tried bringing up g++ on linux?

On a related note, are any other gcc2 developers or testers reading this
list, or should I start hacking on it when I get a machine (probably in
January or February)?

Ken

P.S.  If I understand correctly, c386 works on the small-memory machines
but doesn't handle ANSI C.  Has anyone tried using Ron Guilmette's
"unprotoize" program?  It includes a set of patches to gcc 1.something
(which have also been incorporated in gcc2) and a program that can
rewrite source files to add or remove prototypes and change function
definition forms.  A converted version of 0.11 might let the c386 users
get enough work done to get the VM system up to handling gcc.  It'd just
require someone who can use gcc on their machine to do the conversion.
(I *don't* advocate making converted versions of every release, or even
necessarily more than one.  If being able to use c386 is the goal, rather
than getting linux up so the VM system can be worked on, then c386 should
just be fixed.)  I'd volunteer to do it myself, if I had a machine to
work on, and if I knew someone would do the VM work...unfortunately,
neither is the case right now.

Subject: one bug, possible other problems
Date: Wed, 18 Dec 1991 00:48:24 +0200
From: Linus Benedict Torvalds < torvalds@cc.helsinki.fi>
To: Linux-activists@joker.cs.hut.fi

> This may have already been discussed, but I can't find it:
> uncompress gives me "out of memory" "segmentation violation"
> errors on some files, for instance uniq.Z as found on tsx-11.
> I am running .11 on a 386sx-16.  Anyone have any ideas?  I had
> no problems with compress under .10...

As already has been noted, this can be (and in this case seems to have
been) due to corrupted Z-files.  However, if anybody else sees the "out
of memory" error, even though there should be enough memory there is
always another possibility: if your "/etc/passwd" (or possibly
"/etc/group") files are not in the right format, it triggers a bug in
the old library which will use up all your memory.  Great fun.  Check
that all entries in your "/etc/passwd" have the right nr of colons (even
if there is no shell name, the last colon is supposed to exist).  This
same bug shows up in all programs that use the "getpwent()" library call
with the old lib: ls -l, chown et.al.  Easy to correct the /etc/passwd
file, and then this should go away. 

Then on to a /real/ bug, which needs kernel recompilation, but isn't
very noticeable: the execve system call incorrectly clears all signal
handler addresses. It should leave the "SIG_IGN" addresses as is: nohup
etc programs depend on that. The fix is easy:

linux/fs/exec.c: do_execve(), right after the comment "point of no
return" there is the for-loop that clears the sa.sa_handler entries.
Change it from (something like)

	for (i=0 ; i<32 ; i++)
		current->sa[i].sa_handler = NULL;

to

	for (i=0 ; i<32 ; i++)
		if (current->sa[i].sa_handler != SIG_IGN)
			current->sa[i].sa_handler = NULL;

Additionally you need to include signal.h at the top of the file. (Note
that this "patch" may not be exact - this is from memory).

> [ kermit send/receive binary ]

I use kermit to receive files, and even for terminal emulation, now that
linux correctly handles the ISIG flag. I always "set filetype binary" in
both ends, but don't know if it's really necessary (but it's easy to
make a ".kermrc" file in your home-directory to do these kinds of setups
automatically). I'd suggest you do likewise: then I know it works.

> Has anyone tried bringing up g++ on linux? [ gcc2 and c386 ]

I've been thinking about this (hi tytso :), and I will port gcc-2.0 (or
1.99) as soon as it is available.  That should contain g++.  Probably in
january-february says tytso.  During the holiday, I will implement VM
(real paging to disk): I've been going over the algorithms in my head,
and I think it can be done relatively easily, so hopefully we can scrap
the c386 compiler (sorry, blum).  The paging algorithm will at least in
the beginning be extremely simple: no LRU or similar, just a simple "get
next page in the VM-space".  Note that VM isn't (even with a good
algorithm) a "holy grail" - I don't want to be on the same machine when
someone uses gcc in 2M real mem.  Slooooowww.  4M will definitely be
recommended even with VM, and 1M machines won't work with linux even
with the VM (you /do/ need some real memory too :-).

I have this small question when it comes to the swap-space setup - two
different possibilities:

1 - a swap-device of it's own (ie own partition)

2 - file-system swap-area.

There are a number of pros and cons: (1) is faster, easier to implement,
and will need no changes when the filesystem eventually is changed to
accept >64M etc.  (2) has the good side that it means that it would be
easy to change the size of the swap-area, but if something bad happens,
it could probably trash the filesystem easily.  I'll probably implement
only (1) for now, but I'd like to hear what you say (and ignore it :-). 

		Linus

Subject: compress and VM
Date: Tue, 17 Dec 91 15:08:24 PST
From: pmacdona@sol.UVic.CA (Peter MacDonald)
To: linux-activists@joker.cs.hut.fi

I have seen a small problem with compress.  uncompress *.Z fails to 
uncompress some of the files specified.  Failure prints some 
message like EINVAL or something (-; sorry, its been weeks)
But using uncompress afterwards on the individual file works.
I wouldn't mention it, but for the compress problems posted.

Certainly the easiest swap to disk to implement is the best,
but using fdisk can be traumatic.  Perhaps the way MS Windows
(shudder) does it would be acceptable.  ie.  use the file system
but set the swap size at the boot up time (a number in the kernel 
ala root device?) and allocate a single large file.  

Thus changing the swap size 
requires only a reboot - after modifying the kernel.

Subject: Re: one bug, possible other problems
To: linux-activists@joker.cs.hut.fi
Date: 18 Dec 91 19:18:14 MET (Wed)
From: zaphod@petruz.sublink.org (Pietro Caselli)


> > Has anyone tried bringing up g++ on linux? [ gcc2 and c386 ]

Hmmmm, I'd like to test myself on gcc2 but ... where can I find it ?
( I asked archie but got no response :-( ) 
 
> I have this small question when it comes to the swap-space setup - two
> different possibilities:
> 
> 1 - a swap-device of it's own (ie own partition)
> 
> 2 - file-system swap-area.
> 
> There are a number of pros and cons: (1) is faster, easier to implement,
> and will need no changes when the filesystem eventually is changed to
> accept >64M etc.  (2) has the good side that it means that it would be
> easy to change the size of the swap-area, but if something bad happens,
> it could probably trash the filesystem easily.  I'll probably implement
> only (1) for now, but I'd like to hear what you say (and ignore it :-). 
 
Ok, start discarding my hints :-).

Solution 1) is the best, is faster, cleaner and easyer to mantain. But with
the fu###ing limitation of four partition for HD it gives to me a lot of
trouble. ( And think to many other )  Now I've a Dos partition, 2 Minix
partitions ( root and /usr ) and a Linux partition. I can't add any more :-(
 When Linux gets older, i.e when it'll have virtual consoles, pttys etc 
I'll be pleased to kill minix but ... I know It needs time. 

So, why not add both 1) and 2) ? 

( Christmass holidays are long, I'll go skiing but I think you'll have 
  plainty of time to make both :-) )

> 		Linus

Ciao. 
      Pietro


   Pietro Caselli                      | 
   internet: zaphod@petruz.sublink.org |      IF YOU MEET THE BUDDHA 
           : pietro@deis33.cineca.it   |       ON THE ROAD,KILL HIM. 
   Mail    : V. Pietro Mattiolo, 4     |
             40139 Bologna ITALY       | 

Subject: Re: one bug, possible other problems
Date: Fri, 20 Dec 91 11:03:59 -0500
From: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
To: linux-activists@joker.cs.hut.fi
In-Reply-To: Pietro Caselli's message of 18 Dec 91 19:18:14 MET (Wed),
Reply-To: tytso@athena.mit.edu

   X-Mailer: W-MAIL 3.64/MINIX (11/13/90)
   Date: 18 Dec 91 19:18:14 MET (Wed)
   From: zaphod@petruz.sublink.org (Pietro Caselli)

   Solution 1) is the best, is faster, cleaner and easyer to mantain. But with
   the fu###ing limitation of four partition for HD it gives to me a lot of
   trouble. ( And think to many other )  Now I've a Dos partition, 2 Minix
   partitions ( root and /usr ) and a Linux partition. I can't add any more :-(
    When Linux gets older, i.e when it'll have virtual consoles, pttys etc 
   I'll be pleased to kill minix but ... I know It needs time. 

One solution to this which I've been kicking about, but haven't had time
to implement, is run-time configurable partitions.  So you could set up
a 96 meg partition for Linux (at least as far as the MS-LOSS
partitioning scheme is concerned) and have /dev/hd3 refer to it.  Then
in /etc/rc, you call a program which configures /dev/hd16 to be the last
32 megs of /dev/hd3, using ioctl's.  Turn on swapping on /dev/hd16, and
you're all set.

							- Ted