Subject: virtual consoles, shared text and paging
Date: Wed, 13 Nov 91 19:47:57 PST
From: pmacdona@sol.UVic.CA (Peter MacDonald)
To: linux-activists@joker.cs.hut.fi

Following are the three things required before I would 
consider leaving minix:

  shared-text
  virtual-consoles
  9600 serial

Is anyone out there interested or working on these?
I would be willing to take a stab at, say, virtual
consoles a la Gordon Irlam, if I could get init/login/ttys
from Ari Lemmke first.  Comments?


Next I want to babble about something I know nothing about: paging

For those of us with small disks, devoting some of our precious space
to a paging partition is undesirable.  Which brings me to ask if we
truly need paging at all.  Is it possible/practical/desirable to
try to page executables from the file system?  A working-set size
could be established, and when RAM got tight, little-used pages
could be freed for data requirements.  Maybe code blocks should only
be loaded on demand?

Perhaps on invocation, a map could be formed indicating where each
block of an executable was on disk.  Of course, if the image file
were subsequently deleted, then you would have a problem.
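
Roughly what I am imagining, as a sketch only (every name in it,
exec_map, read_block, page_address, is hypothetical):

/* Hypothetical per-process map from text pages to disk blocks. */
extern void read_block(int dev, unsigned long block, char *addr);
extern char *page_address(unsigned long page);

struct exec_map {
        int dev;                /* device the image lives on */
        unsigned long nr_pages; /* pages in the text segment */
        unsigned long *block;   /* block[i] = disk block of page i */
};

/* Called from the page-fault handler when a text page is absent:
   instead of going to a swap partition, reread the page from the
   executable's own disk blocks. */
static int load_text_page(struct exec_map *map, unsigned long page)
{
        if (page >= map->nr_pages)
                return -1;      /* fault outside the image */
        read_block(map->dev, map->block[page], page_address(page));
        return 0;
}

Holding a reference to the image's inode for the life of the process
would presumably cover the deletion case, since the blocks would not
actually be freed until the last reference went away.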

Since I am as near illiterate on OS paging system designs as
you can get, feel free to enlighten me.

Subject: linux, compilers and paging
Date: Fri, 15 Nov 91 13:11:35 PST
From: pmacdona@sol.UVic.CA (Peter MacDonald)
To: linux-activists@joker.cs.hut.fi

One of the things I find annoying about minix is having multiple compilers
(just bcc and gcc).  Using just one compiler (gcc in linux) really helps
adhere to the KISS principle.  But as Robert Blum points out, gcc is
a hog.  I guess this is why we see the 1.7 Meg buffer caches.

While paging might be nice, you still have to reload gcc N times to compile
an N-module program, for the typical makefile.  The only other way I see
around the huge buffer cache is to implement the sticky bit to keep a
program's pages in memory.

The other alternative, #ifdef GCC, #ifdef C386 etc. throughout the code, and
having multiple object file types about, is even less savory.  I would
rather live with setting and clearing sticky bits before and after heavy
compile sessions.  I wonder what others think, though?
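
For the record, setting the bit would just be "chmod +t" from the
shell, or from C something like this (the gcc path is only an
example):

#include <sys/types.h>
#include <sys/stat.h>

/* Set the sticky bit on gcc so its text stays resident; clear it
   again after the compile session with mode & ~S_ISVTX. */
int main(void)
{
        struct stat st;

        if (stat("/usr/bin/gcc", &st) < 0)
                return 1;
        return chmod("/usr/bin/gcc", (st.st_mode & 07777) | S_ISVTX) < 0;
}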

Subject: Important!  Bug in /usr/include/ctype.h
Date: Fri, 15 Nov 91 18:42:58 -0500
From: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
To: linux-activists@joker.cs.hut.fi
Reply-To: tytso@athena.mit.edu

I found the problem which caused "mcopy a:*.c ." to create files with
unreadable filenames.  The problem was that it was trying to convert the
uppercase MS-DOS filenames to lowercase, and this was failing because
tolower() is incorrectly defined in ctype.h: the plus sign should be a
minus sign.

diff -r1.1 ctype.h
31c31
< #define tolower(c) (_ctmp=c,isupper(_ctmp)?_ctmp+('a'+'A'):_ctmp)
---
> #define tolower(c) (_ctmp=c,isupper(_ctmp)?_ctmp+('a'-'A'):_ctmp)
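
To see the damage, work it out for 'A' in ASCII: with the plus sign,
tolower('A') yields 65 + (97+65) = 227, which is not a printable ASCII
character at all (hence the unreadable filenames); with the minus sign
it yields 65 + (97-65) = 97, which is 'a', as intended.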

If you are trying to port programs to Linux, you should apply this patch
to your ctype.h --- it could save you a lot of aggravation.  (I know
that the bug breaks flex at the very least, and probably many other
programs.)

------------------------------------------

I agree with Peter MacDonald's observation that it is extremely nice to
have only one compiler for Linux, and it would be a shame if one needed to
have both compilers around, depending on which compiler was used by a
developer.  At the very least, it would make the code uglier as people
put in porting #ifdef's.  gcc has a lot to recommend it, especially
since version 2.0 gives you g++ and Objective C basically for free.

I note that once Linux has paging, gcc will be able to work on 2 meg
machines, although granted, it will be slower than normal.  I don't know
what kind of speed hit gcc would take on a 2 meg machine with paging,
but perhaps it would be tolerable enough that we could still have just
one main compiler for Linux.

Also, memory is getting relatively cheap these days --- we're talking
maybe US$30 to US$40 per megabyte if your machine can take SIMMs.
Upgrading a machine from 2 meg to 4 meg doesn't cost *that* much money.
As long as the system will run gcc (albeit slowly), would this alleviate
concerns about insufficient support for 2 meg machines?

----------------------------------------

Along the lines of having only one compiler for Linux, I am currently
looking into how it might be possible to assemble the 16-bit binary
portions of Linux, so that we could just use gas to compile the boot
sector and setup code.  There are a bunch of issues to deal with,
including modifying gas to emit 16-bit object files, which doesn't look
that hard.

Another problem is that gas uses a "source, destination" convention,
while the as86 assembler seems to use a "destination, source"
convention.  If absolutely necessary, I can figure out some of the
unclear translations by looking at the object code in the boot sector
and matching it up against the opcodes in the gas sources, but this
seems rather crude and unpleasant.  Does someone have a quick
conversion chart between the two assemblers which they could send to
me?  This would save me a lot of dirty work.  Thanks!
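
As an illustration of the operand-order difference (take the as86
column with a grain of salt, since I am going from memory on its
exact syntax):

    gas (AT&T, source first)      as86 (Intel, destination first)
    movw  %ax, %bx                mov  bx, ax       ! bx := ax
    movw  $0x9000, %ax            mov  ax, #0x9000  ! ax := 0x9000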

						- Ted