Tech Insider					     Technology and Trends


		   Linux Activists Mailing List Archives

From: wirzeniu@klaava.Helsinki.FI (Lars Wirzenius)
Newsgroups: comp.os.linux
Subject: Stabilizing Linux
Summary: Stability is good, but development can continue regardless
Date: 6 Aug 92 12:54:41 GMT
Organization: University of Helsinki

Kenneth Falck raises the question of whether Linux should
stabilize a bit when 1.0 is released.  The same concern has been
raised by others earlier, as well.  (Another related question is
what exactly 1.0 should include, but I shan't go into that very
much, I'm currently more interested in more general qualities.)

There are good reasons for a bit of stability, at least for some
people.  Namely those who can't keep up with weekly kernel patches
and updates to other major system software, either because of
lacking skills or problems with getting new software, or won't
because it takes too much of their time.  Many of these people
would prefer something that they can just snarf from an ftp site,
or even buy from a distribution company, install in an afternoon,
use until the next major update comes in six months or a year, and
not be bothered about it until then. 

These people are what could be called ordinary users, although
they usually don't quite follow the stereotype for ordinary users,
you know, those people who only know enough of computers to turn
them on and to start one application.  (Perhaps they should be
called knowledgeable users, or something.  Never mind.)

Now it is true that Linux is a hacker's kernel: it is made by a
hacker for other hackers to use and hack on.  You also more or
less need to be a hacker to install and use it and especially
software for it.  However, Linux also has the potential for a
wonderful environment even for ordinary users, and I think that we
should do our best, or at least do something, to cater for these
people, if for no other reason but that it would be a pity if all
the hard work that has been put into Linux to not be useful for
"real work".

One major requirement for a stable system is a stable kernel.  Of
course, we need more than that to generate a real interest for
Linux among users.  We also need to make the system easy to
administer, which is one of Unix' weaker points, and most
importantly we need applications.  Unfortunately, there is
relatively little we can do about the applications, since there
aren't very many ordinary user-type applications for Unix around,
and we probably can't write all of them ourselves.  The best we
can do in this regard is to make sure that whatever there is runs
on Linux, at least as far as the source is available.  (Who knows,
if Linux stabilizes enough to become popular, we might even get
commercial vendors to port things to Linux, if it is easy enough.)

Administration, on the other hand, is something we can do
something about.  Since Linux does not have to be as compatible as
possible with other systems in this regard, or at least not as
much as for example 386BSD does, we have the opportunity to really
do something about it.  It may be a lot of hard work, but I feel
that it should be possible to make Linux almost as easy to
administer as DOS is at best, i.e.  almost no work for humans,
everything is automated.  Somebody said in gnu.misc.discuss (I
think) just today (or yesterday) that a similar idea had been
crushed inside Sun some years ago.  However, if the whole system
more or less changes completely once a month, there isn't very
much we can automate. 

In addition to generating more general interest towards Linux,
there are other advantages from a stabilization as well.  It would
give an opportunity to clean up the ftp sites, which are currently
in a state of mess -- there are binaries from January, which
probably aren't very useful anymore.  There are also binaries that
use old versions of the shared libraries, which aren't necessarily
available anywhere. 

It would also give the various documentation projects an
opportunity to catch up with the rest of the Linux development. 
One reason they are lagging behind is that when they move n steps
forward, the kernel and other programs move n*n steps. 

For a really stable system we also need to stabilize the major
system software, especially gcc and the libraries.  This shouldn't
be a major problem, as gcc is even now fairly stable, and with the
advent of shared libraries with jump tables it should even be
possible to install new libraries without recompilation or
relinking.

That's about 80 lines of reasons why we should stabilize Linux. 
There are also reasons why we shouldn't do it.  One is that Linux
is exciting to hackers partly because it evolves so much, and if
it stops evolving, they might get bored and go away.  Then we
wouldn't have stability, we would have a dead system.

Another thing against stability is bug fixes: there will always be
bugs.  Also drivers for new hardware, new types of filesystems,
and similar things will be developed, even after a stabilization. 
All of this means new versions, patches, multiple versions,
parallel versions and development, and all the other things that
are prevalent now and make Linux so much less unattractive for
ordinary users. 

The dilemma is therefore that all the developers and hackers want
a quickly developing system, and all the ordinary users (and
people who can't devote their free time to Linux) want a stable
system that won't evolve very quickly.

Well, I think we can have it both ways, with a bit of hard work. 
What I propose is this (getting to the point already, are we?):
We'll let the kernel evolve towards what we want 1.0 to be, at
least featurewise.  After we reach a point where all the features
we want are present, we freeze that version of the kernel as 0.99,
and refuse to put in new features (unless they are essential). 

After the kernel has been tested thoroughly, for a few weeks or
so, and all the major bugs have been fixed, we release it as 1.0. 
Then we consider this as the baseline, and create one official
release that is easy to install, easy to administer, contains
everything necessary and a lot of unnecessary but probably widely
useful things and package everything neatly.  

This package is what should be called Linux, not just the kernel. 
Compare with, say, SVR4: it is not just a kernel, but a whole
bunch of software.  Also the whole package is given its own
version number, instead of using the kernel's version number as is
currently done. 

The whole package will then be put into all the major Linux ftp
sites, and those sites are cleaned up so that there are no old
releases that confuse people.  This is then announced and
advertised as the official released version of Linux which should
be used by everybody but hackers and others who like to have
problems.  It is also advertised to remain that way for at least a
few months, so that the ordinary users will dare to try it out. 
No new functionality will be added during that time to the
official version, if possible.  If would be preferable to avoid
even bug fixes, unless they are of urgent nature, or at least
collect them into neat, well documented (so it is clear whether it
is necessary to install the fix, not everybody may suffer from the
bug) packages that are released once a month or so. 

In the mean time, all the hackers will be busy fixing things and
writing new things.  They'll release patches, complete new
versions, etc, just like now.  There is nothing that stops Linus
from releasing completely new versions of kernel, for instance.

When enough bug fixes, new features, etc have been developed,
_and_ thoroughly tested (i.e. the hackers have been using them for
several weeks without any necessary changes), a new release of the
complete Linux system can be released, say Linux 1.1 or something.

Under this scheme we would then have a more or less stable
official system with new, tested releases every six months, and a
possibility to use newer, unofficial and untested versions of
various pieces if you want to. 

The versions various hackers use most probably won't get their own
version numbers, as they are most likely to consist of bits and
pieces from everywhere, and probably nobody has exactly the same
system as everybody else.

So far I have been talking about one holy, official release of the
complete system.  This does not mean that I wish to forbid
alternative releases like we have now (the rootdisk, MCC, mj,
tamu).  Quite the contrary, I think they are useful and valuable. 
But I do think that we need one official version that we can point
at and say that "That is Linux".  One official version, with a
clearly defined contents and version number for each component and
the whole, which is easily findable, is the way the ordinary users
need and want it. 

The big problem with this scheme is that we need somebody to
coordinate things and to integrate and package everything.  This
is a lot of hard, unpleasant work which most people would rather
avoid.  Actually, I think that the ABC-Release is more or less
what this is all about.  However, I have not heard very much of it
lately. 

This article is already too long, but I will summarize the major
points:

-	non-hackers need stability and ease of use
-	hackers want a lot continuous, exciting development
-	we can have both, if we create one stable official release
	which contains "everything" every six months, and have a
	lot of "testing" or "unofficial" versions that non-hackers
	are discouraged from using

So what do you think, is this the way to go?

--
Lars.Wirzenius@helsinki.fi

Newsgroups: comp.os.linux
Path: sparky!uunet!haven.umd.edu!darwin.sura.net!
zaphod.mps.ohio-state.edu!malgudi.oar.net!caen!destroyer!
news.iastate.edu!pv141b.vincent.iastate.edu!sheldon
From: shel...@iastate.edu (Steve Sheldon)
Subject: Re: Stabilizing Linux
Message-ID: <sheldon.713197562@pv141b.vincent.iastate.edu>
Sender: n...@news.iastate.edu (USENET News System)
Organization: Iowa State University, Ames IA
References: <1992Aug6.125441.22427@klaava.Helsinki.FI>
Date: Fri, 7 Aug 1992 14:26:02 GMT
Lines: 73

In <1992Aug6.125441.22...@klaava.Helsinki.FI> 
wirze...@klaava.Helsinki.FI (Lars Wirzenius) writes:

>Kenneth Falck raises the question of whether Linux should
>stabilize a bit when 1.0 is released.  The same concern has been
>raised by others earlier, as well.  (Another related question is
>what exactly 1.0 should include, but I shan't go into that very
>much, I'm currently more interested in more general qualities.)

 I have to agree with Lars completely on this.  He brings up some
very good points.

>We'll let the kernel evolve towards what we want 1.0 to be, at
>least featurewise.  After we reach a point where all the features
>we want are present, we freeze that version of the kernel as 0.99,
>and refuse to put in new features (unless they are essential). 

 Yes, we need to specify what features would be nice in this v1.0.
And then not deviate too much from the feature list, just perform
bug fixes, to help stablize it.

 Obviously, people should be encouraged to work on new featureful
ideas, but keep these as a parallel development to be included in
v1.1, or whatever.

 I'd like to see something out by Jan93, maybe.  I think it advisable
that we not get into a situation like Commodore's AmigaDOS 2.04.  This
started out as a v1.4 which will be out by Spring '89, and then into
a v2.0 for final release in '90, and ended up as 2.04 and didn't get
released to the general public until fall '91.  While I was not an
Amiga developer, it sounded like it had a creeping feature syndrome, and
they kept adding stuff and testing became quite a burden.

 The problem with this scenario, is that while people are waiting and waiting
for the stabilized release, they are more likely to loose interest, and go
do something else.  I'd like to bait & hook a few more people, and show
them the wonders of *nix, which I am just now getting to know myself.

>After the kernel has been tested thoroughly, for a few weeks or
>so, and all the major bugs have been fixed, we release it as 1.0. 
>Then we consider this as the baseline, and create one official
>release that is easy to install, easy to administer, contains
>everything necessary and a lot of unnecessary but probably widely
>useful things and package everything neatly.  

 test...test...test...  It's a tough job, but necessary.

>This package is what should be called Linux, not just the kernel. 
>Compare with, say, SVR4: it is not just a kernel, but a whole
>bunch of software.  Also the whole package is given its own
>version number, instead of using the kernel's version number as is
>currently done. 

 Yes, most definitely.  A package which all you need to do is stick a 
boot disk into the machine, and walk you thru the install.  Much like
any of these commercial Unixes, SCO, Dell, etc.
 I'd love to see a full package which included the administration utilities,
the documentation, manual pages, networking software, X386, etc.  Only
install what you want/need.

 I'd also like to see as part of this "release" a couple of "manuals"
which the user could print out from the DOS side.  Something that
is nicely formatted, and ready to copy to the printer.  An "installation
guide", and a "user's guide".  The installation guide would go step
by step thru the installation, and the user's guide would describe just
what is included in the package, and go over some of the basics of unix, 
as well as basic setup and administration.  There are thousands of pages
already written on Unix, and I don't expect we could cover everything.
But certainly the basics, and the things specific to Unix.
 
 Perhaps it would be a good idea to continue a discussion on what features
we should realistically expect to include in v1.0?  And work towards that
goal.

From: zlsiial@uts.mcc.ac.uk (A. V. Le Blanc)
Subject: Re: Stabilizing Linux
Date: 7 Aug 92 18:35:30 GMT
Reply-To: LeBlanc@mcc.ac.uk (A. V. Le Blanc)

In article <1992Aug6.125441.22427@klaava.Helsinki.FI> 
wirzeniu@klaava.Helsinki.FI (Lars Wirzenius) writes:
>There are good reasons for a bit of stability, at least for some
>people....
>These people are what could be called ordinary users....
>Now it is true that Linux is a hacker's kernel: it is made by a
>hacker for other hackers to use and hack on.  You also more or
>less need to be a hacker to install and use it and especially
>software for it....

It seems to me that there are two groups of people who might use Linux:
hackers (like me, perhaps) and Lars's ordinary users.  Now I have tried
to produce a version of Linux for ordinary users, or as a starting point
for hackers: the MCC interim version.  I really don't think it is
possible to have a state-of-the-art Linux system unless you update it
every week, and it is (a) not possible for me, alone or with help, to
produce a new, properly documented, version of Linux (in the sense of
kernel, utilities, compiler, X, ...) every week, and (b) not a good
idea for ordinary users to upgrade every week even if they could.

There are now 3 such versions of Linux, all of which have strong points
(and weak points (blush)).  Now, I have always seen this as a Good
Thing, in that an ordinary user can get the MCC or TAMU or MJ version
to begin with, follow on with that or switch to another at odd
intervals, and have a reasonable system.  I confess I don't understand
why those who object to or have no time for hacking cannot take one
of these and stick to it.  Perhaps one of these people can explain this
to me.  Obviously, you will have to wait for bug fixes and new features,
but that's life, n'est-ce pas?

>In addition to generating more general interest towards Linux,
>there are other advantages from a stabilization as well.  It would
>give an opportunity to clean up the ftp sites, which are currently
>in a state of mess -- there are binaries from January, which
>probably aren't very useful anymore.  There are also binaries that
>use old versions of the shared libraries, which aren't necessarily
>available anywhere. 

Yes, cleaning up and reorganising the ftp sites is a big need.  Ted
seems to be working on this at tsx-11 (when he has a few minutes free
from Kerberos), and there have been some changes at nic as well.
Unfortunately, it takes a lot of effort keeping an ftp site up to
date.  And, of course, and very sadly, banjo.concert.net is shutting
down as an ftp site on August 14.

>Another thing against stability is bug fixes: there will always be
>bugs.  Also drivers for new hardware, new types of filesystems,
>and similar things will be developed, even after a stabilization. 
>All of this means new versions, patches, multiple versions,
>parallel versions and development, and all the other things that
>are prevalent now and make Linux so much less unattractive for
>ordinary users. 

Parallel versions are a disaster.  I think the rapid development of
the kernel has actually discouraged parallel versions, which would
quickly fall behind.  Moreover, Linus's willingness to incorporate
so much into the kernel has contributed substantially to the absence
of the plethora of mutually incompatible patches which plagued MINIX.

>The whole package will then be put into all the major Linux ftp
>sites, and those sites are cleaned up so that there are no old
>releases that confuse people.  This is then announced and
>advertised as the official released version of Linux which should
>be used by everybody but hackers and others who like to have
>problems.

Putting together such a package is a major headache, especially if
(like me) you don't trust anything you haven't compiled yourself.
Will you fix bugs in all the binaries as well?  This will make
day-to-day ftp's and updates necessary for those who abhor insects.

>Under this scheme we would then have a more or less stable
>official system with new, tested releases every six months, and a
>possibility to use newer, unofficial and untested versions of
>various pieces if you want to. 

I had the impression that this was what Linus was trying to do with
his weekly patches.  Unfortunately xxxd at patchlevel 4 tends to be
more stable and bug-free than xxx with no patches, though we must
admit that the bugs in Linux are mostly pretty feeble these days,
except of course in the new features.

>The big problem with this scheme is that we need somebody to
>coordinate things and to integrate and package everything.  This
>is a lot of hard, unpleasant work which most people would rather
>avoid.  Actually, I think that the ABC-Release is more or less
>what this is all about.  However, I have not heard very much of it
>lately. 

Am I correct in assuming that the withdrawal of banjo.concert.net
as an anonymous ftp site will effectively do away with the ABC release?

>This article is already too long, but I will summarize the major
>points:
>
>-      non-hackers need stability and ease of use
>-      hackers want a lot continuous, exciting development
>-      we can have both, if we create one stable official release
>       which contains "everything" every six months, and have a
>       lot of "testing" or "unofficial" versions that non-hackers
>       are discouraged from using
>
>So what do you think, is this the way to go?

I would prefer to see the unofficial versions be stable and comprehensive,
while the official versions would keep moving.  I don't think this is
very far in spirit from Lar's suggestion, though it sounds the opposite
in words!  What I mean is this: a solid, stable version can be used
by the ordinary users, while the rapidly moving kernel will be drawing
us hackers and innovators.  In fact, this is much the way things are
at the moment.  I confess, I don't understand why some people are
unhappy about it, except, of course, for the problem of disorganised
ftp sites -- including mine.

Path: sparky!uunet!dtix!mimsy!ra!tantalus!eric
From: e...@tantalus.dell.com (Eric Youngdale)
Newsgroups: comp.os.linux
Subject: Linux CDROM (Was stabilizing Linux)
Message-ID: <3248@ra.nrl.navy.mil>
Date: 8 Aug 92 03:50:06 GMT
References: <1992Aug6.125441.22427@klaava.Helsinki.FI> 
<sheldon.713197562@pv141b.vincent.iastate.edu>
Sender: use...@ra.nrl.navy.mil
Organization: Naval Research Laboratory
Lines: 94

In article <sheldon.713197...@pv141b.vincent.iastate.edu> 
shel...@iastate.edu (Steve Sheldon) writes:
> I'd like to see something out by Jan93, maybe.  I think it advisable
>[...]
> Perhaps it would be a good idea to continue a discussion on what features
>we should realistically expect to include in v1.0?  And work towards that
>goal.

	I have one bit of news that would certainly be relevant, and could help
to ensure that we do not dawdle too much.  Robert Bruce, from Walnut Creek
CDROM wants to make a Linux/386BSD/GNU/X11 CDROM (They currently have a GNU/X11
disc (source + sparc binaries), among others).  He indicated to me that it will
contain Linux binaries and source code, 386bsd binaries and source code, GNU
source code, and the X11 distribution, all on one disc (provided that it all
fits).  I am not sure of the exact timetable, but I think that he is looking
toward the fall sometime for a release date.

	To me this sounds like an excellent opportunity to get our collective
acts together and get a 1.0 release out the door.  About a month ago, I asked
the question "What is left to be done for 1.0", and we got a fairly spirited
discsussion going.  There were a couple of major points, and an abbreviated
list was:

	(1) The new extfs being considered well-tested and no known bugs. 
	(2) Sharable libraries that are downward compatible(jump tables).
	(3) dosfs being reliable enough to be considered well tested .
	(4) All kernel/library hooks required for TCP/IP.  (post 1.0??).
	(5) Filesystem with larger block sizes (4Kb).
	(6) An isofs to read the ISO9660 CD-ROM format.
	(7) Some kind of logo to print on the cdrom disc :-)

As I look at the list, 1-3 are all being tested, and are well on the way to
becoming stable.  Number 4 was in the post 1.0 list, but apparently the hooks
are already present in 0.97pl2.  Number 5 is in the works, but is not really
essential for a 1.0 release.  Number 6 was also in the post 1.0 list, but if we
are going to have a linux cdrom for linux 1.0, then we sure as heck need the
filesystem for the 1.0 release (the CDROM code is in beta-testing).  Where does
this leave us?  In general I think we are in pretty good shape on the kernel
end.  We just need to continue testing.  It is not clear to me that we really
*need* any features other than those I have already mentioned for a 1.0
release.  The cdrom code is not part of the official distribution yet, but this
is not a big issue.

	I threw in item 7 as sort of a joke, but I am only half kidding.  I
have the GNU/X11 disc in front of me (the one with sparc binaries), and it has
the X logo, the Sun users group logo, and the GNU logo (basically a Gnu :-)).
It would be neat to have a Linux logo of some kind for the disc, although we
really do not need it.  I suspect that there are people with some graphics arts
skills who can design a really neat logo of some kind and translate it into the
postscript language.  If anyone wants to give this a shot, describe your idea
to the list, and see how the rest of us like it.  If we get more than one, we
might even vote on it.  It would be best to keep it to one or two colors.  The
only reason that I said postscript was so the rest of us could easily see it
:-).

	On a more serious note, one area of concern that I have is the status
of the archives.  A cdrom distribution will probably just be a snapshot of one
of the archives, and since we are losing banjo, I suppose that tsx-11 will be
'it'.  If there are components (i.e. man pages, mcc) that should be included
but are not on tsx-11, then I should be informed of this, so that I can pass
this along to Robert Bruce.  We should make an effort to make sure that *all*
stale binaries have been expunged from tsx (or wherever), and replaced with
something current.  We should make sure that all stale readme files on tsx are
updated.  My feeling is that the archive maintainers already do enough, and we
should not have to ask them to take on this additional burden by themselves.
Instead, we should each be diligent about finding and reporting stale materials
in the archives.  It might even be a good idea to create a parallel tree, and
only move over those things that are known to be good.

	While we are on the subject of cdroms under linux, I will release
beta-2 of the Linux CDROM distribution sometime over the weekend.  I basically
want to try out the kernel patches with 0.97pl1.  There have been a number of
improvements:

	*) SCSI error handling/correction bugs fixed.
	*) SCSI cdrom code now uses scatter/gather.
	*) Rock Ridge extensions to ISO9660 standard are now in place
	   The RR extensions allow for:
		- Longer filenames.  My filesystem limits to 256 characters.
		- Filenames can have mixed case, standard unix syntax.
		- Values supplied for file modes, nlinks, uid/gid.
		- different times for atime, ctime, mtime.
		- Symbolic links.
		- Block and character devices.
		- Deep directories (iso9660 limits depth to 8).
	(Robert Bruce has indicated that he intends to use Rock Ridge when
	making the Linux/386BSD disc).


-Eric 


--
Eric Youngdale
e...@tantalus.nrl.navy.mil





From: wirzeniu@klaava.Helsinki.FI (Lars Wirzenius)
Subject: Re: Stabilizing Linux
Date: 8 Aug 92 11:59:06 GMT

eps@rieska.oulu.fi writes:
>In article <1992Aug6.125441.22427@klaava.Helsinki.FI>
>wirzeniu@klaava.Helsinki.FI (Lars Wirzenius) writes:

>I'm afraid that if linux is made easy to administrate, the normal
>unix-tricks wouldn't work anymore. 

Good point, but I think we can build something that works on top of the
standard Unix way and is just an insulating layer for people who don't
want to learn the nitty gritty things.

>Hm. If there are many parallel versions, then there will certanly be
>problems trying to merge different features to one kernel. It can lead
>to frustration and boredom. Just my 0.02 FMk worth.

Yup, and that's what we more or less have now.  Hackers seem to like it,
and even prefer it, though.  Ordinary users definitely don't.

>I disagree. I understood that to mean that ftp-site would contain
>only the 'official' releases. However: what makes linux so attractive
>(at least for me) is it's variety: you can get what tools and parts
>you wish.

Nope.  The ftp-sites would continue to carry all the hackish versions
and things as before (how else are we going to develop things further?). 
The cleaning up part referred to things like the ancient make that only
runs as root, or old binaries that no longer work because of changed
system calls numbers, or because the use old shared libraries which
nobody has anymore, and things like that.  I don't even mind if these
are kept on-line, but they mustn't be intermingled with the stuff meant
for non-hackers. 

>Why 'releases'? The current situation with individual files is nearly o.k.
>Release requires more work to complete that a patch or package?

I was talking about "releases" as more or less complete systems like the
rootdisk+bootdisk combo, or the MCC interim release, which are what the
"official release" would be like (in spirit if not in implementation).  

I don't think it is ok (for ordinary users, hackers would probably be
comfortable with picking up each byte from a different file, and
hand-entering them into a debugger to get something bootable :) to have
all the "necessary" pieces (boot+root+gcc+X+emacs+make+awk+...) in
different packages.  It is just too much work.

>Hm. But if the interfaces in kernel are clean enough (thay almost are
>already) one can just take clean 'official' kernel and apply new packages
>and patches to it. If interfaces (like new IRQ handling is) are clean
>and general enough, the collisions could be handled easily.

The kernel interfaces are irrelevant.  I was talking about things like
making sure that all programs are configured correctly, they all use the
same shared libraries, all bug fixes are installed, that all the
documentation is up to date, and that everything forms a coherent whole.
Also, it would be pretty hard work to collect all bug reports and fixes
and keep track of all the new development.

>Anyway: my dream:
>One takes an clean kernel from ftp-site, some packages (some device drivers,
>nfs support, support for network XYZ and video-card supprt for XXXZ)
>and runs an configuration program, which examines all the packages,
>sees if they collide and if not, it builds the configuration files.
>Then it is only make and go. 

The trouble is that ordinary users don't want to collect a large number
of packages to only get the operating system.  They want to get one
package, install that, and have all the basic stuff working.  Then they
will go and get a couple of more packages, each of which contains an
application (or something similar), install those, and start working. 
It would probably even be preferable if some applications were part of
the basic package.  (It of course needs to be possible to install only a
subset of the basic package, and to leave out device drivers etc that
aren't necessary.)

One thing that has struck me is that my vision of an "ordinary user's
Linux" is somewhat similar to the way Dell's Unix is represented on the
net.  Dell has taken the basic SVR4 and made it easy to install and
administer and also added all the freeware they have found, and made
that as easy to install.  It would be great if we could get something
similar for Linux, i.e. one system which has _everything_ one could ever
dream about in a basic system.  (I don't know if Dell actually is like
this, I've never even seen it.)

--
Lars.Wirzenius@helsinki.fi

Newsgroups: comp.os.linux
From: f6930910@scheme.cs.ubc.ca
Subject: Re: Linux CDROM (Was stabilizing Linux)
Reply-To: f6930910@scheme.cs.ubc.ca
Organization: The Internet
Date: Sun, 9 Aug 1992 00:34:22 GMT

eric@tantalus.dell.com (Eric Youngdale) writes:

|	To me this sounds like an excellent opportunity to get our collective
|acts together and get a 1.0 release out the door.  About a month ago, I asked
|the question "What is left to be done for 1.0", and we got a fairly spirited
|discsussion going.  There were a couple of major points, and an abbreviated
|list was:
|
|	(1) The new extfs being considered well-tested and no known bugs. 
|	(2) Sharable libraries that are downward compatible(jump tables).
|	(3) dosfs being reliable enough to be considered well tested .
|	(4) All kernel/library hooks required for TCP/IP.  (post 1.0??).
|	(5) Filesystem with larger block sizes (4Kb).
|	(6) An isofs to read the ISO9660 CD-ROM format.
|	(7) Some kind of logo to print on the cdrom disc :-)
|....
|--
|Eric Youngdale
|eric@tantalus.nrl.navy.mil

Firstly, I continue to be amazed at Linux's functionality.  Compiling
the kernel while playing xtetris, all on freely distributable software
is almost unbelievable.  I followed this thread the last time it came around,
and will say again what I said then.  Linux has a lot further to go than
this list to be an operating system.  A kernel does not an operating
system make.

Linux needs a single, complete SOURCE TREE that we can point at and
say "That's Linux".  The MCC interim release was called 'interim'
for a reason.  I think it is great that there are lots of functional
systems floating around, but that isn't an operating system.  That
is a kernel with lots 'o software scattered about.  I don't think that
a snapshot of tsx-11 constitutes an operating system.

To become an operating system, Linux needs to look like Berkeley's
Net-2 tapes, or the USL source tree, or the VMS source tree, or
any other complete system.  This will be a major undertaking
(seeing to it that there are manuals for everything would alone
be a major undertaking).

The Linux kernel is the most amazing peice of software I have
ever witnessed -- but it is not an operating system.

Just my $0.02.

- Ken

From: nelson@crynwr.com (Russell Nelson)
Subject: Stabilizing Linux 
Date: Sun, 09 Aug 92 04:01:12 GMT

In article <1992Aug6.125441.22427@klaava.Helsinki.FI> 
wirzeniu@klaava.Helsinki.FI writes:

   There are good reasons for a bit of stability, at least for some
   people.  Namely those who can't keep up with weekly kernel patches
   and updates to other major system software, either because of
   lacking skills or problems with getting new software, or won't
   because it takes too much of their time.

Or because they'd actually like to *use* Linux.  I find that some
programs I want to hack at, while other programs I just want to *use*.

I write a lot of Ethernet drivers.  I'd like to write some for Linux.
But that means becoming a kernel hacker.  I'd be much happer with a
Linux that had runtime loadable device drivers.  You see?  I just
want to be a user as far as the kernel is concerned.

   That's about 80 lines of reasons why we should stabilize Linux. 
   There are also reasons why we shouldn't do it.  One is that Linux
   is exciting to hackers partly because it evolves so much, and if
   it stops evolving, they might get bored and go away.  Then we
   wouldn't have stability, we would have a dead system.

The hackers are free to hack.  The rest of us want a usable Linux
with a known bug list.

   Another thing against stability is bug fixes: there will always be
   bugs.  Also drivers for new hardware, new types of filesystems,
   and similar things will be developed, even after a stabilization. 
   All of this means new versions, patches, multiple versions,
   parallel versions and development, and all the other things that
   are prevalent now and make Linux so much less unattractive for
   ordinary users. 

Right, as I said above.

   and refuse to put in new features (unless they are essential).

I agree.  You can't get rid of bugs if you're always adding new features.

   the ordinary users will dare to try it out.

Right!  I don't want to use a Linux that 1) will probably crash, 2) take my
filesystem down, and 3) has no fsck.

   But I do think that we need one official version that we can point
   at and say that "That is Linux".  One official version, with a
   clearly defined contents and version number for each component and
   the whole, which is easily findable, is the way the ordinary users
   need and want it.

Yes, and I think we need Linus to bless it.

   The big problem with this scheme is that we need somebody to
   coordinate things and to integrate and package everything.  This
   is a lot of hard, unpleasant work which most people would rather
   avoid.  Actually, I think that the ABC-Release is more or less
   what this is all about.  However, I have not heard very much of it
   lately. 

Right, that's why I propose a paid Linux coordinator in a separate
message.  We need someone whose first priority is Linux!   This is
not to denigrate all the volunteers.  They have a different first
priority, to make a living (or stay in school), so that they can
continue to donate time to Linux.

And, in view of the AT&T--BSDI--CSRG lawsuit, I think we would do
well to have the coordinator look after legal issues also.

-russ < nelson@crynwr.com>  I'm proud to be a humble Quaker!
Crynwr Software            Crynwr Software sells packet driver support.
11 Grant St.               315-268-1925 Voice
Potsdam, NY 13676          315-268-9201 FAX

From: wirzeniu@klaava.Helsinki.FI (Lars Wirzenius)
Subject: Re: Stabilizing Linux
Date: 9 Aug 92 10:48:34 GMT

LeBlanc@mcc.ac.uk (A. V. Le Blanc) writes:
>I really don't think it is possible to have a state-of-the-art Linux
>system unless you update it every week,

A state of the art system (which means rapidly changing) is not what
ordinary users are looking for, I think.  They prefer stability, ease of
use, ease of installation, and something that works with the minimum of
trouble.  Basically, they want something they can more or less ignore
most of the time.

>There are now 3 such versions of Linux, all of which have strong points
>(and weak points (blush)).  Now, I have always seen this as a Good
>Thing, in that an ordinary user can get the MCC or TAMU or MJ version
>to begin with, follow on with that or switch to another at odd
>intervals, and have a reasonable system.  

Apart from the fact that multiple, competing releases cause a bit of
confusion, I too agree that different releases is a good thing (as long
as they are really different, not just basically the same thing in
different packages and minor changes; I don't think the current releases
are that). 

>I confess I don't understand why those who object to or have no time for
>hacking cannot take one of these and stick to it.  Perhaps one of these
>people can explain this to me.  Obviously, you will have to wait for bug
>fixes and new features, but that's life, n'est-ce pas?

I don't think that waiting for bug fixes and new features is the
problem.  Judged from the e-mail respone I have received, there are a
number of users that are just uncertain of what the various
possibilities are, what they contain, what their relative strengths are,
and which one would best serve their needs.  This situation could be
improved upon by providing better documentation, but it'd still be
confusing.  

Also, with my vision of a stable 1.0 there wouldn't be much need for bug
fixes, nor for new features or drivers, as it would already be
well-tested, and would contain all the normally needed features and
drivers.  (Yes, I am optimistic and I do a lot of day-dreaming. :)

>Putting together such a package is a major headache, especially if
>(like me) you don't trust anything you haven't compiled yourself.
>Will you fix bugs in all the binaries as well?  This will make
>day-to-day ftp's and updates necessary for those who abhor insects.

I understand that it is not easy to do this (if it were, it wouldn't
even be worth discussing, I'd just do it :).  As I noted somewhere
earlier, bug fixes should be released relatively seldom, say once a
month.  The stable release needs to be tested with something else than
Linus' method, so that we can get the bugs out before it is released.  I
assume that having a few hundred persons (including hackers, hopefully)
use it for a few weeks should be enough of testing.

>What I mean is this: a solid, stable version can be used
>by the ordinary users, while the rapidly moving kernel will be drawing
>us hackers and innovators.  

Sigh, this more or less summarizes my 180+ lines.  I don't think I'm
destined to become a writer. :)

>In fact, this is much the way things are at the moment.  

I have got the opposite impression, but I could well be wrong, since I
haven't studied the various releases.  However, unless I have
misunderstood, the various releases don't address the issue of
administration, which can be a hindrance for the acceptance of Linux (or
Unix in general).  I guess I should take a look at all the releases
before claiming that status quo is not good enough.

>I confess, I don't understand why some people are
>unhappy about it, except, of course, for the problem of disorganised
>ftp sites -- including mine.

Disorganized FTP sites is a large part of the problem.  Lacking or
difficult to find information is another large part.  It is also
possible that there are too many releases, it is too difficult to
choose.  If one release (which contained most everything, but would be
easily configurable to contain just the things needed) were promoted as
the official one, it would certainly make it easier.

--
Lars.Wirzenius@helsinki.fi

Path: sparky!uunet!olivea!mintaka.lcs.mit.edu!bloom-picayune.mit.edu!daemon
From: ty...@ATHENA.MIT.EDU (Theodore Ts'o)
Newsgroups: comp.os.linux
Subject: Re: Stabilizing Linux
Message-ID: <1992Aug9.192757.27571@athena.mit.edu>
Date: 9 Aug 92 19:27:57 GMT
Sender: dae...@athena.mit.edu (Mr Background)
Reply-To: ty...@ATHENA.MIT.EDU (Theodore Ts'o)
Organization: The Internet
Lines: 55

There has been an awful lot of discussions about ways to "Stablize
Linux" --- whether this is good or bad --- the need for an "Official
Release", and talk about a "Linux Committee".  I've seen some stuff I
agree with, and some stuff that I disagree with.  Not surprisingly, the
people with whom I agree with tended to be the people who were putting a
lot of hard work into Linux already --- and the people whom I disagreed
with were people whom, by and large, their names I did not recognize as
Linux developers.

You should all keep in mind that the people who have been putting in
their time to make Linux better do so out of a labor of love; Linux is
freeware, and it is relatively rare for freeware to be of the quality
that Linux already is.  I hear a lot of calls about ways people think
Linux should be made better; but what I don't hear is anyone
volunteering to actually do the work.  

This is why a Linux committee would not terribly productive; if it's
composed of people who do nothing more than demand that volunteers spend
even more of their time working on it, so that it becomes "acceptable"
--- it won't work, and it will only breed resentment on both sides.  If
it is composed of those who are actually doing the work --- well, those
who are already doing the work currently have the say about what their
work produces; why have a committee to formalize such things?

What we're seeing is the results of Linux's success.  It has proven to
be so successful that people are forgetting that it is freeware, and
STILL IN BETA TEST; that it is rare for you to even get what you pay
for, let alone exceed what you paid for it (which is what Linux has
clearly done).  Instead, people are assuming that it should be a
turn-key system; and Someone should put together better system
administration tools than what currently exists on the market.

Guys (and Gals) ---- release engineeering is hard work, and in general
no fun.  You should be thanking those people who have been putting
together the MCC release, the TAMU release, or the MJ release, not
complaining about how the maintainers should be putting even more of
their free time into it.  If you're not satisifed with the quality of
those releases, put one together yourself!  If it's better than all the
others, everyone will start using it.

As for system administration tools --- that is still an unsolved
problem; I have yet to see a general purpose system administrator's tool
that works in all environments; handles everything that a potential
sysadmin might want to do; and doesn't get in the way of an experienced
administrator.  Why are people demanding that Linux provide a solution
to an unsolved research problem?

In summary, if you're not satisfied with how Linux is progressing, put
in the time and effort to improve the areas you are complaining about.
If you can't do that, rest assured that many of us are aware of the
shortcomings of Linux, and some of us are wondering how to find the time
to address the deficiency.  Flaming about it, however, isn't going to
help.  Talk is cheap; actually doing something about it is harder.

					- Ted

From: jwinstea@jarthur.claremont.edu (Jim Winstead Jr.)
Subject: Re: Stabilizing Linux
Date: 9 Aug 92 23:56:19 GMT

In article <1992Aug9.192757.27571@athena.mit.edu> 
tytso@ATHENA.MIT.EDU (Theodore Ts'o) writes:
>
>This is why a Linux committee would not terribly productive; if it's
>composed of people who do nothing more than demand that volunteers spend
>even more of their time working on it, so that it becomes "acceptable"
>--- it won't work, and it will only breed resentment on both sides.  If
>it is composed of those who are actually doing the work --- well, those
>who are already doing the work currently have the say about what their
>work produces; why have a committee to formalize such things?

One quite big point that you forgot:  Getting a committee of people to
agree on anything in a reasonable timespan is damned near impossible.
The Linux Standards Committee is the best example of this - not a
single *final* draft has been issued forth, and even the rough drafts
have huge problems with them that nobody seems willing to discuss.

If a committee is formed, decisions will still be ultimately made by
individuals.

>As for system administration tools --- that is still an unsolved
>problem; I have yet to see a general purpose system administrator's tool
>that works in all environments; handles everything that a potential
>sysadmin might want to do; and doesn't get in the way of an experienced
>administrator.  Why are people demanding that Linux provide a solution
>to an unsolved research problem?

Better yet, why are people demanding it without doing anything about
it?  I still have not seen a good 'adduser' utility for Linux - a
crude one (meaning no fancy curses interface) shouldn't take more than
a couple of evening programming sessions to write.  A user-friendly
one (read: fancy curses interface) shouldn't be that much harder.

As it is, the only adduser tool I've seen is one that claims to
have been written during the commericals of a TV show, and it looks
like it.  Hell, even the passwd/chsh programs out there aren't the
greatest.

A fancy sysadmin tool that did everything but make coffee would be
great, but we don't even have decent simple admin tools.

>in summary, if you're not satisfied with how Linux is progressing, put
>in the time and effort to improve the areas you are complaining about.

I couldn't have said it better myself, and I'm surprised it's taken
this long for someone to say it.

-- 
                                    +    Jim Winstead Jr. (CSci '95)
                                    |            Harvey Mudd College
                                    | jwinstea@jarthur.Claremont.EDU
                                    + This is all my words.  Honest!

From: jhelberg@nl.oracle.com (Joost Helberg)
Subject: Re: Stabilizing Linux
Date: 10 Aug 92 08:26:56 GMT


Stabilizing Linux is not needed: it is far more stable than Unix's
from $100 to $4000.

Stabilizing Linux's development is not needed too! It means calling the
rapid development a halt, which is no good.

What do we need to stabilize?

Example: de we need to stabilize the GNU development of utilities? (i.e.
fileutils, shellutils etc.)

NO.

They come around every now and then, it is always clear why a new
release is put forward and the quality is good.

This is not disturbing to normal users (the ones with only two fingers
and two left hands), becaus they do not compile sources etc..

Our (software people) job is to lead the stream of sources/products to 
these people/customers and to give them usable products. It is the
developer/porter who chooses which update come in and which doesn't.

The important things about projects like LINUX are:
  1) It must be clear what changes are made between releases in order
     to make the decision to use or not to use the update.
    
  2) every new release must be available, totally and if possible in
     diff, on easy to reach places. Mailservers etc..

  3) backwards compatibility is pretty important, but if performance or
     functionality is violated, those prevail. e.g. the new extfs differs
     from the old, so why not implement it as a second seperate file-system,
     and not supporting the old extfs?

Let the worlds software professionals put things together and let them
sell their skills in stead of selling software!

Take e.g. a customer: an office with 10 people working in an administrative
environment. These people need editing, text-processing and some basic
database work.

Offer them a Linux system with emacs/TeX/vi etc. and public domain database
stuff and you're ready for less  than $10.000 worth of software/hardware.

Then the task of the software professional starts: installing, merging, 
implementing ad hoc solutions and keeping up with new releases.

Fun! And (for the price) exceptionally attractive for customers.

--
   Joost Helberg                                Rijnzathe 6
   jhelberg@oracle.nl                           NL-3454 PV De Meern
   jhelberg@nl.oracle.com                       The Netherlands

   Oracle Europe BV                             Product Line Development        
   Phone: +31 3406 94211                        Fax:   +31 3406 65609

From: wirzeniu@klaava.Helsinki.FI (Lars Wirzenius)
Subject: Re: Stabilizing Linux
Date: 10 Aug 92 14:49:28 GMT

jhelberg@nl.oracle.com (Joost Helberg) writes:
>Stabilizing Linux is not needed: it is far more stable than Unix's
>from $100 to $4000.

Possibly.  But is it stable enough?

BTW, with stable I've been talking about a system that doesn't change
every day, or if it does, that it doesn't require the user to update his
system.  I don't mean a system that doesn't change.

>Stabilizing Linux's development is not needed too! It means calling the
>rapid development a halt, which is no good.

This is not what I proposed.

>What do we need to stabilize?

We need to stabilize a complete usable system, and a few application
programs for it.  This does not need that their development is going to
stop, it means that they need to be released in such a way that user's
can just get them and install them and use them and then not have to do
anything about the system for a while.

>This is not disturbing to normal users (the ones with only two fingers
>and two left hands), becaus they do not compile sources etc..

No, they usually don't compile sources, but they definitely do install
things on their computers.  Which, currently for Linux often means
compiling sources.

>Our (software people) job is to lead the stream of sources/products to 
>these people/customers and to give them usable products. It is the
>developer/porter who chooses which update come in and which doesn't.


>The important things about projects like LINUX are:
>  1) It must be clear what changes are made between releases in order
>     to make the decision to use or not to use the update.
>    
>  2) every new release must be available, totally and if possible in
>     diff, on easy to reach places. Mailservers etc..
>
>  3) backwards compatibility is pretty important, but if performance or
>     functionality is violated, those prevail. e.g. the new extfs differs
>     from the old, so why not implement it as a second seperate file-system,
>     and not supporting the old extfs?

I agree.

>Let the worlds software professionals put things together and let them
>sell their skills in stead of selling software!

I agree, although this is a flammatory topic.  (Followups on this part
should be taken to gnu.misc.discuss or something.)

> [ software professional installs and administers and updates system
>   for corporate customer ]

That is one scenario.  What about people who want to use Unix at home,
and don't want to pay for a consultant to come and install it?  They can
buy a commercial Unix and install it more or less without pain, why
should Linux be more difficult?  (Note: It reportedly is not that
difficult even as it is.)

--
Lars.Wirzenius@helsinki.fi

From: wirzeniu@klaava.Helsinki.FI (Lars Wirzenius)
Subject: Re: Stabilizing Linux
Date: 10 Aug 92 15:24:51 GMT

tytso@ATHENA.MIT.EDU (Theodore Ts'o) writes:
>There has been an awful lot of discussions about ways to "Stablize
>Linux" --- whether this is good or bad --- the need for an "Official
>Release", and talk about a "Linux Committee".  I've seen some stuff I
>agree with, and some stuff that I disagree with.  

I have not proposed a committee, only an "official release".  I was
trying to convey an opinion, or perhaps a dream, of how Linux could
(perhaps should) be from a user's point of view.  From a user's point of
view, especially from a confused newbie's point of view, it is better to
have one "official release" than several different releases.  IMHO, of
course.

I doubt that a committee is the way to go, the current method of
discussing things more or less openly and then seeing what the various
volunteers do seems to work fairly well.  Which is what at least I was
trying to do (albeit in too much of an authoritative tone, I'm afraid). 

>You should all keep in mind that the people who have been putting in
>their time to make Linux better do so out of a labor of love; Linux is
>freeware, and it is relatively rare for freeware to be of the quality
>that Linux already is.  I hear a lot of calls about ways people think
>Linux should be made better; but what I don't hear is anyone
>volunteering to actually do the work.  

A direct hit: I haven't done vary much, but that will hopefully change
in a month or so.  I need to get some for-money work out of the way
first.

(Not that I see anything wrong in suggesting ways to improve Linux, even
if you can't afford to do it yourself.  This is different from requiring
that others do the work for you.)

>This is why a Linux committee would not terribly productive; if it's
>composed of people who do nothing more than demand that volunteers spend
>even more of their time working on it, so that it becomes "acceptable"
>--- it won't work, and it will only breed resentment on both sides.  

I agree that there is no justification on requiring any volunteer to do
anything, and I hope what I have said didn't sound like that.  I wanted
to express a hope what somebody (possibly even myself) would do, if and
when they find the time and resources, and also express an opinion on
how the future of Linux should look like. 

>What we're seeing is the results of Linux's success.  It has proven to
>be so successful that people are forgetting that it is freeware, and
>STILL IN BETA TEST; that it is rare for you to even get what you pay
>for, let alone exceed what you paid for it (which is what Linux has
>clearly done).  Instead, people are assuming that it should be a
>turn-key system; and Someone should put together better system
>administration tools than what currently exists on the market.

While it's still in beta, I was thinking about what it should look like
when it no more is.  A turn-key system is more or less exactly what I
think it should be, from an ordinary user's (non-Linux hacker's) point
of view.

>Guys (and Gals) ---- release engineeering is hard work, and in general
>no fun.  

I know, and I said as much.

>You should be thanking those people who have been putting
>together the MCC release, the TAMU release, or the MJ release, not
>complaining about how the maintainers should be putting even more of
>their free time into it.  If you're not satisifed with the quality of
>those releases, put one together yourself!  If it's better than all the
>others, everyone will start using it.

I wasn't saying they should put more time into it, and I wasn't
complaining about their work either, at least not directly.  It is just
that my personal impression of the current state of Linux (and the
probable state for 1.0) is (or was) that things were not as well as they
could be, and was trying to raise a bit of discussion of this.  I may
have been mistaken of the qualities of the various releases, I need to
examine them as soon as possible.

>As for system administration tools --- that is still an unsolved
>problem; I have yet to see a general purpose system administrator's tool
>that works in all environments; handles everything that a potential
>sysadmin might want to do; and doesn't get in the way of an experienced
>administrator.  Why are people demanding that Linux provide a solution
>to an unsolved research problem?

Because it is needed for J. Random Luser to be able to run Linux on his
home computer.

However, I do not think a complete automatization of administration is
needed, nine tenths of the job should be doable by writing programs that
walk the user through basic tasks such as adding new users, setting up
cron jobs, making backups, etc.  Having a program to do these things is
much more easier for an ordinary user than having to edit various text
files with an editor.  I am sure there are programs for this already
floating around.

>In summary, if you're not satisfied with how Linux is progressing, put
>in the time and effort to improve the areas you are complaining about.

I'll do exactly that as soon as I get the time.

>If you can't do that, rest assured that many of us are aware of the
>shortcomings of Linux, and some of us are wondering how to find the time
>to address the deficiency.  Flaming about it, however, isn't going to
>help.  Talk is cheap; actually doing something about it is harder.

I sincerely assure that I did not intend to flame.  I fear that it is
perceived as such, and that this thread is going in that direction, so I
will try to be more careful in expressing myself.

--
Lars.Wirzenius@helsinki.fi

From: davidsen@ariel.crd.GE.COM (william E Davidsen)
Newsgroups: comp.os.linux
Subject: Re: Stabilizing Linux
Date: 10 Aug 92 15:38:56 GMT
Reply-To: davidsen@crd.ge.com (bill davidsen)
Organization: GE Corporate R&D Center, Schenectady NY
Nntp-Posting-Host: ariel.crd.ge.com

In article < sheldon.713197562@pv141b.vincent.iastate.edu>, 
sheldon@iastate.edu (Steve Sheldon) writes:

|  Yes, we need to specify what features would be nice in this v1.0.
| And then not deviate too much from the feature list, just perform
| bug fixes, to help stablize it.

  Features are chosen on the "one man, one vote" principle, and the one
man is Linus. I need some IPC stuff which isn't there, but other than
one note to him explaining which features I need to port, I'm not going
to bug him, because I'm unwilling to implement the features.

|  Obviously, people should be encouraged to work on new featureful
| ideas, but keep these as a parallel development to be included in
| v1.1, or whatever.
| 
|  I'd like to see something out by Jan93, maybe.  I think it advisable
| that we not get into a situation like Commodore's AmigaDOS 2.04.  This
| started out as a v1.4 which will be out by Spring '89, and then into
| a v2.0 for final release in '90, and ended up as 2.04 and didn't get
| released to the general public until fall '91.  While I was not an
| Amiga developer, it sounded like it had a creeping feature syndrome, and
| they kept adding stuff and testing became quite a burden.

  Is that a statement that you are going to start writing the docs,
making the disks, etc?

|  The problem with this scenario, is that while people are waiting and waiting
| for the stabilized release, they are more likely to loose interest, and go
| do something else.  I'd like to bait & hook a few more people, and show
| them the wonders of *nix, which I am just now getting to know myself.

  Getting people to try it will take a lot more than a few notes and a
handful of disk images. I find that Linux is really a pain to use, and I
have ftp access and a pretty good idea of how unix works. The DOS user
hasn't a clue. I tried to show Linux to some people, and they said "let
me look at the manual." That's the problem right now.

| >After the kernel has been tested thoroughly, for a few weeks or
| >so, and all the major bugs have been fixed, we release it as 1.0. 
| >Then we consider this as the baseline, and create one official
| >release that is easy to install, easy to administer, contains
| >everything necessary and a lot of unnecessary but probably widely
| >useful things and package everything neatly.  
| 
|  test...test...test...  It's a tough job, but necessary.

  It's unavoidable just using the system.

|  Yes, most definitely.  A package which all you need to do is stick a 
| boot disk into the machine, and walk you thru the install.  Much like
| any of these commercial Unixes, SCO, Dell, etc.
|  I'd love to see a full package which included the administration utilities,
| the documentation, manual pages, networking software, X386, etc.  Only
| install what you want/need.

  Boy, so would I, and when you write that manual and get it all
packaged, I'll take a copy.

|  I'd also like to see as part of this "release" a couple of "manuals"
| which the user could print out from the DOS side.  Something that
| is nicely formatted, and ready to copy to the printer.  An "installation
| guide", and a "user's guide".  The installation guide would go step
| by step thru the installation, and the user's guide would describe just
| what is included in the package, and go over some of the basics of unix, 
| as well as basic setup and administration.  There are thousands of pages
| already written on Unix, and I don't expect we could cover everything.
| But certainly the basics, and the things specific to Unix.

  Let me say a word about documentation. I ahve written a number of
things which have gone out through source groups, all the way back to
mod.sources. During that time I wrote a user's guide to zoo, the
archiver. The guide is about 35 pages long, and took almost as long to
write a a major program. I have had a total of two people drop me a line
to tell me they liked it, and few if any archive sites carry the manual
even though they have the software.

  Given that documentation is less fun to write than software, brings no
recognition whatsoever, and takes longer page for page than code, you
need to find someone who really wants to write documentation, or it's
not going to happen. If there's someone who is a good writer and a
complete loss as a programmer, and who wants to really contribute to the
effort, that's the person who should do it. And all the postings in the
world about what we need won't help unless you can find someone to do
it.

-- 
bill davidsen, GE Corp. R&D Center; Box 8; Schenectady NY 12345
    I admit that when I was in school I wrote COBOL. But I didn't compile.

From: davidsen@ariel.crd.GE.COM (william E Davidsen)
Newsgroups: comp.os.linux
Subject: Re: Linux CDROM (Was stabilizing Linux)
Date: 10 Aug 92 18:46:23 GMT
Reply-To: davidsen@crd.ge.com (bill davidsen)
Organization: GE Corporate R&D Center, Schenectady NY
Nntp-Posting-Host: ariel.crd.ge.com

In article <1992Aug9.003422.15656@athena.mit.edu>, f6930910@scheme.cs.ubc.ca writes:

| To become an operating system, Linux needs to look like Berkeley's
| Net-2 tapes, or the USL source tree, or the VMS source tree, or
| any other complete system.  This will be a major undertaking
| (seeing to it that there are manuals for everything would alone
| be a major undertaking).

  A major problem, as I've noted before. Right now I have lots of files
I've ftp'd, handwritten notes on 3x5 cards, and my memory. Not a nice
clean bit of documentation!

  A good reason to buy a CD_ROM drive, though, instead of reading stuff
off one at work and bringing it home.
-- 
bill davidsen, GE Corp. R&D Center; Box 8; Schenectady NY 12345
    I admit that when I was in school I wrote COBOL. But I didn't compile.

From: hargrove@theory.TC.Cornell.EDU (Paul H. Hargrove)
Subject: Adduser program (was Re: Stabilizing Linux)
Date: Mon, 10 Aug 1992 21:15:27 GMT

In article <1992Aug9.235619.15106@muddcs.claremont.edu> 
jwinstea@jarthur.claremont.edu (Jim Winstead Jr.) writes:
[stuff deleted]
>
>Better yet, why are people demanding it without doing anything about
>it?  I still have not seen a good 'adduser' utility for Linux - a
>crude one (meaning no fancy curses interface) shouldn't take more than
>a couple of evening programming sessions to write.  A user-friendly
>one (read: fancy curses interface) shouldn't be that much harder.
>
>As it is, the only adduser tool I've seen is one that claims to
>have been written during the commericals of a TV show, and it looks
>like it.  Hell, even the passwd/chsh programs out there aren't the
>greatest.
>
[more deleted]

Ok, I'll bite.
I've been looking for a new Linux project, so I'll volunteer to write
a good 'adduser' utility.  I'll start with the 'crude one', and perhaps
progress on to the 'fancy curses interface'.
Everyone who has wants/don't wants for such a tool please mail them to
me.

>-- 
>                                    +    Jim Winstead Jr. (CSci '95)
>                                    |            Harvey Mudd College
>                                    | jwinstea@jarthur.Claremont.EDU
>                                    + This is all my words.  Honest!

-- 
Paul H. Hargrove 
hargrove@theory.tc.cornell.edu 
"A witty saying proves nothing." --Voltaire

From: eric@tantalus.dell.com (Eric Youngdale)
Newsgroups: comp.os.linux
Subject: Re: Linux CDROM (Was stabilizing Linux)
Date: 10 Aug 92 21:17:04 GMT
Organization: Naval Research Laboratory

In article <1992Aug10.184623.1572@crd.ge.com> davidsen@crd.ge.com 
(bill davidsen) writes:
>In article <1992Aug9.003422.15656@athena.mit.edu>, f6930910@scheme.cs.ubc.ca 
writes:
>
>| To become an operating system, Linux needs to look like Berkeley's
>| Net-2 tapes, or the USL source tree, or the VMS source tree, or
>| any other complete system.  This will be a major undertaking
>| (seeing to it that there are manuals for everything would alone
>| be a major undertaking).

	I do not see an easy way to fix the distribution, without a *lot* of
work.  One of the main advantages of linux is that it will run on a minimal
system, say 2Mb ram, and 10 Mb disk.  Would a standard distribution contain
Emacs?  X11? MGR?  Which X utilities?  Would we include GCC, or would this be
an add on for those who do programming?  How about TeX?  Right now we have a
modular approach which makes it easy to add the tools that you want.
Unfortunately this also means that new users have a hard time figuring out
what is going on.

>
>  A major problem, as I've noted before. Right now I have lots of files
>I've ftp'd, handwritten notes on 3x5 cards, and my memory. Not a nice
>clean bit of documentation!

	Yes, documentation is a real weakness, right now, and  I am not sure
how best to fix it.  Writing documentation has about as much sex appeal
as a road accident, so I cannot see it getting much better, anytime soon.
Of course, if someone wants to work on this....

	Seriously, if linux really catches on, I would expect someone to start
writing a book about it.  Right now linux is really only readily availible to
people who have net access of some kind (A bbs distribution would seem to me
to be too painful to contemplate).  An author would need a guarantee that new
users could have ready access to linux, and a CDROM would do this.  A package
that included a CDROM with the book would give a new user everything they need.
Right now the rate of change is still too high for a book to make sense, but
once things settle down a little more it would be a more realistic project.

>  A good reason to buy a CD_ROM drive, though, instead of reading stuff
>off one at work and bringing it home.

	I agree.  I have the Simtel20 disc that Walnut Creek also produces, and
I will never ftp to that site again.  Ftp is just too slow, and browsing can be
an agony.  With the disc, I can grep all of the readme files, or pluck out one
file that I need.  I can fire up Emacs on the master index and search for
specific things.  Something look like it might be interesting?  Unpack it into
/tmp and see.  Come to think of it, I did see something on the simtel disc that
was essentially like mtools, except that it runs from dos and can read a minix
partition.  Without the ability to browse, I probably would have never found
it.  It needs a little work, but it is there for anyone who wants it.

	With the linux disc, I would suspect that all of the FAQ and c.o.l.
archives will be on the disc, without taking up precious hard disk space.  A
shared library update would mean just plucking the source for an application
off of the cdrom, recompile and then delete the sources from the hard disk.

-Eric
--
Eric Youngdale
eric@tantalus.nrl.navy.mil

From: erc@unislc.uucp (Ed Carp)
Subject: Re: Stabilizing Linux
Date: Mon, 10 Aug 1992 22:14:12 GMT

wirzeniu@klaava.Helsinki.FI (Lars Wirzenius) writes:

: While it's still in beta, I was thinking about what it should look like
: when it no more is.  A turn-key system is more or less exactly what I
: think it should be, from an ordinary user's (non-Linux hacker's) point
: of view.

Shit -- this guy is no fun!  :)  What would linux be without 2 AM
kernel patches? :)  Or 3 different getty programs, shoelace, lilo, and
4 different versions of the kernel to play with? :)

The reason I hate "turn-key" systems is that they lock you into one
solution.

: Because it is needed for J. Random Luser to be able to run Linux on his
: home computer.

I don't think this is a system that the typical MS-DOS user can install
without a bit of knowing what's going on under the hood ... nor will it
be, at least in the near future.
-- 
Ed Carp, N7EKG     erc@apple.com                801/538-0177
"This is the final task I will ever give you, and it  goes  on  forever.   Act
happy, feel happy, be happy, without a reason in the world. Then you can love,
and do what you will."           -- Dan Millman, "Way Of The Peaceful Warrior"

From: hlu@phys1.physics.wsu.edu (Hongjiu Lu)
Subject: Re: Adduser program (was Re: Stabilizing Linux)
Date: 10 Aug 92 22:28:44 GMT

Do we have to reinvent the wheel? We can borrow something from bsd on 
this. As for curses, I wrote a program years ago, which was for a very
simple database and could take personal info from curses, like address,
phone, names, ssn and etc. It is not that hard to write. I also had another
fancy curses interface for graphics. It had several level menus to control the
behavior of graphics.

Is there a tool kit for curses?

-- 
H.J.
Gcc/libc maintainer for Linux.
----
In article < 1992Aug10.211527.11011@tc.cornell.edu>, 
hargrove@theory.TC.Cornell.EDU (Paul H. Hargrove) writes:
|> In article < 1992Aug9.235619.15106@muddcs.claremont.edu> 
|> jwinstea@jarthur.claremont.edu (Jim Winstead Jr.) writes:
|> [stuff deleted]
|> >
|> >Better yet, why are people demanding it without doing anything about
|> >it?  I still have not seen a good 'adduser' utility for Linux - a
|> >crude one (meaning no fancy curses interface) shouldn't take more than
|> >a couple of evening programming sessions to write.  A user-friendly
|> >one (read: fancy curses interface) shouldn't be that much harder.
|> >
|> >As it is, the only adduser tool I've seen is one that claims to
|> >have been written during the commericals of a TV show, and it looks
|> >like it.  Hell, even the passwd/chsh programs out there aren't the
|> >greatest.
|> >
|> [more deleted]
|> 
|> Ok, I'll bite.
|> I've been looking for a new Linux project, so I'll volunteer to write
|> a good 'adduser' utility.  I'll start with the 'crude one', and perhaps
|> progress on to the 'fancy curses interface'.
|> Everyone who has wants/don't wants for such a tool please mail them to
|> me.
|> 
|> >-- 
|> >                                    +    Jim Winstead Jr. (CSci '95)
|> >                                    |            Harvey Mudd College
|> >                                    | jwinstea@jarthur.Claremont.EDU
|> >                                    + This is all my words.  Honest!
|> 
|> -- 
|> Paul H. Hargrove 
|> hargrove@theory.tc.cornell.edu 
|> "A witty saying proves nothing." --Voltaire


From: kennu@mits.mdata.fi (Kenneth Falck)
Subject: Re: Stabilizing Linux
Date: Tue, 11 Aug 1992 00:23:10 GMT

In article <1992Aug9.192757.27571@athena.mit.edu> 
tytso@ATHENA.MIT.EDU (Theodore Ts'o) writes:
>This is why a Linux committee would not terribly productive; if it's
>composed of people who do nothing more than demand that volunteers spend
>even more of their time working on it, so that it becomes "acceptable"
>--- it won't work, and it will only breed resentment on both sides.  If
>it is composed of those who are actually doing the work --- well, those
>who are already doing the work currently have the say about what their
>work produces; why have a committee to formalize such things?

My idea of a "linux committee" would be a group of people, who
would maintain some sort of a list of programs and versions
that are available for linux. I don't think there'd be any need
for "acceptance" of programs, just maintaining a list of all
the utilities available for linux that you have to download
to get the system working. There are of course these collection
releases, but I believe none of them contains EVERYTHING, and
checking out the ftp-sites isn't fun to do, when you can't get
a simple list of the programs with comments.

I dunno, maybe this kind of a list wouldn't help anything
in practice. But it could help find your way in the ftp-jungle.

But I think it should be maintained by some volunteer that
would really knows Unix, e.g. to be able to easily mark the
programs that belong to the "standard" Unix command set.

What comes to hacking, I'm pretty sure I'm one of the people
whose names you don't recognize as a Linux developer. True,
I've only made a few programs for my own use, as I don't know
very much about porting stuff or kernel programming, and the
development environment of Linux still feels a little bit
unstable; GCC sometimes goes wacko under heavy load. I hope
this still doesn't disallow me to write these creative articles :-)

-- 
kennu@mits.mdata.fi
<here's where the dumb quote is supposed to be>

Newsgroups: comp.os.linux
From: erc@unislc.uucp (Ed Carp)
Subject: Re: Linux CDROM (Was stabilizing Linux)
X-Newsreader: Tin 1.1 PL4
Organization: Unisys Corporation SLC
Date: Tue, 11 Aug 1992 00:43:25 GMT

eric@tantalus.dell.com (Eric Youngdale) writes:

: 	Seriously, if linux really catches on, I would expect someone to start
: writing a book about it.  Right now linux is really only readily availible to

I've already got one started. :)  As the kernel evolves, I'm adding new
sections.  "The Design of the Linux Operating System" :)  Snappy title, huh?? :)

: people who have net access of some kind (A bbs distribution would seem to me
: to be too painful to contemplate).  An author would need a guarantee that new
: users could have ready access to linux, and a CDROM would do this.  A package
: that included a CDROM with the book would give a new user everything they need.

Make some pocket change by offering it for sale.  CD-ROM or diskettes. :)

: >  A good reason to buy a CD_ROM drive, though, instead of reading stuff
: >off one at work and bringing it home.

The problem is that it's changing so damned *fast*!

: 	With the linux disc, I would suspect that all of the FAQ and c.o.l.
: archives will be on the disc, without taking up precious hard disk space.  A
: shared library update would mean just plucking the source for an application
: off of the cdrom, recompile and then delete the sources from the hard disk.

Is this for R/W CD-ROM drives?  Linux is changing so fast that even *doing*
a CD-ROM right now doesn't make a whole lot of sense, unless you have
rewriteable optical capability.
-- 
Ed Carp, N7EKG     erc@apple.com                801/538-0177
"This is the final task I will ever give you, and it  goes  on  forever.   Act
happy, feel happy, be happy, without a reason in the world. Then you can love,
and do what you will."           -- Dan Millman, "Way Of The Peaceful Warrior"


From: wirzeniu@klaava.Helsinki.FI (Lars Wirzenius)
Subject: Re: Stabilizing Linux
Date: 11 Aug 92 12:42:11 GMT

erc@unislc.uucp (Ed Carp) writes:
>Shit -- this guy is no fun!  :)  

And I wasn't even trying to tell a joke.  :)  (Linus is probably all too
glad to warn people about my jokes. :)

>What would linux be without 2 AM kernel patches? :) Or 3 different getty
>programs, shoelace, lilo, and 4 different versions of the kernel to play
>with? :)

Probably dead.  I have no intention that this changes, but I don't think
anybody is claiming that this is the ideal environment in which real
work should be made (no, I don't mean that you should use current Linux
for real work when it is still officially in beta).

>The reason I hate "turn-key" systems is that they lock you into one
>solution.

Not necessarily, but I think that as long as the solution they offer is
good enough for most people it is a good thing to have such a solution. 
And different solutions just for the sake of differences is seldom a
good thing.

>I don't think this is a system that the typical MS-DOS user can install
>without a bit of knowing what's going on under the hood ... nor will it
>be, at least in the near future.

Probably not, but there are a lot of "knowledgeable users" who might be
interested in trying out Linux, but are not interested in spending a few
weeks trying to get all the necessary things together (or finding out
about the various releases).  Another potential group of real users are
people who use a Unix workstation at work or in school and want to run
some kind of Unix at home as well.  They may be advanced users of Unix
even though they have no adminstration experience.

--
Lars.Wirzenius@helsinki.fi

From: eric@tantalus.dell.com (Eric Youngdale)
Newsgroups: comp.os.linux
Subject: Re: Linux CDROM (Was stabilizing Linux)
Date: 11 Aug 92 14:35:35 GMT
Organization: Naval Research Laboratory

In article <1992Aug11.004325.9409@unislc.uucp> erc@unislc.uucp (Ed Carp) writes:
>: 	With the linux disc, I would suspect that all of the FAQ and c.o.l.
>: archives will be on the disc, without taking up precious hard disk space.  A
>: shared library update would mean just plucking the source for an application
>: off of the cdrom, recompile and then delete the sources from the hard disk.
>
>Is this for R/W CD-ROM drives?  Linux is changing so fast that even *doing*
>a CD-ROM right now doesn't make a whole lot of sense, unless you have
>rewriteable optical capability.

	It is true that linux is changing fast, but we have to keep in mind
that linux is quite usable as it is right now for many applications.  It
appears as if the libraries have settled down quite a bit (back in june we were
getting a new version of gcc every week).  It is true that there are a number
of kernel changes in the works, but we have to keep in mind that a lot of them
are fairly fine points.  We could, for example, live with the fixed size
buffer cache, if we needed to, but there are performance advantages to
the dynamically sized buffer cache and variable size buffers.  Similarily
Linus is talking about allowing a 3.75 Gb/process address space, increased
from the 64Mb that we currently have.  The vast majority of people do not need
more than 64Mb - those that do will have to wait.

	We also have to keep in mind that the linux/etc. cdrom will be
periodically updated, so we are by no means locked in to one particular
version.  Also, if anyone is distributing a CDROM with a book, or whatever,
they could also in principle distribute a floppy with the latest kernel,
if they felt the need.

	I know very little about the R/W optical discs.  I believe that you
need a different drive, and a different type of disc.

--
Eric Youngdale
eric@tantalus.nrl.navy.mil

From: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
Subject: Re: Stabilizing Linux
Reply-To: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
Date: Tue, 11 Aug 1992 19:48:44 GMT

   From: kennu@mits.mdata.fi (Kenneth Falck)
   Date: Tue, 11 Aug 1992 00:23:10 GMT

   My idea of a "linux committee" would be a group of people, who
   would maintain some sort of a list of programs and versions
   that are available for linux. I don't think there'd be any need
   for "acceptance" of programs, just maintaining a list of all
   the utilities available for linux that you have to download
   to get the system working. There are of course these collection
   releases, but I believe none of them contains EVERYTHING, and
   checking out the ftp-sites isn't fun to do, when you can't get
   a simple list of the programs with comments.

That's fine.  But you don't need a committee to do that.  You need one
person who is willing to be the point person and actually coordinate the
list of programs and such which are available for Linux.  Be advised
that this will be a fairly time-consuming task, considering how fast
Linux is changing.  I don't think that having a committee would help,
particularily; they would probably end up getting in each other's way.

                                                        - Ted

Newsgroups: comp.os.linux
From: dje@sspiff.ampr.ab.ca (Doug Evans)
Subject: Re: Stabilizing Linux
Organization: Edmonton, Alberta
Date: Tue, 11 Aug 1992 20:22:05 GMT
X-Newsreader: Tin 1.1 PL4

davidsen@ariel.crd.GE.COM (william E Davidsen) writes:
>  Let me say a word about documentation. I ahve written a number of
>things which have gone out through source groups, all the way back to
>mod.sources. During that time I wrote a user's guide to zoo, the
>archiver. The guide is about 35 pages long, and took almost as long to
>write a a major program. I have had a total of two people drop me a line
>to tell me they liked it, and few if any archive sites carry the manual
>even though they have the software.
>
>  Given that documentation is less fun to write than software, brings no
>recognition whatsoever, and takes longer page for page than code, you
>need to find someone who really wants to write documentation, or it's
>not going to happen. If there's someone who is a good writer and a
>complete loss as a programmer, and who wants to really contribute to the
>effort, that's the person who should do it. And all the postings in the
>world about what we need won't help unless you can find someone to do
>it.

Why not try to use documentation that already exists (where possible, of
course)? Why keep reinventing the wheel just for the sake of reinventing the
wheel???

For example, I can buy a whole suite of manuals on SVR4/386 from Prentice Hall
for about $300. Yes yes, that's pretty steep, but what are the realistic
alternatives?

Suppose Linux was heading to where SVR4/386 already is: most of our manuals
would already be written for us. And we would rarely have to cope with Linux
quirks when porting the various applications. And we would have a roadmap of
where we were going.

I know we don't want to do this, but as a hacker, I often (though not always)
like hacking on new things, not redoing existing things just so I can say
"I did it my way".
-- 
Doug Evans               | "You're just supposed to sit here?"
dje@sspiff.ampr.ab.ca    |               - Worf in a mud bath.

From: davidsen@ariel.crd.GE.COM (william E Davidsen)
Newsgroups: comp.os.linux
Subject: Re: Linux CDROM (Was stabilizing Linux)
Date: 11 Aug 92 20:46:32 GMT
Reply-To: davidsen@crd.ge.com (bill davidsen)
Organization: GE Corporate R&D Center, Schenectady NY
Nntp-Posting-Host: ariel.crd.ge.com

In article <3274@ra.nrl.navy.mil>, eric@tantalus.dell.com (Eric Youngdale) writes:

| >  A good reason to buy a CD_ROM drive, though, instead of reading stuff
| >off one at work and bringing it home.
| 
| 	I agree.  I have the Simtel20 disc that Walnut Creek also produces, and
| I will never ftp to that site again.  Ftp is just too slow, and browsing can be
| an agony.  With the disc, I can grep all of the readme files, or pluck out one
| file that I need.  I can fire up Emacs on the master index and search for

  Are these the people who said they'd give a free disk to contributors
of the software? I wrote to then about that, because I was moderator of
cbip for three years, and push at least 30-40MB of those contributions
out my modem on their way to Simtel20. I guess it only applies to
/authors/ of the software, though, because I never got a disk from them.

  Since compressed data works as well on CD-ROM as anywhere else, I
think there's room for lots of stuff on that platter, maybe the
source.misc and source.unix, or simtel20 unix collection, etc. Ithink
all of Linux stuff in a good compressed format would fit in 50MB,
leaving lots of room for other stuff.
-- 
bill davidsen, GE Corp. R&D Center; Box 8; Schenectady NY 12345
    I admit that when I was in school I wrote COBOL. But I didn't compile.

From: eric@tantalus.dell.com (Eric Youngdale)
Newsgroups: comp.os.linux
Subject: Re: Linux CDROM (Was stabilizing Linux)
Date: 11 Aug 92 21:39:51 GMT
Organization: Naval Research Laboratory

In article <1992Aug11.204632.11714@crd.ge.com> davidsen@crd.ge.com 
(bill davidsen) writes:
>
>  Since compressed data works as well on CD-ROM as anywhere else, I
>think there's room for lots of stuff on that platter, maybe the
>source.misc and source.unix, or simtel20 unix collection, etc. Ithink
>all of Linux stuff in a good compressed format would fit in 50MB,
>leaving lots of room for other stuff.

	Actually, not all of it would have to be compressed.  The GNU/X disc
with the sparc binaries has the gnu software uncompressed, and unpacked with
each program in it's own subdirectory.  This makes browsing really easy.

	It would also be possible to have a image of a working linux system on
the disc.  A new user could install the root diskette, add the CDROM to the
PATH and MANPATH, and would be up and running without much pain. Symbolic links
could be added as required to point to things like the emacs .elc files and
sharable libraries.  You could even have a dedicated CDROM since the going
price is about 200$ (a 600Mb hard drive is currently >1000$).  I doubt that
this is a really good idea in the long run, because CDROM drives are slower
than a regular hard disk, but it would be an easy way to get someone on their
feet.

-Eric
--
Eric Youngdale
eric@tantalus.nrl.navy.mil

Newsgroups: comp.os.linux
Path: sparky!uunet!van-bc!bhenning
From: bhenn...@wimsey.bc.ca (Bill Henning)
Subject: Re: Linux CDROM (Was stabilizing Linux)
Organization: Wimsey 
Date: Wed, 12 Aug 1992 02:02:46 GMT
Message-ID: <1992Aug12.020246.22166@wimsey.bc.ca>
References: <3274@ra.nrl.navy.mil> <1992Aug11.004325.9409@unislc.uucp> 
<3284@ra.nrl.navy.mil>
Lines: 6

3.75Gb/process would be great! I am not likely to need that ofcourse, nor will
I likely have a swap partition much greater than 10-20Mb, but fewer limitations
are allways welcome.

Now if the number of processes are also increased from 64 to say 1024 that would
be great! (yes, I can see running out of 64 processes)

Newsgroups: comp.os.linux
From: tep@engr.uark.edu (Tim Peoples)
Subject: Re: Stabilizing Linux
Nntp-Posting-Host: engr.uark.edu
Reply-To: tep@engr.uark.edu
Organization: University of Arkansas
Organiztion: University of Arkansas, Dept. of Computer Systems Engineering
Date: Wed, 12 Aug 1992 12:47:49 GMT

dje@sspiff.ampr.ab.ca (Doug Evans) writes:

>davidsen@ariel.crd.GE.COM (william E Davidsen) writes:
>>  Let me say a word about documentation. I ahve written a number of
>>things which have gone out through source groups, all the way back to
>>mod.sources. During that time I wrote a user's guide to zoo, the
>>archiver. The guide is about 35 pages long, and took almost as long to
>>write a a major program. I have had a total of two people drop me a line
>>to tell me they liked it, and few if any archive sites carry the manual
>>even though they have the software.
>>
>>  Given that documentation is less fun to write than software, brings no
>>recognition whatsoever, and takes longer page for page than code, you
>>need to find someone who really wants to write documentation, or it's
>>not going to happen. If there's someone who is a good writer and a
>>complete loss as a programmer, and who wants to really contribute to the
>>effort, that's the person who should do it. And all the postings in the
>>world about what we need won't help unless you can find someone to do
>>it.
>
>Why not try to use documentation that already exists (where possible, of
>course)? Why keep reinventing the wheel just for the sake of reinventing the
>wheel???
>
>For example, I can buy a whole suite of manuals on SVR4/386 from Prentice Hall
>for about $300. Yes yes, that's pretty steep, but what are the realistic
>alternatives?
>
>Suppose Linux was heading to where SVR4/386 already is: most of our manuals
>would already be written for us. And we would rarely have to cope with Linux
>quirks when porting the various applications. And we would have a roadmap of
>where we were going.
>
>I know we don't want to do this, but as a hacker, I often (though not always)
>like hacking on new things, not redoing existing things just so I can say
>"I did it my way".
>-- 
>Doug Evans               | "You're just supposed to sit here?"
>dje@sspiff.ampr.ab.ca    |               - Worf in a mud bath.

Plagiarism.

-- 

+--------------------------------------+-------------------------------------+
| Tim Peoples                          | The time has come the hacker said   |
| tep@engr.uark.edu                    | to talk of many things,             |
| Dept. of Computer Systems Engineering| of simms and sockets and semaphores |
| University of Arkansas, Fayetteville | of processes and pings....          |
+--------------------------------------+-------------------------------------+
|    I need no disclaimer; nobody listens to what I have to say anyway!!     |
+----------------------------------------------------------------------------+

From: eric@tantalus.dell.com (Eric Youngdale)
Newsgroups: comp.os.linux
Subject: Re: Stabilizing Linux
Date: 12 Aug 92 14:33:47 GMT
Organization: Naval Research Laboratory

In article <1992Aug12.124749.17219@engr.uark.edu> tep@engr.uark.edu writes:
>dje@sspiff.ampr.ab.ca (Doug Evans) writes:
>
>>davidsen@ariel.crd.GE.COM (william E Davidsen) writes:
>>For example, I can buy a whole suite of manuals on SVR4/386 from Prentice Hall
>>for about $300. Yes yes, that's pretty steep, but what are the realistic
>>alternatives?
>>
>>Suppose Linux was heading to where SVR4/386 already is: most of our manuals
>>would already be written for us. And we would rarely have to cope with Linux
>
>Plagiarism.

	No, you miss the point.  I think that the idea was that we could simply
tell people to go down to their local technical bookstore, and tell them to buy
the SVR4/386 manual set.  The users manual might be OK, but we need to remember
that we are using mostly GNU replacements, and this means that there are
additional switches and features which would not be in the SVR4 manuals.
The system administrators manual for SVR4 is useless as far as linux is
concerned. 

	There is a linux manpage project that someone is coordinating.  I do
not know the status of it, but perhaps someone involved would care to fill us
in on the status.  I would be interested in seeing a list of which utilities
they have man pages for.

	This could be the beginning of a linux manual.  My feeling is that
a collection of man pages is a very poor substitute for a real manual, but
the main advantage is that there is very little writing required to get
something publishable.

--
Eric Youngdale
eric@tantalus.nrl.navy.mil

Path: sparky!uunet!crdgw1!rdsunx.crd.ge.com!ariel!davidsen
From: david...@ariel.crd.GE.COM (william E Davidsen)
Newsgroups: comp.os.linux
Subject: Re: Linux CDROM (Was stabilizing Linux)
Message-ID: <1992Aug12.164546.13304@crd.ge.com>
Date: 12 Aug 92 16:45:46 GMT
References: <3274@ra.nrl.navy.mil> <1992Aug11.004325.9409@unislc.uucp> 
<3284@ra.nrl.navy.mil> <1992Aug12.020246.22166@wimsey.bc.ca>
Sender: use...@crd.ge.com (Required for NNTP)
Reply-To: david...@crd.ge.com (bill davidsen)
Organization: GE Corporate R&D Center, Schenectady NY
Lines: 16
Nntp-Posting-Host: ariel.crd.ge.com

In article <1992Aug12.020246.22...@wimsey.bc.ca>, bhenn...@wimsey.bc.ca 
(Bill Henning) writes:
| 3.75Gb/process would be great! I am not likely to need that ofcourse, nor will
| I likely have a swap partition much greater than 10-20Mb, but fewer limitations
| are allways welcome.
| 
| Now if the number of processes are also increased from 64 to say 1024 that would
| be great! (yes, I can see running out of 64 processes)

  I can see running out of 64 processes a lot faster than running out of
64MB address space. I certainly am not running that much memory and
swap. Actually, with 12MB I can compile the kernel while reading mail
and still not swap. I know, because I don't even have a swap area,
haven't run out of memory yet.
-- 
bill davidsen, GE Corp. R&D Center; Box 8; Schenectady NY 12345
    I admit that when I was in school I wrote COBOL. But I didn't compile.

Path: sparky!uunet!mcsun!news.funet.fi!hydra!klaava!torvalds
From: torva...@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.linux
Subject: Re: Linux CDROM (Was stabilizing Linux)
Message-ID: <1992Aug13.095529.18687@klaava.Helsinki.FI>
Date: 13 Aug 92 09:55:29 GMT
References: <3284@ra.nrl.navy.mil> <1992Aug12.020246.22166@wimsey.bc.ca> 
<1992Aug12.164546.13304@crd.ge.com>
Organization: University of Helsinki
Lines: 18

In article <1992Aug12.164546.13...@crd.ge.com> david...@crd.ge.com 
(bill davidsen) writes:
>In article <1992Aug12.020246.22...@wimsey.bc.ca>, bhenn...@wimsey.bc.ca 
(Bill Henning) writes:
>| 3.75Gb/process would be great! I am not likely to need that ofcourse, nor will
>| I likely have a swap partition much greater than 10-20Mb, but fewer limitations
>| are allways welcome.
>| 
>| Now if the number of processes are also increased from 64 to say 1024 that would
>| be great! (yes, I can see running out of 64 processes)
>
>  I can see running out of 64 processes a lot faster than running out of
>64MB address space. I certainly am not running that much memory and
>swap.

The things are related: the 64 process maximum will be gone the same day
the 64MB limit is gone.  It will just take a bit of coding on my part,
so don't expect it tomorrow.. 

		Linus

Newsgroups: comp.os.linux
Path: sparky!uunet!decwrl!world!dsb
From: d...@world.std.com (David Boyce)
Subject: Re: Stabilizing Linux
Message-ID: <Bt1u3u.3zv@world.std.com>
Organization: The World Public Access UNIX, Brookline, MA
References: <1992Aug6.125441.22427@klaava.Helsinki.FI>
Date: Sat, 15 Aug 1992 23:47:53 GMT
Lines: 118

In article <1992Aug6.125441.22...@klaava.Helsinki.FI> 
wirze...@klaava.Helsinki.FI (Lars Wirzenius) writes:
>What I propose is this (getting to the point already, are we?):
>We'll let the kernel evolve towards what we want 1.0 to be, at
>least featurewise. After we reach a point where all the features
>we want are present, we freeze that version of the kernel as 0.99,
>and refuse to put in new features (unless they are essential). 
>
>After the kernel has been tested thoroughly, for a few weeks or
>so, and all the major bugs have been fixed, we release it as 1.0. 
>Then we consider this as the baseline, and create one official
>release that is easy to install, easy to administer, contains
>everything necessary and a lot of unnecessary but probably widely
>useful things and package everything neatly. 
>
>This package is what should be called Linux, not just the kernel. 
>Compare with, say, SVR4: it is not just a kernel, but a whole
>bunch of software. Also the whole package is given its own
>version number, instead of using the kernel's version number as is
>currently done. 
>
>The whole package will then be put into all the major Linux ftp
>sites, and those sites are cleaned up so that there are no old
>releases that confuse people. This is then announced and
>advertised as the official released version of Linux which should
>be used by everybody but hackers and others who like to have
>problems. It is also advertised to remain that way for at least a
>few months...

Two prolific threads lately have been this "Stabilizing Linux"
series and the discussions of the coming commercial CDROM/floppy releases.
It seems to me that there's a natural way to put them together:
The upshot of Lars' article seems to be that "we need to create
a stable, reliable Linux 1.0 version" and thus that we (using the term
loosely, I haven't contributed much) should go through the usual
integrate/test/document/release process that most commercial software
goes through. While I agree with most of Lar's article and the followups,
my disagreement with the above is rooted in the fact that commercial
software developers go through the painful release process because
they have paying customers, who are understandably upset when bugs or
incompatibilities show up or when promised features don't. But linux
per se has no paying customers, just users.
Now, at the same time we're reading that there are for-profit
organizations getting ready to issue CDROM or floppy editions
of linux as soon as it stabilizes. While I'm not one of those
people that has a problem with this (I understand the GNU concept
etc.) I do have trouble understanding why the companies that propose
to have paying linux customers shouldn't be the ones to do the
integration and testing.
So this is my proposal: instead of doing the integration and testing
for (the entire system) Linux 1.0, we should spend the intervening time
between now and (kernel) 1.0 discussing and developing a document
in the spirit of SVID or POSIX*, i.e. something which says
"you can't call your product Linux unless it conforms to the
following specifications". This wouldn't need to be a big thing,
just an invocation of the appropriate POSIX etc. specs where
applicable, a description of filesystem layout and utility
features ("... the utility suite is that provided by FSF,
with the exception of the following which are in the BSD
distribution, and such-and-such special cases...").
Note that this document doesn't concern itself with versions;
it simply says that the following files will exist in the following
places with the following modes and will exhibit the following behavior....
Once we've issued release 1.0 of that document and Linus has
blessed version 1.0 of his kernel, we can sit back and wait for
the CDROM company(s) to put together a package that satisfies
this Linux Interface Document (LID). If the makers of mcc-interim,
MJ, etc. can do this without a spec in their spare time, I assume
it can be done commercially without too much trouble, given the spec.
Actually, this document or something like it is probably already
being worked on by the linux-standards people.
And then as soon as the CDROM is issued, the people at one of the
big linux ftp sites can acquire one (taking donations as appropriate,
I'm sure they'd be gladly given) and mount it
for ftp access (it will be freely distributable, after all).
Other sites can do the same or mirror it, and voila! there is
a stable Linux 1.0 available for ftp by the core linux community,
and hackers can go on issuing a release a day if they like.
And we'll let the for-profit issuers decide when releases 1.1, 2.0,
etc. are required. They can pick fixes from the stream and issue
them as floppy updates to their customers or make whole new releases.
As a customer, the ftp site(s) would get these updates and make
them available for ftp.
I don't think this would be an undue burden on linux sellers.
Anybody selling software has to test it first; they owe that to
their customers. And they'll probably decide to rebuild all the binaries
with the released version of gcc, just for the sake of sanity.
At least, that's what every commercial system vendor I've ever
worked for does. So given that they'll probably go through
a build/integrate/test/release process anyway, why not piggyback
off them? Let them do what they hope to be paid for and let
the linux developers go on being the geese that lay golden eggs.
Also, we need to recognize that whatever version is issued on
floppy/CDROM is going to become a de facto standard, just by the nature
of things. This is why I think that rather than issuing a standard
release and then hoping it's what gets shipped on CDROM, we should
issue only a document and wait for it to be instantiated in CD.

Anyway, this seems, to me at least, to solve everyone's problems:
the hackers can go on hacking and acquiring new software as quickly
as they can ftp it, the "users" buy the CDROM or floppies. The
"Keep Linux FREE!!!!!" contingent will sleep better at night knowing
those commercial vendors are doing some work for their money,
the vendors get exclusive access to the non-internet community,
and those who have ftp access get the best of both worlds
by being able to ftp the CDROM bits for "free". And those of
us on c.o.l who have opinions galore but lack either the time
or the talent to contribute anything else, can get to work
on wrangling over the LID.

To summarize: issuing releases is an incredible drag. Especially
the ones after the first. The problems are caused by the requirements
of paying customers. Thus, I think the burden is best left to
those who have said customers. Let them also take charge of when new
releases are required, release nomenclature, packaging, etc.
Since Linux is freely distributable, we can "steal" their work
right backs for our purposes.
-- 
David Boyce d...@world.std.com 617-576-1540

Newsgroups: comp.os.linux
Path: sparky!uunet!munnari.oz.au!cs.mu.OZ.AU!danielce
From: danie...@mullian.ee.mu.OZ.AU (Daniel AMP Carosone)
Subject: Linux Standards (was: Stabilizing Linux)
Message-ID: <danielce.713926038@munagin>
Followup-To: gnu.misc.discuss
Sender: n...@cs.mu.OZ.AU
Organization: Computer Science, University of Melbourne, Australia
References: <1992Aug6.125441.22427@klaava.Helsinki.FI> <Bt1u3u.3zv@world.std.com>
Date: Sun, 16 Aug 1992 00:47:18 GMT
Lines: 84

+------(David Boyce)----------------------------------------
| 
| Anyway, this seems, to me at least, to solve everyone's problems:
| the hackers can go on hacking and acquiring new software as quickly
| as they can ftp it, the "users" buy the CDROM or floppies. The
| "Keep Linux FREE!!!!!" contingent will sleep better at night knowing
| those commercial vendors are doing some work for their money,
| the vendors get exclusive access to the non-internet community,
| and those who have ftp access get the best of both worlds
| by being able to ftp the CDROM bits for "free". And those of
| us on c.o.l who have opinions galore but lack either the time
| or the talent to contribute anything else, can get to work
| on wrangling over the LID.
|

Thankyou. This is an excellent suggestion. And well put.

Even ignoring the factions and parties building releases, it is an
excellent idea to have a standards document to which releases must
conform rather than a release to which the standard must conform. I
have not been following the discussions on the Linux Standards list, I
wonder if the current efforts there are this ambitious? This is surely
the best place to direct followup conversation on this matter.


+------(David Boyce)----------------------------------------
| 
| To summarize: issuing releases is an incredible drag. Especially
| the ones after the first. The problems are caused by the requirements
| of paying customers. Thus, I think the burden is best left to
| those who have said customers. Let them also take charge of when new
| releases are required, release nomenclature, packaging, etc.
| Since Linux is freely distributable, we can "steal" their work
| right backs for our purposes.
|

Cygnus seems to be doing quite well. And the world is doing quite well
out of Cygnus, in pretty much exactly the way you describe above.
Anyone who has built gcc, gdb, libg++, or any of the number of other
GNU programs that come with the Cygnus `configure' script can attest
to this (especially if you are on a known system). 

All of this is not only within the limits of what is allowed by the
GNU Copyleft -- it is sound proof of one of the basic tenets of the
philosophy behind it: You don't need to force people into restrictions
on how they use software (or any information) in order to make money
from it. 

What many people miss is that GNU is an alternative economy for the
software industry, not an alternative to the software industry.

I've actually been wondering whether or when Cygnus will pick up
Linux, perhaps they're working on it, perhaps they are waiting for
some more development, quite possibly they are too busy with the
wealth of other Free Software. They are certainly aware of Linux, and
must be considering taking it up. Commercially-minded people take
note.

Followups on this matter, as always, to gnu.misc.discuss. The headers
for this article point there.


One other point: In all this discussion, we have been referring to the
aims and rules of the GNU copyleft. While the license conditions for
Linux are the same as for the GNU copyleft, they have not always been.
The original license conditions forbade any money to change hands over
Linux. (perhaps in those early days Linus thought it wasn't worth the
money? :-) And copyright is held by Linus, not by the FSF. We should
abide by the aims of The Grand Wizard Torvalds, out of respect and
gratitute if for none of the other good reasons.

May one presume that if Linus has any objections to what someone is
doing with his work (and, by proxy, the work others have contributed)
he will make them known and clear? If he does have some objection, and
that is not heeded, he is free to change the terms of the License for
later releases if he deems it necessary.

Followups on this issue are pointless, unless Linus wishes to make
some statement.
_______________________________________________________________________________
Daniel AMP Carosone. email: danie...@ee.mu.oz.au snail: 37 Wandin Road
Computer/Software Eng, IRC: Waftam Camberwell 3124
University of Melbourne. Vox: +61 3 882 8910 Australia

From: ericy@hal.gnu.ai.mit.edu (Eric Youngdale)
Newsgroups: comp.os.linux
Subject: Re: Stabilizing Linux
Date: 16 Aug 92 01:46:47 GMT
Organization: /etc/organization

In article < Bt1u3u.3zv@world.std.com> dsb@world.std.com (David Boyce) writes:
>To summarize: issuing releases is an incredible drag. Especially
>the ones after the first. The problems are caused by the requirements
>of paying customers. Thus, I think the burden is best left to
>those who have said customers. Let them also take charge of when new
>releases are required, release nomenclature, packaging, etc.
>Since Linux is freely distributable, we can "steal" their work
>right backs for our purposes.

	I think you may misunderstand the market.  The "paying customers"
mainly want a source for linux that does not depend upon network access or a
modem.  My sense is that the CDROM manufacturers (at least the ones that I have
been in contact with) are not interested in a lot of release engineering.  I
gather that they will take a snapshot of the tsx archives, and combine this
with a snapshot of a working linux system, put that on a disc and call that
Linux.  They are not under any obligation to do more than this.  (The disc will
also include a complete set of gnu sources, in case anyone is wondering).

	It also comes down to pricing.  If the disc is very inexpensive, then
there is no reason to expect them to do very much.  In fact, the Walnut Creek
GNU/X11 with sparc binaries CDROM has a list price of $39.95.  The disc with
the simtel archives has a list price of $24.95, and they have several others in
this price range.  (I have also seen these disks discounted at computer flea
markets by about 15%).  I do not know what their pricing plans for the
linux/386bsd/gnu disc are, but it would not surprise me if the prices for that
CDROM were comparable to the others.

	Theoretically, someone may do the release engineering, and then
try and charge 500$ for the disc.  They would certainly be allowed to under
the GPL, but they would have to compete with the 30$ disk.  I would not want to
hold my breath waiting for someone to try this.

-Eric

Path: sparky!uunet!mcsun!news.funet.fi!hydra!klaava!torvalds
From: torva...@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.linux
Subject: Re: Linux Standards (was: Stabilizing Linux)
Message-ID: <1992Aug16.073340.11418@klaava.Helsinki.FI>
Date: 16 Aug 92 07:33:40 GMT
References: <1992Aug6.125441.22427@klaava.Helsinki.FI> 
<Bt1u3u.3zv@world.std.com> <danielce.713926038@munagin>
Organization: University of Helsinki
Lines: 43

In article <danielce.713926038@munagin> danie...@mullian.ee.mu.OZ.AU 
(Daniel AMP Carosone) writes:
>
>May one presume that if Linus has any objections to what someone is
>doing with his work (and, by proxy, the work others have contributed)
>he will make them known and clear? If he does have some objection, and
>that is not heeded, he is free to change the terms of the License for
>later releases if he deems it necessary.

Well, I've answered this by email and earlier in the newsgroup, but I
guess it won't hurt to say it once more: I have no objection whatsoever
to any commercial use of linux that abides by the copyright. Not only
because I wouldn't have a legal leg to stand on, but simply because
there isn't any point in it. I'm not making any money off linux, so I
cannot lose anything by letting others do it - it's not as if they were
competing in the same market-place. 

Also, if people sell linux, it certainly won't hurt the "free" status of
linux: it won't make all the free releases go away. So there isn't
really anything to get excited about - a commersial linux won't hurt the
linux users in the slightest, and might make linux available to people
that otherwise didn't have the possibility of getting it. 

The earliest versions of linux had a more restrictive copyright: any
commercial activity was prohibited by it. That was mostly due to (a) an
overreaction to the price I had to pay for minix ($169 may not be much,
but it's still more than I can afford: I'm still paying monthly
installments on my machine) and (b) protection: linux wasn't well-known
then, and I didn't think it was ready for commercial use anyway. 

(a) is silly, (b) went away with 0.12 - the copyright essentially
changed when I got the first query about selling linux (with just a
small delay to make sure there were no objections from people that had
made patches available. There weren't). 

And as to the price: it doesn't really matter. If somebody wants to
make linux availabe for $ 995.95 ("special price just for you, amigo"),
I'd certainly be interested to hear how well it sells, but it won't
bother me. And bickering over whether $60 is too much is silly: people
buy it if they feel it's worth it. Actually, a nicely priced product
may sell better than a cheap one: it's illogical, but some people feel
that a product cannot be very good if it's cheap.

Linus

From: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
Subject: Re: Linux Standards (was: Stabilizing Linux)
Reply-To: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
Date: Sun, 16 Aug 1992 22:17:36 GMT

   From: danielce@mullian.ee.mu.OZ.AU (Daniel AMP Carosone)
   Date: Sun, 16 Aug 1992 00:47:18 GMT

   Even ignoring the factions and parties building releases, it is an
   excellent idea to have a standards document to which releases must
   conform rather than a release to which the standard must conform.  

I disagree.  Most usuable standards in the world come about by adopting
an already working implementation and declaring it to be a standard.
Most unworkable standards in the world happen because they were designed
by a committee, which typically consist of pompous people who just sit
around a table, and who are not, generally, the people who would
actually be doing the implementation.

If you want an example of this, just look at the Internet --- the
Internet "standards" were first done by having working implementations,
which were then annointed as the standard.  In contrast, you have the
OSI standards --- which were designed by committee without having any
implementation experience --- and what you end up with is a disaster.

   I have not been following the discussions on the Linux Standards list, I
   wonder if the current efforts there are this ambitious? This is surely
   the best place to direct followup conversation on this matter.

There hasn't been any discussions on the Linux Standards list, in quite
a while.  We just couldn't come to a consensus.  One big problem is that
since *anybody* can join the list, anyone can start blabbing off with
half-*ssed ideas that really won't work --- and it takes an awful lot of
to respond to each of them explaining why the idea is stupid or won't
work, or is completely non-standard compared to every single other Unix
system in the world.  And we were only trying to discuss something as
simple as what to name the devices in /dev!  I saw a bunch of proposals
on that mailing list, for example, about hiearchical directories in /dev
(i.e., /dev/tty/*, /dev/pty/*), and the authors seemed blithely unaware
of much havoc that would wreck amongst unsuspecting programs.  At the
time, I was spending all of my time at a *real* standards meeting (the
Internet Engineering Task Force), so I simply did not have the time and
energy to respond.  And, it turns out, neither did anyone else, as the
list went dead shortly thereafter.

There are two advantages that a "real" standards body (such as the IETF,
or ANSI, or ISO) has over something that we put together.  First of all,
each of those organizations have authority based upon some charter.  Who
would charter a "Linux Standards Committee"?  What would give that group
authority?  Aside from sounding pretentious as all hell, there is
certainly no way you could enforce the statement "You are not allowed to
use the name Linux unless you follow thus-and-so".

The second advantage that a "real" standards body as over something we
put together is that there is a real cost to attend a standards meeting,
even if it's only the travel expenses to the location in question.  This
tends to weed out the duffers and the casual attendees.  While it is
true that this may filter out some brilliant, poor graduate student ---
it also filters out the raving Usenet flamers as well.  You either need
to be rich enough to attend one of these meetings, or your company has
to think well enough about you to let you attend.  While this mode of
operations has its drawbacks, it has its clear advantages, and
unfortunately it's one that could not apply to us.

However, without this, you have to face the fact that you will have a
non-trivial amount of people with relatively little Unix system
experience that will be constantly spouting off and you either have to
outright ignore them --- which doesn't tend to go over well if you want
to at least have the pretense of democracy --- or you have to spend a
lot of time and energy teaching them why they are broken.


The advantage of having multiple releases is that even IF one of
"self-appointed release engineers" is totally losing and is brain
damaged ---- and it turns out to be rarely true, since the people doing
the work tend to be much more reasonable than the airchair quarterbacks
--- that's O.K., none of them is the standard.  The rest of the world
can just ignore that release once it is determined that the person has
used a completely hairbrained filesystem structure, for example.  And,
if a significant minority *want* that hairbrained strucutre, they can
use that release; it's their freedom of choice.

On the other hand, even if we did annoint a single standards document as
The Only Way To Do Linux ---- and this is assuming that (1) we had the
authority to do this and (2) we could actually come to a consensus ---
there is a significant chance that the standard may be broken, in which
case everybody loses.  How could this happen, if we had come to a
consensus, I hear you ask?  Because all it takes is a vocal minority to
continually push their *bad* ideas, and what will happen is that the
reasonable people will, after a while, give up and go away in disgust.
At the very least, I would be very resentful of the fact that I had to
spend all of my time fighting bad ideas instead of doing something
useful, like doing some Linux development.  I'd rather the bad ideas die
in the marketplace.

                                                - Ted

From: pmacdona@sanjuan (Peter MacDonald)
Subject: Re: Linux Standards (was: Stabilizing Linux)
Date: Sun, 16 Aug 92 23:49:35 GMT

In article <1992Aug16.221736.9732@athena.mit.edu> 
tytso@ATHENA.MIT.EDU (Theodore Ts'o) writes:
>
>   From: danielce@mullian.ee.mu.OZ.AU (Daniel AMP Carosone)
>   Date: Sun, 16 Aug 1992 00:47:18 GMT
>
>   Even ignoring the factions and parties building releases, it is an
>   excellent idea to have a standards document to which releases must
>   conform rather than a release to which the standard must conform.  
>
>I disagree.  Most usuable standards in the world come about by adopting
>an already working implementation and declaring it to be a standard.
>Most unworkable standards in the world happen because they were designed
>by a committee, which typically consist of pompous people who just sit
>around a table, and who are not, generally, the people who would
>actually be doing the implementation.
>
>If you want an example of this, just look at the Internet --- the
>Internet "standards" were first done by having working implementations,
>which were then annointed as the standard.  In contrast, you have the
>OSI standards --- which were designed by committee without having any
>implementation experience --- and what you end up with is a disaster.


Oh yes, and don't forget X400.

...
>
>However, without this, you have to face the fact that you will have a
>non-trivial amount of people with relatively little Unix system
>experience that will be constantly spouting off and you either have to
>outright ignore them --- which doesn't tend to go over well if you want
>to at least have the pretense of democracy --- or you have to spend a
>lot of time and energy teaching them why they are broken.

I've got a great idea.  Lets develop a modified mailing list that is split
into 10 subgroups: linux-activists0 through linux-activists9.  Each person 
could subscribe to any *one* of them that they like, but here is the catch.
Each subscriber would be graded, between 0 and 9, after subscription, based
upon the amount they have contributed to the Linux effort.  And (Oh, I love 
this part), we could have a committee that would decide and revise that grade, 
ya.  Linus, I assume, would get an automatic 9.   But maybe the committee 
would decide that that was inappropriate. ;-)

Now, whatever level a user subscribed to, he would only receive postings from
users >= that grade.  ie. subscribing to linux-activists1, would automatically
filter out all the zero's (and are there ever a chestfull of them out there
to filter).  

>
>                                               - Ted

Peter.

Newsgroups: comp.os.linux
Path: sparky!uunet!munnari.oz.au!trlluna!titan!medici!mcf
From: m...@medici.trl.OZ.AU (Michael Flower)
Subject: Re: Linux Standards (was: Stabilizing Linux)
Message-ID: <1992Aug18.070709.16015@trl.oz.au>
Sender: r...@trl.oz.au (System PRIVILEGED Account)
Organization: Telecom Research Labs, Melbourne, Australia
References: <1992Aug16.073340.11418@klaava.Helsinki.FI>
Date: Tue, 18 Aug 1992 07:07:09 GMT
Lines: 20

From article <1992Aug16.073340.11...@klaava.Helsinki.FI>, 
by torva...@klaava.Helsinki.FI (Linus Benedict Torvalds):

> ......
> overreaction to the price I had to pay for minix ($169 may not be much,
> but it's still more than I can afford: I'm still paying monthly
> installments on my machine) and (b) protection: linux wasn't well-known
> .....

Ok guys, what say someone organises a whip around to pay off this beast.
It seems to me that there are an awful lot of people around that are
using Linux, and getting a lot of fun and interest out of it, and for a
small trouble we could pay off the machine, and allow Linus's mind
fuller scope to spend on the problem in hand rather than the monthly payments
on the box. Perhaps as a gesture of saying 'thanx'. I realise that there are many
other people that also contribute, but after all, Linus started this game.

Michael Flower
Artificial Intelligence Systems Email: m.flo...@trl.oz.au
Telecom Research Laboratories Voice: +61 3 541 6179
Melbourne, AUSTRALIA Fax: +61 3 543 8863

Newsgroups: comp.os.linux
Path: sparky!uunet!charon.amdahl.com!pacbell.com!mips!sdd.hp.com!
zaphod.mps.ohio-state.edu!news.acns.nwu.edu!casbah.acns.nwu.edu!hpa
From: h...@casbah.acns.nwu.edu (H. Peter Anvin N9ITP)
Subject: Re: Paying off Linus' PC (was: Linux Standards)
Message-ID: <1992Aug19.162616.3357@news.acns.nwu.edu>
Sender: use...@news.acns.nwu.edu (Usenet on news.acns)
Reply-To: h...@nwu.edu (H. Peter Anvin)
Organization: You must be kidding!
References: <1992Aug16.073340.11418@klaava.Helsinki.FI> 
<1992Aug18.070709.16015@trl.oz.au>
Date: Wed, 19 Aug 1992 16:26:16 GMT
Lines: 49

In article <1992Aug18.070709.16...@trl.oz.au> of comp.os.linux,
m...@medici.trl.OZ.AU (Michael Flower) writes:
> From article <1992Aug16.073340.11...@klaava.Helsinki.FI>, 
by torva...@klaava.Helsinki.FI (Linus Benedict Torvalds):
> 
> > ......
> > overreaction to the price I had to pay for minix ($169 may not be much,
> > but it's still more than I can afford: I'm still paying monthly
> > installments on my machine) and (b) protection: linux wasn't well-known
> > .....
> 
> Ok guys, what say someone organises a whip around to pay off this beast.
> It seems to me that there are an awful lot of people around that are
> using Linux, and getting a lot of fun and interest out of it, and for a
> small trouble we could pay off the machine, and allow Linus's mind
> fuller scope to spend on the problem in hand rather than the monthly payments
> on the box. Perhaps as a gesture of saying 'thanx'. I realise that there
> are many 
> other people that also contribute, but after all, Linus started this game.
> 

Okay people, don't you think this is fair?

Since I have fixed my address problem, I hereby volunteer to collect money
from Linuxers in the USA. Send checks to:

Linus collection
c/o Peter Anvin
EECS department
Tech Institute rm 2657
2145 Sheridan Road
Evanston, IL 60208-3118

Please make checks out to me (Peter Anvin) so I can transfer them. I
*will* transfer all money I get, minus bank transfer charges, to Linus, no
matter how much or how little I get. If there is anything over when he has
paid off his 'puter I suggest he buys himself some nice hardware to soup up
his system, or just have some fun.

Also, please put your e-mail address on the checks so I can send you a
confirmation. Once I have sent the money I'll post a breakdown here so
everyone can check that I have sent the right amount.

/hpa

-- 
INTERNET: h...@nwu.edu TALK: h...@casbah.acns.nwu.edu
BITNET: HPA@NUACC IBMNET: 16334@IBMX400
HAM RADIO: N9ITP NeXTMAIL: h...@lenny.acns.nwu.edu
This is a test of the emergency USENET system.

Path: sparky!uunet!mcsun!news.funet.fi!hydra!klaava!torvalds
From: torva...@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.linux
Subject: Re: Paying off Linus' PC (was: Linux Standards)
Message-ID: <1992Aug21.065950.8463@klaava.Helsinki.FI>
Date: 21 Aug 92 06:59:50 GMT
References: <1992Aug16.073340.11418@klaava.Helsinki.FI> 
<1992Aug18.070709.16015@trl.oz.au> <1992Aug19.162616.3357@news.acns.nwu.edu>
Organization: University of Helsinki
Lines: 39

I hate to follow up to a thread that might actually be profitable for
me, but I felt I had to clarify a few points - especially if there are
new linux users out there reading the thread. 

If people feel they want to send me money (indirectly or directly) as a
token of appreciation, that's very much ok by me (surprise, surprise),
but a token of appreciation is all it is going to be. Yes, I'll be able
to pay off my machine more quickly or even get a bigger harddisk or
whatever, but sending me money won't get you any better service - this
is definitely not a "registration fee" or anything like that. 

The above just means that (a) even if you don't send any money I won't
mind in the least that you use linux, and when I answer questions etc I
won't check if you sent me money first. And (b) even if you sent me
money, any features you propose/want will get no extra priority. In
fact, trying to make me feel guilty over money ("after all, I paid you
$20 for this") is likely to get the exact opposite reaction.

Finally, I won't give any guarantees of what the money will be used for:
if you add some kind of message giving preferences ("use it to pay off
your computer"), I'll naturally follow them within reasonable limits,
but I might just use them to pay off my "beer&pizza"-debts (*), which
might be against your religion or whatever. 

So, the result of all this? If somebody thought I was despairing about
my monthly payments, that's not true, and frankly, I'll get the computer
paid off even if nobody sends me a cent, even if it might take me a bit
longer. Also, there are others that have contributed to linux, and I
won't give them anything (not because I'm a selfish bastard, but simply
due to practical reasons).

If any of the above reasons made you decide I don't really need the
money, I just ask you not to mail me about it. I /don't/ want to know
about any money I might have gotten, but didn't.

Linus

(*) Yes, beer is reasonably costly over here in Finland, but no, my
debts aren't really that big. Quite small, in fact, considering..

Newsgroups: comp.os.linux
Path: sparky!uunet!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!
news.acns.nwu.edu!casbah.acns.nwu.edu!hpa
From: h...@casbah.acns.nwu.edu (H. Peter Anvin N9ITP)
Subject: Re: Paying off Linus' PC (was: Linux Standards)
Message-ID: <1992Aug21.164837.25745@news.acns.nwu.edu>
Sender: use...@news.acns.nwu.edu (Usenet on news.acns)
Reply-To: h...@nwu.edu (H. Peter Anvin)
Organization: You must be kidding!
References: <1992Aug18.070709.16015@trl.oz.au> 
<1992Aug19.162616.3357@news.acns.nwu.edu> <1992Aug21.065950.8463@klaava.Helsinki.FI>
Date: Fri, 21 Aug 1992 16:48:37 GMT
Lines: 40

In article <1992Aug21.065950.8...@klaava.Helsinki.FI> of comp.os.linux,
torva...@klaava.Helsinki.FI (Linus Benedict Torvalds) writes:
> I hate to follow up to a thread that might actually be profitable for
> me, but I felt I had to clarify a few points - especially if there are
> new linux users out there reading the thread. 
> 
> If people feel they want to send me money (indirectly or directly) as a
> token of appreciation, that's very much ok by me (surprise, surprise),
> but a token of appreciation is all it is going to be. Yes, I'll be able
> to pay off my machine more quickly or even get a bigger harddisk or
> whatever, but sending me money won't get you any better service - this
> is definitely not a "registration fee" or anything like that. 
> 
> The above just means that (a) even if you don't send any money I won't
> mind in the least that you use linux, and when I answer questions etc I
> won't check if you sent me money first. And (b) even if you sent me
> money, any features you propose/want will get no extra priority. In
> fact, trying to make me feel guilty over money ("after all, I paid you
> $20 for this") is likely to get the exact opposite reaction.
> 
> Finally, I won't give any guarantees of what the money will be used for:
> if you add some kind of message giving preferences ("use it to pay off
> your computer"), I'll naturally follow them within reasonable limits,
> but I might just use them to pay off my "beer&pizza"-debts (*), which
> might be against your religion or whatever. 

I am *glad* to hear this... and as I said before, I am *not* going to ask
*at all* what the money is being used for. Still, being a student and
being from Scandinavia I know it is not that easy all the time to stay
liquid. In short, DON'T SEND MONEY IF YOU EXPECT ANYTHING IN RETURN. I
would refuse to have anything to do with this if Linus were to start
treating Linux like shareware.

/hpa

-- 
INTERNET: h...@nwu.edu TALK: h...@casbah.acns.nwu.edu
BITNET: HPA@NUACC IBMNET: 16334@IBMX400
HAM RADIO: N9ITP NeXTMAIL: h...@lenny.acns.nwu.edu
while ( 1 ) ; cp /dev/zero /dev/null & end

			        About USENET

USENET (Users’ Network) was a bulletin board shared among many computer
systems around the world. USENET was a logical network, sitting on top
of several physical networks, among them UUCP, BLICN, BERKNET, X.25, and
the ARPANET. Sites on USENET included many universities, private companies
and research organizations. See USENET Archives.

		       SCO Files Lawsuit Against IBM

March 7, 2003 - The SCO Group filed legal action against IBM in the State 
Court of Utah for trade secrets misappropriation, tortious interference, 
unfair competition and breach of contract. The complaint alleges that IBM 
made concentrated efforts to improperly destroy the economic value of 
UNIX, particularly UNIX on Intel, to benefit IBM's Linux services 
business. See SCO v IBM.

The materials and information included in this website may only be used
for purposes such as criticism, review, private study, scholarship, or
research.

Electronic mail:			       WorldWideWeb:
   tech-insider@outlook.com			  http://tech-insider.org/