Date: Sat, 4 Aug 90 08:47:01 -0400
From: mlitt...@breeze.bellcore.com (Michael L. Littman)
Subject: Risks of de facto standards

Richard Stallman of the Free Software Foundation (the GNU folks) recently
announced on the gnu.announce mailing list that the Lempel-Ziv-Welch algorithm
on which "compress", "uncompress", and "zcat" are based may be covered by a
patent assigned to Unisys.  Unisys claims that people should not be running
these programs without its permission.
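
(For those who have never looked at it, LZW is easy to describe: the coder
builds a string table on the fly and emits a single code for each progressively
longer repeated substring.  The fragment below is only a minimal Python sketch
of that idea, not the compress(1) source; the real program also packs codes of
growing bit width and resets its table, details omitted here.)

    def lzw_compress(data: bytes) -> list[int]:
        """Minimal LZW: emit integer codes, no bit packing."""
        table = {bytes([i]): i for i in range(256)}   # codes 0..255 = single bytes
        next_code = 256
        w = b""
        codes = []
        for b in data:
            wc = w + bytes([b])
            if wc in table:
                w = wc                     # keep extending the current match
            else:
                codes.append(table[w])     # emit code for longest known prefix
                table[wc] = next_code      # learn the new string
                next_code += 1
                w = bytes([b])
        if w:
            codes.append(table[w])
        return codes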

Since "compress" is the de facto standard method for moving big files across
the net cheaply (and I believe for high speed modems as well), this could
create some serious problems.  For one thing, if people stop using compress it
could really put a strain on the network.

On the other hand, sending people compressed files could put them in the
potentially legally precarious position of running uncompress without
permission.  This is the position the GNU folks seem to be taking.  They will
either find a new data compression algorithm or send around uncompressed files
(as soon as they can find the disk space to store the uncompressed tar files!)
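
(The decoding side is no less entangled: an LZW decoder rebuilds exactly the
same string table as it reads codes, which is presumably why merely running
"uncompress" is thought to raise the same question.  A matching minimal sketch
of the decoder, again ignoring compress(1)'s bit packing and table resets:)

    def lzw_decompress(codes: list[int]) -> bytes:
        """Minimal LZW decoder matching the sketch above."""
        if not codes:
            return b""
        table = {i: bytes([i]) for i in range(256)}
        next_code = 256
        w = table[codes[0]]
        out = bytearray(w)
        for code in codes[1:]:
            if code in table:
                entry = table[code]
            elif code == next_code:
                entry = w + w[:1]          # the one code not yet in the table
            else:
                raise ValueError("corrupt code stream")
            out += entry
            table[next_code] = w + entry[:1]   # decoder learns the same strings
            next_code += 1
            w = entry
        return bytes(out)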

If the POSIX committee is not able to license the algorithm, they will drop
these utilities from the next draft of the standards (according to a draft of
the POSIX user portability extension standard P1003.2a).

Depending on your political orientation, this can be viewed as a RISK of
software patents or a RISK of dependence on a de facto standard.  In either
case, life may be a little tougher without "compress".

Michael L. Littman [MRE 2L-331 x-5155]   mlitt...@breeze.bellcore.com

Date: Sun, 12 Aug 90 13:21:35 EDT
From: cos...@BBN.COM
Subject: Re: Risks of de facto standards

}Since "compress" is the de facto standard method for moving big files across
}the net cheaply ...
} They will
}either find a new data compression algorithm or send around uncompressed files
}(as soon as they can find the disk space to store the uncompressed tar files!)

Of course, sending around uncompressed files is unbelievably idiotic.  That
WOULD be consistent with the general FSF philosophy, which apparently is to
avoid innovation at all costs and restrict their activities to implementing
other people's ideas.

Plain and simple, there are zillions of compression schemes about.  'compress'
is hardly the best of them, although it is quite good.  Its popularity is more
accidental than a matter of any real technical necessity [a
questionably-public-domain implementation 'made the rounds', and it *IS* better
than the adaptive Huffman coding compression that was previously being used, so
it kind of 'snuck in'].  Few people using compress have any intellectual or
technical investment in it; in fact, few have any clue what the algorithm even
IS.  If it were changed to something else tomorrow, almost no one would know or
care.

  /Bernie\

Date: Wed, 15 Aug 90 13:01:51 EDT
From: david...@crdos1.crd.ge.com
Subject: Re: Risks of de facto standards

  If the algorithm in compress were changed tomorrow, every person who ever
used the old one would be unable to recover the data from the compressed form.
I think that's a far cry from "almost no one would know or care."

  More important, the performance of compress (bytes/cpu-sec) is very good
compared to the other available programs. I ran a test on this (for other
reasons), and found that compress is a factor of four faster (CPU) than any of
the other compressors. It is not by any stretch the best in terms of
compression, but an increase that large in time to compress news batches would
make news impractical on many machines.

  Here's a subset of the test results, for a typical news batch (text).
Times are in sec, measured by the kernel, on a 25MHz 386 running V.3.2.
Note that the size for the archivers includes a directory.

			CPU		final		COMMENTS
Program			sec		size	(original 56718 bytes)

compress		0.78		25486
zoo			1.96		28178	archiver
arc			2.84		29284	archiver (w/ "squash")
zip v1.02		3.76		21031	archiver, run under MSDOS
lharc v2 (beta)		6.93		20602	archiver, run under MSDOS
lharc v1		7.12		22952	archiver
lzhuf			7.64		22918


  Hope that sheds some light on the discussion. There does not seem to
be anything as fast currently available (to me).
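
  Anyone who wants to repeat this kind of measurement on their own data can
script it in a few lines; the numbers above came from timing the actual
programs, but the shape of the test is just "time the compression, record the
output size."  A small Python sketch of that shape, using the standard zlib
module purely as a stand-in compressor (not one of the programs in the table):

    import sys, time, zlib

    def bench(label, compress_fn, data):
        """Report CPU seconds and compressed size for one compressor."""
        t0 = time.process_time()
        out = compress_fn(data)
        cpu = time.process_time() - t0
        print(f"{label:16s} {cpu:6.2f} CPU sec   {len(out):8d} bytes "
              f"(original {len(data)})")

    if __name__ == "__main__":
        data = open(sys.argv[1], "rb").read()        # e.g. a news batch
        bench("zlib level 1", lambda d: zlib.compress(d, 1), data)
        bench("zlib level 9", lambda d: zlib.compress(d, 9), data)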

bill davidsen	(david...@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)

Date: Wed, 15 Aug 90 09:24:15 EDT
From: SILL D E <d...@stc06.ctd.ornl.gov>
Subject: Re: Risks of de facto standards

In fact, the FSF's raison d'etre is to encourage innovation by making it
unnecessary for programmers to write code that's already been written.  The GNU
project is in a drudgery phase right now since they *are* having to rewrite
much existing code.  At least these programs are being improved as they're
being rewritten.  GNU Tar, for example, does incremental backups.  Their most
successful product, GNU Emacs, was the original idea of the FSF's founder,
Richard Stallman.

>Few people using
>compress have any intellectual or technical investment in it: in fact, few have
>any clue what the algorithm even IS: if it were changed to something else
>tomorrow almost no one would know or care.

Not true.  Although the LZW compression algorithm is transparent to users of
compress, as it should be, files compressed using it couldn't be uncompressed
by a replacement program.  The existing base of compressed files in public
archives and private systems combined with the nearly ubiquitous presence of
compress, uncompress, and zcat on today's UNIX systems would make a switch to
an alternative method far from easy, fast, or transparent.
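
One concrete illustration: files written by compress announce themselves with a
two-byte magic number (0x1f 0x9d), so any replacement would at a minimum have
to recognize that header and keep an LZW decoder around indefinitely just to
read the existing archives.  A sketch of the check (the magic bytes are the
documented .Z format; the decoder names below are hypothetical):

    def is_compress_file(path: str) -> bool:
        """True if the file begins with the compress(1) .Z magic, 0x1f 0x9d."""
        with open(path, "rb") as f:
            return f.read(2) == b"\x1f\x9d"

    # A hypothetical replacement tool would still need the old decoder:
    #
    #   if is_compress_file(name):
    #       data = legacy_lzw_decode(open(name, "rb").read())  # old .Z files
    #   else:
    #       data = new_scheme_decode(open(name, "rb").read())  # its successor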

Dave Sill (d...@ornl.gov)		These are my opinions.
Martin Marietta Energy Systems, Workstation Support

   [Also commented upon by Jay Plett <j...@silence.princeton.nj.us>]