Path: gmdzi!unido!mcsun!uunet!usenix!std-unix
From: j...@usenix.org
Newsgroups: comp.std.unix
Subject: Standards Update, Recent Standards Activities
Message-ID: <387@usenix.ORG>
Date: 30 Jun 90 02:42:08 GMT
Sender: std-u...@usenix.ORG
Reply-To: std-u...@uunet.uu.net
Lines: 683
Approved: jsq@usenix (Moderator, John Quarterman)
Posted: Sat Jun 30 03:42:08 1990

From: <j...@usenix.org>


           An Update on UNIX*-Related Standards Activities

                              June, 1990

                 USENIX Standards Watchdog Committee

          Jeffrey S. Haemer, j...@ico.isc.com, Report Editor

Recent Standards Activities

This editorial is an overview of some of the spring-quarter standards
activities covered by the USENIX Standards Watchdog Committee.  A
companion article provides a general overview of the committee itself.

In this article, I've emphasized non-technical issues, which are
unlikely to appear in official minutes and mailings of the standards
committees.  Previously published articles give more detailed, more
technical views on most of these groups' activities.  If my comments
move you to read one of those earlier reports that you wouldn't have
read otherwise, I've served my purpose.  Of course, on reading that
report you may discover the watchdog's opinion differs completely from
mine.

SEC: Standard/Sponsor Executive Committee

The biggest hullabaloo in the POSIX world this quarter came out of the
SEC, the group that approves creation of new committees.  At the April
meeting, in a move to slow the uncontrolled proliferation of POSIX
standards, the institutional representatives, or IRs (one each from
USENIX, UniForum, X/Open, OSF, and UI), recommended two changes in
the Project Authorization Request (PAR) approval process: (1) firm
criteria for PAR approval and group persistence and (2) a PAR-approval
group that had no working-group chairs or co-chairs.  Dale Harris, of
IBM Austin, presented the proposal and immediately took a lot of heat
from the attendees, most of whom are working-group chairs and
co-chairs.  (Dale isn't an IR, but he shared the concerns that
motivated the recommendations and asked to make the presentation.)

The chair, Jim Isaak, created an ad-hoc committee to talk over the
proposal in a less emotional atmosphere.  Consensus when the committee
met was that the problem of proliferating PARs was real, and the only
question was how to fix it.  The group put together a formal set of
criteria for PAR approval (which John Quarterman has posted to
comp.std.unix); it seems to have satisfied everyone on the SEC, and
it passed without issue.  The criteria seem to have teeth: at least
one
of the Project Authorization Requests presented later (1201.3, UIMS)
flunked the criteria and was rejected.  Two others (1201.1 and
1201.4, toolkits and Xlib) were deferred.  I suspect (though I doubt
anyone would admit it) that in the absence of the criteria those
proposals would have been submitted and passed.  In another related
up-note, Tim Baker and Jim Isaak drafted a letter to one group (1224,
X.400 API), warning them that they must either prove they're working
or dissolve.

The second of the two suggestions, the creation of a PAR-approval
subcommittee, sank quietly.  The issue will stay submerged as long as
it looks like the SEC is actually using the approved criteria to fix
the problem.  [Actually, this may not be true.  Watch for
developments at the next meeting, in Danvers, MA, in mid-July.  -jsq]

Shane McCarron's column in the July Unix Review covers this area in
more detail.

__________

  * UNIX is a Registered Trademark of UNIX System Laboratories in the
    United States and other countries.

1003.0: POSIX Guide

Those of you who have read my last two columns will know that I've
taken the position that dot zero is valuable, even if it doesn't get a
lot of measurable work done.  This time, I have to say it looks like
it's also making measurable progress, and may go to mock ballot by its
target of the fourth quarter of this year.  To me, the most
interesting dot-zero-related items this quarter are the growing
prominence of profiles and the mention of dot zero's work in the
PAR-approval criteria passed by the SEC.

Al Hankinson, the chair, tells me that he thinks dot zero's biggest
contribution has been popularizing profiles -- basically,
application-area-specific lists of pointers to other standards.  This
organizing principle has been adopted not only by the SEC (several of
the POSIX groups are writing profiles), but by NIST (Al's from NIST)
and ISO.  I suspect a lot of other important organizations will fall
in line here.

Nestled among the other criteria for PAR approval is a requirement
that PAR proposers write a sample description of their group for the
POSIX guide.  Someone questioned why proposers should have to do dot
zero's job for them.  The explanation comes in two pieces.  First,
dot zero doesn't have the resources to be an expert on everything; it
has its hands full just trying to create an overall architecture.
Second, the proposers aren't supplying what will ultimately go into
the POSIX guide; they're supplying a sample.  The act of drafting
that sample will force each proposer to think hard, right from the
start, about where the new group would fit in the grand scheme.  This
should help ensure that the guide's architecture really does reflect
the rest of the POSIX effort, and it will increase the other groups'
interest in the details of the guide.

1003.1: System services interface

Dot one, the only group that has completed a standard, is in the
throes of completing a second.  Not only has the IEEE updated the
existing standard -- the new version will be IEEE 1003.1-1990 -- but
ISO appears to be on the verge of approving it as IS 9945-1.  The
major sticking points currently seem limited to things like format and
layout -- important in the bureaucratic world of international
standards, but inconsequential to the average user.  Speaking of
layout, one wonders whether the new edition and ISO versions will
retain the yellow-green cover that has given the current document its
common name -- the ugly green book.  (I've thought about soaking mine
in Aqua Velva so it can smell like Green Chartreuse, too.)

The interesting issues in the group are raised by the dot-one-b work,
which adds new functionality.  (Read Paul Rabin's snitch report for
the gory details.) The thorniest problem is the messaging work.
Messaging, here, means a mechanism for access to external text and is
unrelated to msgget(), msgop(), msgctl(), or any other message-passing
schemes.  The problem being addressed is how to move all printable
strings out of our programs and into external ``message'' files, so
that we can change program output from, say, English to German by
changing an environment variable.  Other dot-one-b topics, like
symbolic links, are interesting but less pervasive.  This one will
change the way you write any commercial product that outputs text --
anything with printf() calls in it.

The group is in a quandary.  X/Open has a scheme that has gotten a
little use.  We're not talking three or four years of shake-out, here,
but enough use to lay a claim to the ``existing practice'' label.  On
the other hand, it isn't a very pleasant scheme, and you'd have no
problem coming up with candidate alternatives.  The UniForum
Internationalization Technical Committee presented one at the April
meeting.  It's rumored that X/Open itself may replace its current
scheme with another.  So, what to do?  Changing to a new scheme
ignores existing internationalized applications and codifies an
untried approach.  Blessing the current X/Open scheme freezes
evolution at this early stage and kills any motivation to develop an
easy-to-use alternative.  Not providing any standard makes
internationalized applications (in a couple of years this will mean
any non-throw-away program) non-portable, and requires that we keep
making heavy source-code modifications on every port -- just what
POSIX is supposed to help us get around.

To help you think about the problem, here's how you'd have to write
the ``hello, world'' koan using the X/Open interfaces:

  #include <stdio.h>
  #include <locale.h>
  #include <nl_types.h>

  int main(void)
  {
          nl_catd catd;

          (void)setlocale(LC_ALL, "");
          catd = catopen("hello", 0); /* error checking omitted for brevity */
          printf("%s", catgets(catd, 1, 1, "hello, world\n"));
          (void)catclose(catd);
          return 0;
  }

and using the alternative, proposed UniForum interfaces:

  #include <stdio.h>
  #include <locale.h>

  int main(void)
  {
          (void)setlocale(LC_ALL, "");
          (void)textdomain("hello");
          printf("%s", gettext("hello, world\n"));
          return 0;
  }

I suppose if I had my druthers, I'd like to see a standard interface
that goes even farther than the UniForum proposal: one that adds a
default message catalogue/group (perhaps based on the name of the
program) and a standard, printf-family messaging function to hide the
explicit gettext() call, so the program could look like this:

  #include <stdio.h>
  #include <locale.h>
  #define printf printmsg
  int main(void)
  {
          (void)setlocale(LC_ALL, ""); /* inescapable, required by ANSI C */
          printf("hello, world\n");
          return 0;
  }

but that would still be untested innovation.

The weather conditions in Colorado have made this a bonus year for
moths.  Every morning, our bathroom has about forty moths in it.
Stuck in our house, wanting desperately to get out, they fly toward
the only light that they can see and beat themselves to death on the
bathroom window.  I don't know what to tell them, either.

1003.2: Shell and utilities

Someone surprised me at the April meeting by asserting that 1003.2
might be an important next target for the FORTRAN binding group.
(``What does that mean?'' I asked stupidly.  ``A standard for a
FORTRAN-shell?'')  Perhaps you, like me, just think of dot two as
language-independent utilities.  Yes and no.

First, 1003.2 has over a dozen function calls (e.g., getopt()).  I
believe that most of these should be moved into 1003.1.  The
functions system() and popen(), which assume a shell, might be
exceptions, but having sections of standards documents point at
things outside their scope is not without precedent.  Section 8 of
P1003.1-1988 is a section of C-language extensions, and P1003.5 will
depend on the Ada standard.  Why shouldn't an optional section of dot
one depend on dot two?  Perhaps ISO, already committed to re-grouping
and re-numbering the standards, will fix this.  Perhaps not.  In the
meantime, there are functions in dot two that need FORTRAN and Ada
bindings.

Second, the current dot two standard specifies a C compiler.  Dot nine
has already helped dot two name the FORTRAN compiler, and may want to
help dot two add a FORTRAN equivalent of lint (which I've heard called
``flint'').  Dot five may want to provide analogous sorts of help
(though Ada compilers probably already subsume much of lint's
functionality).

Third, more subtle issues arise in providing a portable utilities
environment for programmers in other languages.  Numerical libraries,
like IMSL, are often kept as single, large source files with hundreds,
or even thousands, of routines in a single .f file that compiles into
a single .o file.  Traditional FORTRAN environments provide tools that
allow updating or extraction of single subroutines or functions from
such objects, analogous to the way ar can add or replace single
objects in libraries.  Dot nine may want to provide such a facility in
a FORTRAN binding to dot two.

Anyway, back to the working group.  They're preparing to go to ballot
on the UPE (1003.2a, User Portability Extensions).  The mock ballot
had pretty minimal return, with only ten balloters providing
approximately 500 objections.  Ten isn't very many, but mock ballot
for dot two classic only had twenty-three.  It seems that people won't
vote until they're forced to.

The collection of utilities in 1003.2a is fairly reasonable, with only
a few diversions from historic practice.  A big exception is ps(1),
where historic practice is so heterogeneous that a complete redesign
is possible.  Unfortunately, no strong logical thread links the
1003.2a commands together, so read the ballot with an eye toward
commands that should be added or discarded.

A few utilities have already disappeared since the last draft.  Pshar,
an implementation of shar with a lot of bells and whistles, is gone.

Compress/uncompress poses an interesting problem.  Though the utility
is based on clear-cut existing practice, the existing implementation
uses an algorithm that is copyrighted.  Unless the author chooses to
give the algorithm away (as Ritchie dedicated his set-uid patent to
public use), the committee is faced with a hard choice:

   - They can specify only the user interface.  But the purpose of
     these utilities is to ease the cost of file interchange.  What
     good are they without a standard data-interchange format?

   - They can invent a new algorithm.  Does it make sense to use
     something that isn't field-tested or consistent with the versions
     already out there?  (One assumes that the existing version has
     real advantages, otherwise, why would so many people use a
     copyrighted version?)

Expect both the first real ballot of 1003.2a and recirculation of
1003.2 around July.  Note that the recirculation will only let you
object to items changed since the last draft, for all the usual bad
reasons.

1003.3: Test methods

The first part of dot three's work is coming to real closure.  The
last ballot failed, but my guess is that one will pass soon, perhaps
as soon as the end of the year, and we will have a standard for
testing conformance to IEEE 1003.1-1988.

That isn't to say that all is rosy in dot-one testing.  NIST's POSIX
Conformance Test Suite (PCTS) still has plenty of problems:
misinterpretations of dot one; simple timing problems that cause
tests to run well on 3B2s but produce bad results on a 30-MIPS
machine; and even real bugs (attempts to read from a tty without
first opening it).  POSIX dot one is far more complex than anything
for which standard test suites have been developed to date.  The
PCTS, with around 2,600 tests and 150,000 lines of code, simply
reflects that complexity.  An update fixing all known problems will
be sent to the National Technical Information Service (NTIS -- also
part of the Department of Commerce, but not to be confused with NIST)
around the end of September; but with a suite this large, others are
likely to surface later.

By the way, NIST's dot one suite is a driver based on the System V
Verification Suite (SVVS), plus individual tests developed at NIST.
Work has begun on a suite of tests for 1003.2, based, for convenience,
on a suite done originally for IBM by Mindcraft.  It isn't clear how
quickly this work will go.  (For example, the suite can't gel until
dot two does.) For the dot one work, NIST made good use of Research
Associates -- people whose services were donated by their corporations
during the test suite development.  Corporations gain an opportunity
to collaborate with NIST and inside knowledge of the test suite.  I
suspect Roger Martin may now be seeking Research Associates for dot
two test suite development.  If you're interested in doing this kind
of work, want to spend some time working in the Washington, D.C. area,
and think your company would sponsor you, his email address is
rmar...@swe.ncsl.nist.gov.

By the way, there are a variety of organizational and numbering
changes happening in dot three.  See Doris Lebovits's snitch report
for details.

The Steering Committee on Conformance Testing (SCCT) is the group to
watch.  Though they've evolved out of the dot three effort, they
operate at the TCOS level, and are about to change the way POSIX
standards look.  In response to the ever-increasing burden placed on
the testing committee, the SCCT is going to recommend that groups
producing new standards include in those standards a list of test
assertions to be used in testing them.

Groups that are almost done, like 1003.2, will be grandfathered in.
But what should be done with a group like dot four -- not far enough
along to have something likely to pass soon, but far enough along
that adding major components to its ballot is a real problem?
Should this case be treated like language independence?  If so,
perhaps dot four will also be first in providing test assertions.

1003.4: Real-time extensions

The base dot-four document has gone to ballot, and the ensuing process
looks like it may be pretty bloody.  Fifty-seven percent of the group
voted against the current version.  (One member speculated privately
that this meant forty-three percent of the balloting group didn't read
it.) Twenty-two percent of the group (nearly half of those voting
against) subscribed to all or part of a common reference ballot, which
would require that entire chapters of the document be completely
reworked, replaced, or discarded.  Subscribers to this common
reference ballot included employees of Unix International and the Open
Software Foundation, of Carnegie-Mellon University and the University
of California at Berkeley, and of Sun Microsystems and Hewlett-
Packard.  (USENIX did not ballot similarly, but only because of lack
of time.) Some of these organizations have never before agreed on the
day of the week, let alone the semantics of system calls.  But then,
isn't bringing the industry together one goal of POSIX?

Still, the document has not been returned to the working group by the
technical editors, so we can assume they feel hopeful about resolving
all the objections.  Some of this hope may come from the miracle of
formality.  I've heard that over half of the common reference ballot
could be declared non-responsive, which means that there's no
obligation to address over half the concerns.

The threads work appears to enjoy a more positive consensus.  At least
two interesting alternatives to the current proposal surfaced at the
April meeting, but following a lot of discussion, the existing
proposal stood largely unchanged.  I predict that the threads work,
which will go to ballot after the base dot-four document, will be
approved before it.  John Gertwagen, dot-four snitch and chair of
UniForum's real-time technical committee, has bet me a beer that I'm
wrong.

1003.5: Ada bindings and 1003.9: FORTRAN-77 bindings

These groups are coming to the same place at the same time.  Both are
going to ballot and seem likely to pass quickly.  In each case, the
major focus is shifting from technical issues to the standards process
and its rules: forming balloting groups, relations with ISO, future
directions, and so on.

Here's your chance to do a good deed without much work.  Stop reading,
call someone you know who would be interested in these standards, and
give them the name of someone on the committee who can put them into
the balloting group.  (If nothing else, point them at our snitches for
this quarter: Jayne Baker c...@d74sun.mitre.org, for dot five, and
Michael Hannah mjha...@sandia.gov, for dot nine.) They'll get both a
chance to see the standard that's about to land on top of their work
and a chance to object to anything that's slipped into the standard
that doesn't make sense.  The more the merrier on this one, and they
don't have to go to any committee meetings.  I've already called a
couple of friends of mine at FORTRAN-oriented companies; both were
pleased to hear about 1003.9, and eager to read and comment on the
proposed standard.

Next up for both groups, after these standards pass, is negotiating
the IEEE standard through the shoals of ISO, both getting and staying
in sync with the various versions and updates of the base standard
(1003.1a, 1003.1b, and 9945-1), and language bindings to other
standards, like 1003.2 and 1003.4.  (See my earlier discussion of dot
two.) Notice that they also have the burden of tracking their own
language standards.  At least in the case of 1003.9, this probably
means eventually having to think about a binding to X3J3 (Fortran 90).

1003.6: Security

This group has filled the long-vacant post of technical editor, and,
so, is finally back in the standards business.  In any organization
whose ultimate product is to be a document, the technical editor is a
key person.  [We pause here to allow readers to make some obligatory
cheap shot about editors.] This is certainly the case in the POSIX
groups, where the technical editors sometimes actually write large
fractions of the final document, albeit under the direction of the
working group.

I'm about to post the dot six snitch report, and don't want to give
any of it away, but will note that it's strongly opinionated and
challenges readers to find any non-DoD use for Mandatory Access
Control, one of the half-dozen areas that they're standardizing.

1003.7: System administration

This group has to solve two problems at different levels at the same
time.  On the one hand, it's creating an object-oriented definition of
system administration.  This high-level approach encapsulates the
detailed implementation of objects interesting to the system
administrator (user, file system, etc.), so that everyone can see
them the same way in a heterogeneous environment.  On the other hand,
the protocol for sending messages to these objects must be specified
in detail.  If it isn't, manufacturers won't be able to create
interoperable systems.

The group as a whole continues to get complaints about its doing
research-by-committee.  It's not even pretending to standardize
existing practice.  I have mixed feelings about this, but am
unreservedly nervous that some of the solutions being contemplated
aren't even UNIX-like.  For example, the group has tentatively
proposed the unusual syntax ``object action'': command names will be
names of objects, and the things to be done to them will be
arguments.  This bothers me (and others) for two reasons.  First, it
confuses syntax with semantics.  You can have the message name first
and still be object-oriented; look at C++.  Second, it reverses the
traditional UNIX verb-noun arrangement: ``mount filesystem'' becomes
``filesystem mount''.
This flies in the face of the few existing practices everyone agrees
on.  I worry that these problems, and the resulting inconsistencies
between system administration commands and other utilities, will
confuse users.  I have a recurring nightmare of a long line of new
employees outside my door, all come to complain that I've forgotten to
mark one of my device objects, /dev/null, executable.

With no existing practice to provide a reality-check, the group faces
an uphill struggle.  If you're an object-oriented maven with a yen to
do something useful, take a look at what this group is doing, then
implement some of it and see if it makes sense.  Look at it this way:
by the time the standard becomes reality, you'll have a product, ready
to ship.

1003.10: Supercomputing

This group is working on things many of us old-timers thought we had
seen the last of: batch processing and checkpointing.  The
supercomputing community, condemned forever to live on the edge of
what computers can accomplish, is forced into the same approaches we
used back when computer cycles were harder to come by than programmer
cycles, and machines were less reliable than software.

Supercomputers run programs whose massive resource requirements (CPU,
memory, I/O) keep them from running on less powerful computers.  They
need batch processing and checkpointing because many of these
programs are so resource-intensive that they run for a long time even
on supercomputers.  Nevertheless, the supercomputing community is not
the only group that would benefit from standardization in these
areas.  (See, for example, my comments on dot fourteen.)  Even people
who have (or wish to have) long-running jobs on workstations share
some of the same needs for batch processing and checkpointing.

Karen Sheaffer, the chair of dot ten, had no trouble quickly recasting
the group's proposal for a batch PAR into a proposal that passed the
SEC's PAR-approval criteria.  The group is modeling a batch proposal
after existing practice, and things seem to be going smoothly.

Checkpointing, on the other hand, isn't faring as well.  People who
program supercomputers need to have a way to snapshot jobs in a way
that lets them restart the jobs at that point later.  Think, for
example, of a job that needs to run for longer than a machine's mean-
time-to-failure.  Or a job that runs for just a little longer than
your grant money lasts.  There are existing, proprietary schemes in
the supercomputing world, but none that's portable.  The consensus is
that a portable mechanism would be useful and that support for
checkpointing should be added to the dot one standard.  The group
brought a proposal to dot one b, but it was rejected for reasons
detailed in Paul Rabin's dot one report.  Indeed, the last I heard,
dot-one folks were suggesting that dot ten propose interfaces that
would be called from within the program to be checkpointed.  While
this may seem to the dot-one folks like the most practical approach,
it seems to me to be searching under the lamp-post for your keys
because that's where the light's brightest.  Users need to be able to
point to a job that's run longer than anticipated and say,
``Checkpoint this, please.'' Requiring source-code modification to
accomplish this is not only unrealistic, it's un-UNIX-like.  (A
helpful person looking over my shoulder has just pointed out that the
lawyers have declared ``UNIX'' an adjective, and I should say
something like ``un-UNIX-system-like'' instead.  He is, of course,
correct.) Whatever the interface is, it simply must provide a way to
let a user point at another process and say, ``Snapshot it,'' just as
we can stop a running job with job control.

1003.12: Protocol-independent interfaces

This group is still working on two separate interfaces to the network:
Simple Network Interface (SNI) and Detailed Network Interface (DNI).
The January meeting raised the possibility that the group would
coalesce these into a single scheme, but that scheme seems not to have
materialized.  DNI will provide a familiar socket- or XTI/TLI-like
interface to networks, while SNI will provide a simpler, stdio-like
interface for programs that don't need the level of control that DNI
will provide.  The challenge of SNI is to make something that's simple
but not so crippled that it's useless.  The challenge of DNI is to
negotiate the fine line between the two competing, existing practices.
The group has already decided not to use either sockets or XTI, and is
looking at requirements for the replacement.  Our snitch, Andy
Nicholson, challenged readers to find a reason not to make DNI
endpoints POSIX file descriptors, but has seen no takers.

1003.14: Multiprocessing

The multiprocessing group, which had been meeting as sort of an ad-hoc
spin-off of the real-time group, was given PAR approval at the April
meeting as 1003.16 but quickly renamed 1003.14 for administrative
reasons.  They're currently going through the standard set of jobs
that new groups have to accomplish, including figuring out what tasks
need to be accomplished, whom to delegate them to, and how to attract
enough working-group members to get everything done.  If you want to
get in on the ground floor of the multiprocessing standard, come to
Danvers and volunteer to do something.

One thing that needs to be done is liaison work with other committees,
many of which are attacking problems that bear on multiprocessors as
well.  One example is dot ten's checkpointing work, which I talked
about earlier.  Checkpointing is both of direct interest to dot
fourteen and analogous to several other problems the group would
like to address.  (A side-effect of the PAR proliferation problem
mentioned earlier is that inter-group coordination efforts go up as
the square of the number of groups.)

1201: Windows, sort of

Okay, as a review, we went into the Utah meeting with one official
group, 1201, and four unofficial groups preparing PARs:

  1.  1201.1: Application toolkit

  2.  1201.2: Recommended Practice for Driveability/User Portability

  3.  1201.3: User Interface Management Systems

  4.  1201.4: Xlib

By the end of the week, one PAR had been shot down (1201.3), one
approved (1201.2), and two remained unsubmitted.

The 1201.4 PAR was deferred because the X Consortium says Xlib is
about to change enough that we don't want to standardize the existing
version.  I'll ask, ``If it's still changing this fast, do we even
want to standardize the next version?''  The 1201.1 PAR was deferred
because the group hasn't agreed on what it wants to do.  At the
beginning of the week, the two major camps (OSF/Motif and OPEN LOOK)*
had agreed to try to merge the two interfaces.  By mid-week, they
wouldn't even sit at the same table.  That they'd struck off in an
alternative, compromise direction by the end of the week speaks
extremely highly of all involved.  What the group's looking at now is
a toolkit at the level of XVT**: a layer over all of the current,
competing technologies that would provide portability without
invalidating any existing applications.  This seems like just the
right approach.  (I have to say this because I suggested it in an
editorial about six months ago.)

The 1201.3 PAR was rejected.  Actually, 1201 as a whole voted not to
submit it, but the people working on it felt strongly enough that they
submitted it anyway.  The SEC's consensus was that the field wasn't
mature enough to warrant even a recommended practice, but the work
should continue, perhaps as a UniForum Technical Committee.  The study
group countered that it was important to set a standard before there
were competing technologies, and that none of the attendees'
sponsoring companies would be willing to foot the bill for the work
within anything but a standards body.  The arguments weren't
persuasive.

The 1201.2 PAR, in contrast, sailed through.  What's interesting about
this work is that it won't be an API standard.  A fair fraction of the
committee members are human-factors people, and the person presenting
the PAR convinced the SEC that there is now enough consensus in this
area that a standard is appropriate.  I'm willing to believe this, but
I think that stretching the net of the IEEE's Technical Committee on
Operating Systems so wide that it takes in a human-factors standard
for windowing systems is overreaching.

X3

There are other ANSI-accredited standards-sponsoring bodies in the
U.S. besides the IEEE.  The best known in our field is the Computer
and Business Equipment Manufacturers Association (CBEMA), which
sponsors the X3 efforts, recently including X3J11, the ANSI C standards
committee.  X3J11's job has wound down; Doug Gwyn tells me that
there's so little happening of general interest that it isn't worth a
report.  Still, there's plenty going on in the X3 world.  One example
is X3B11, which is developing a standard for file systems on optical
disks.  Though this seems specialized, Andrew Hume suggests in his
report that this work may eventually evolve into a standards effort
for file systems on any read-write mass storage device.  See the dot-
four common reference ballot for the kind of feelings new file-system
standards bring out.

__________

  * OSF/Motif is a Registered Trademark of the Open Software
    Foundation.
    OPEN LOOK is a Registered Trademark of AT&T.

 ** XVT is a trademark of XVT Software Inc.

I encourage anyone out there on an X3 committee who thinks the
committee could use more user exposure and input to file a report.
For example, Doug Gwyn suggests that there is enough activity in the
C++ standards world to merit a look.  If anyone out there wants to
volunteer a report, I'd love to see it.


Volume-Number: Volume 20, Number 66

Path: gmdzi!unido!mcsun!uunet!mailrus!cs.utexas.edu!longway!std-unix
From: j...@usenix.org (Jeffrey S. Haemer)
Newsgroups: comp.std.unix
Subject: correction (compression algorithm patents)
Message-ID: <787@longway.TIC.COM>
Date: 5 Jul 90 21:24:17 GMT
References: <387@usenix.ORG>
Sender: std-u...@longway.TIC.COM
Reply-To: std-u...@uunet.uu.net
Lines: 69
Approved: j...@longway.tic.com (Moderator, John S. Quarterman)
Posted: Thu Jul  5 22:24:17 1990

From:  j...@usenix.org (Jeffrey S. Haemer)

Five people have now brought to my attention that my recent
editorial says the compress/uncompress algorithm is copyrighted:
Dave Grindelman, Guy Harris, Keith Bostic, Randall Howard, and Hugh
Redelmeier.  That's wrong.  It isn't copyrighted; it's patented.  My
apologies to anyone I misled.

Randall's note contains a lot of interesting details worth posting,
and he's given me permission to post it.  I've appended it below.

Jeff

=====
[From Randall Howard]

    Actually, the problem is not that the compress algorithm is
copyrighted but that it is PATENTED by Welch (the "W" in the LZW name
of the algorithm).  The patent is currently held by Unisys
Corporation, which makes money from licence fees on that patent
because of the use of LZW encoding in the new high-speed modems.
Note that the Lempel-Ziv algorithm itself is apparently not patented,
only the Welch variant found in the UNIX compress utility.
Therefore, at the cost of inventing a new file-compression standard,
it would be possible to escape licence fees by using a different
variant of LZ compression.

	[Editor: Keith Bostic says both are patented:
	original Ziv-Lempel is patent number 4,464,650,
	and the more powerful LZW method is #4,558,302.
	He goes on to say, however, that LZW lacks adaptive table reset
	and other features in compress, and so may not apply.]
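	[Editor: for readers who have not seen the algorithm in
	question, here is a minimal, illustrative LZW coder and
	decoder in Python.  It is a sketch only -- not the compress(1)
	implementation, which adds variable-width output codes and the
	adaptive table reset mentioned above -- and the function names
	are mine.

```python
# Illustrative sketch of basic LZW coding; NOT the UNIX compress
# implementation (which uses variable-width codes and table reset).

def lzw_compress(data: bytes) -> list[int]:
    """Encode bytes as a list of integer codes using basic LZW."""
    table = {bytes([i]): i for i in range(256)}  # single-byte strings
    next_code = 256
    out = []
    w = b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # grow the current match
        else:
            out.append(table[w])      # emit code for longest match
            table[wc] = next_code     # add the new string to the table
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Invert lzw_compress, rebuilding the string table on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                         # code not yet in table: the one
            entry = w + w[:1]         # special case in LZW decoding
        out.append(entry)
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)
```

	Note how the decoder rebuilds the same string table from the
	codes alone; the coded output itself embodies the patented
	method, regardless of the program that produced it.]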

    The implications of this are that no one may produce the same
output as compress produces, regardless of the program that produced
that output, without being subject to the patent.  I.e., it is
independent of the actual coding used, unlike copyright.  Therefore,
all of the PD versions of compress are currently in violation, as is
BSD.

    Representatives of Unisys at the POSIX.2 meetings claimed that
the Unisys Legal Department is pursuing the licensing of compress.  In fact,
unlike copyright or trade secret protection, patent protection does not
diminish because the holder of the patent is not diligent in seeking damages
or preventing unauthorized use.  Witness the large royalty payout by
Japanese semiconductor companies to Texas Instruments, which held the
patent on the concept of something as fundamental as integrated
circuits.  This licence payout spans a period of over 20 years.  In
addition, Unisys representatives claim that Phil Katz's PKZIP, which
uses the LZW compression algorithm, is a licenced user of the Unisys
patent and that a fee (rumoured to be somewhere in the $10,000 to
$20,000 US range) has been paid up front in lieu of individual
royalties.

    The ramifications for POSIX.2a are unclear.  Currently, there are
members of the working group who say that they would object if a
patented algorithm were required by the standard and ANY FEE
WHATSOEVER (even $1) were required to use it.  (There are, however,
precedents for standards built on patented technology in areas such
as networking, modems, and hardware bus structures.  It appears that
we software people have not "grown up" as much when it comes to
issues of licensing.  Who has ever heard of "public domain
hardware"?)  Some people suggested that Unisys should allow
relatively free use of the patent but should profit from the
publicity of a citation in every POSIX.2a product manual that
contains compress.  Therefore, negotiations are currently underway
to see what kind of "special deal" Unisys would be willing to cut
for use strictly in implementations of POSIX.2a.  Depending on the
outcome of these negotiations, compress will either be dropped,
re-engineered, or retained.

Volume-Number: Volume 20, Number 101