Tech Insider					     Technology and Trends


			      USENET Archives

Path: gmdzi!unido!mcsun!sunic!uupsi!rpi!uwm.edu!lll-winken!decwrl!shelby!
portia!underdog
From: under...@portia.Stanford.EDU (Dwight Joe)
Newsgroups: comp.arch
Subject: Next computer (Re: CISC Silent Spring)
Message-ID: <8859@portia.Stanford.EDU>
Date: 6 Feb 90 02:23:37 GMT
Sender: Dwight Joe <under...@portia.stanford.edu>
Reply-To: under...@portia.Stanford.EDU (Dwight Joe)
Organization: Gigantor Institute of Applied Science
Lines: 15
Posted: Tue Feb  6 03:23:37 1990

My suspicions are confirmed.  The NEXT computer is in trouble.

NEXT can only be saved if Steve Jobs replaces the 680X0
with a RISC processor like the Sparc chip.  In all compute-
intensive applications, the Sparcstation I beats the NEXT
timewise.  Worse, the NEXT costs MORE than a Sparcstation I.

Too, the extra gadgetry (like the DSP chip) on the NEXT is
unlikely to be used by engineers doing compute-intensive
applications.  The DSP might help out in making
a realistic video game; otherwise, it's deadweight.
What difference does it make if you can play Beethoven's
Fifth on the NEXT?

I know.  Steve's going to upgrade the NEXT to a 68040.
Even then, the Sparc chip set is faster.

Path: gmdzi!unido!mcsun!uunet!pdn!oz!alan
From: a...@oz.nm.paradyne.com (Alan Lovejoy)
Newsgroups: comp.arch
Subject: Re: Next computer (Re: CISC Silent Spring)
Message-ID: <7341@pdn.paradyne.com>
Date: 6 Feb 90 15:38:06 GMT
References: <8859@portia.Stanford.EDU>
Sender: use...@pdn.paradyne.com
Reply-To: a...@oz.paradyne.com (Alan Lovejoy)
Organization: AT&T Paradyne, Largo, Florida
Lines: 41
Posted: Tue Feb  6 16:38:06 1990

In article <8...@portia.Stanford.EDU> under...@portia.Stanford.EDU 
(Dwight Joe) writes:
<My suspicions are confirmed.  The NEXT computer is in trouble.

Agreed.

<NEXT can only be saved if Steve Jobs replaces the 680X0
<with a RISC processor like the Sparc chip.  In all compute-
<intensive applications, the Sparcstation I beats the NEXT
<timewise.  Worse, the NEXT costs MORE than a Sparcstation I.

Well, he has lots of choices here.  I don't see why SPARC is to be
preferred, since it's just about the slowest of the RISC architectures.
Perhaps his cozy relationship with IBM will give him access to the
America (ROMPII) processor?  Of course, the completely unsubstantiated
rumors I have heard are that he's intending to go with the 88k.

<Too, the extra gadgetry (like the DSP chip) on the NEXT is
<unlikely to be used by engineers doing compute-intensive
<applications.  The DSP might help out in making
<a realistic video game; otherwise, it's deadweight.
<What difference does it make if you can play Beethoven's
<Fifth on the NEXT?

Perhaps his target market is not engineers?  Whatever.  It's clear
that his prices, machine capabilities and marketing strategy are not 
in harmony.

<I know.  Steve's going to upgrade the NEXT to a 68040.
<Even then, the Sparc chip set is faster.

How do you know that?  Have you benchmarked a 68040?  Motorola claims that
the 68040 is faster, especially for floating point.  While Motorola is 
certainly biased, they do have one advantage:  they can run benchmarks
on real 68040's.  Since independent benchmarks are not yet available, perhaps 
it would be best to desist from making unsupported claims?


____"Congress shall have the power to prohibit speech offensive to Congress"____
Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
Mottos:  << Many are cold, but few are frozen. >>     << Frigido, ergo sum. >>

Path: gmdzi!unido!mcsun!uunet!ns-mx!iowasp!deimos!ux1.cso.uiuc.edu!
brutus.cs.uiuc.edu!uakari.primate.wisc.edu!ames!sgi!da...@xtenk.sgi.com
From: da...@xtenk.sgi.com (David A Higgen)
Newsgroups: comp.arch
Subject: Re: Next computer (Re: CISC Silent Spring)
Message-ID: <49956@sgi.sgi.com>
Date: 7 Feb 90 07:06:54 GMT
References: <8859@portia.Stanford.EDU> <20571@watdragon.waterloo.edu>
Sender: da...@xtenk.sgi.com
Organization: Silicon Graphics, Inc., Mountain View, CA
Lines: 6
Posted: Wed Feb  7 08:06:54 1990

On a totally flippant note... after the XT in the IBM_PC world came,
of course, the AT. Now, if you apply the same substitution to NEXT, you
get... NEAT!! Can this be an accident??


				daveh

Path: gmdzi!unido!mcsun!uunet!aplcen!uakari.primate.wisc.edu!
zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!tut.cis.ohio-state.edu!
ucsd!helios.ee.lbl.gov!lbl-csam.arpa!antony
From: ant...@lbl-csam.arpa (Antony A. Courtney)
Newsgroups: comp.arch
Subject: Re: Next computer (Re: CISC Silent Spring)
Message-ID: <4791@helios.ee.lbl.gov>
Date: 7 Feb 90 10:15:50 GMT
References: <8859@portia.Stanford.EDU> <20571@watdragon.waterloo.edu> 
<49956@sgi.sgi.com>
Sender: use...@helios.ee.lbl.gov
Reply-To: ant...@lbl-csam.arpa (Antony A. Courtney)
Organization: Lawrence Berkeley Laboratory, Berkeley
Lines: 57
Posted: Wed Feb  7 11:15:50 1990
X-Local-Date: 7 Feb 90 02:15:50 PST

In article <49...@sgi.sgi.com> da...@xtenk.sgi.com (David A Higgen) writes:
>On a totally flippant note... after the XT in the IBM_PC world came,
>of course, the AT. Now, if you apply the same substitution to NEXT, you
>get... NEAT!! Can this be an accident??
>
>
>				daveh


More relevant to the discussion of the NeXT is probably the comparison of the
Lisa to the Mac.  The Lisa was slow, overpriced, and uncompetitive.  That
wasn't of much importance.  The machine was important because it was a machine
on which people at Apple could do R&D.  The Macintosh embodied the design
concepts of the Lisa, but it was very clear that the fundamental mistakes the
engineers made were not repeated in the Mac.  If you look at the NeXT as a Lisa
of sorts, then it is a very good machine.  

The NeXT is also good because it embodies certain ideas which are very
important.  Sure, the NeXT has a DSP whether you want it or not.  That means
EVERYONE has stereo sound.  Sure, everyone may not want it.  But history has
shown that the best way to bring technology to the people is by REQUIRING it.
May be fascist, but it works.  Just look at what a mess things were in for a
while when PCs didn't come standard with a mouse.  Designers couldn't ASSUME
the user had a mouse, and that made the overhead of application writing very
extreme.  And look at workstations:  Every Sun has a 19" screen.  Imagine if
there were other screens available?  Do you think everyone would have bought
a 19" screen?  No way.  And what would that have done to the development of
window systems?  I suspect it would have hampered it severely...

In general, the Lisa idea is a very important one.  I think it is a very sound
practice to design something new and exciting and then do it again from the
ground up, once the engineers involved have learned a few things.  I think the
best possible thing to do in the window system arena is to rm -rf
/usr/local/src/X and start over.  X is slow, clunky and it is a mess.  The
protocol is overly complex and there are several fundamental design errors.  I
think a lot of people recognize this.  But because everyone wants to
'standardize' X, there isn't any way to get away from it.  This same principle
is also what made UNIX so spiffy.  Researchers wrote Multics.  It sucked.  But
people learned an awful lot about what should and shouldn't be in an OS and
how to implement OSs.  Then people scrapped it and wrote UNIX based on things
which had been learned from previous OSs.  Imagine what the world would be like
if UNIX and any other technological developments in the OS arena had to conform
to a SMID--'Standard Multics Interface Definition'. :-)

I'll admit it isn't clear whether or not NeXT looks upon its current box as a
Lisa.  If so, I very much look forward to the next NeXT. :-)

		antony




--
*******************************************************************************
Antony A. Courtney				ant...@lbl.gov
Advanced Development Group			ucbvax!lbl-csam.arpa!antony
Lawrence Berkeley Laboratory			AACourt...@lbl.gov

Path: gmdzi!unido!mcsun!uunet!crdgw1!crdos1!davidsen
From: david...@crdos1.crd.ge.COM (Wm E Davidsen Jr)
Newsgroups: comp.arch
Subject: Re: Next computer (Re: CISC Silent Spring)
Message-ID: <2093@crdos1.crd.ge.COM>
Date: 7 Feb 90 13:37:52 GMT
References: <8859@portia.Stanford.EDU> <20571@watdragon.waterloo.edu> 
<49956@sgi.sgi.com> <4791@helios.ee.lbl.gov>
Reply-To: david...@crdos1.crd.ge.com (bill davidsen)
Organization: GE Corp R&D Center, Schenectady NY
Lines: 44
Posted: Wed Feb  7 14:37:52 1990

In article <4...@helios.ee.lbl.gov> ant...@lbl-csam.arpa (Antony A. Courtney) writes:

| May be fascist, but it works.  Just look at what a mess things were in for a
| while when PCs didn't come standard with a mouse.  Designers couldn't ASSUME
| the user had a mouse, and that made the overhead of application writing very
| extreme.  

  Whatever gave you the idea that PCs come standard with a mouse? We have
about 1200 and I would guess <300 have a mouse. It's an extra-cost
option on most systems.

|           And look at workstations:  Every Sun has a 19" screen.  Imagine if
| there were other screens available?  Do you think everyone would have bought
| a 19" screen?  No way.  And what would that have done to the development of
| window systems?  I suspect it would have hampered it severely...

  This will be a shock, but not all Suns come standard with a 19 inch
screen, or we have been getting optional small monitors ;-) Almost all
of our Sun/4s and Sparcstations have the 15 inch color screen.

|                                                            This same principle
| is also what made UNIX so spiffy.  Researchers wrote Multics.  It sucked.  But
| people learned an awful lot about what should and shouldn't be in an OS and
| how to implement OSs.  Then people scrapped it and wrote UNIX based on things
| which had been learned from previous OSs.  

  I suspect that you have never used Multics and don't recall that UNIX
was written because there was not enough access to Multics. UNIX is just
beginning to implement some of the ideas which have been working in
Multics for two decades, such as mapping files to memory.

  The only reason Multics is not where UNIX is today is that it was
developed by one company which didn't know how to sell computers, and
then the rights went to another. If Multics had been ported to minis and
micros as soon as the hardware would support it, a lot of people running
it on large machines would use it on everything.

  There was some negotiation to buy the Multics rights from Honeywell
and port it to the 386 (you really need those four levels of privilege),
but I was told that Honeywell was afraid that it would cut into the GCOS
market. That's too bad.
-- 
bill davidsen	(david...@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

Path: gmdzi!unido!mcsun!uunet!wuarchive!cs.utexas.edu!jarvis.csri.toronto.edu!
utgpu!utzoo!henry
From: he...@utzoo.uucp (Henry Spencer)
Newsgroups: comp.arch
Subject: the Multics from the black lagoon :-)
Message-ID: <1990Feb7.221800.804@utzoo.uucp>
Date: 7 Feb 90 22:18:00 GMT
References: <8859@portia.Stanford.EDU> <20571@watdragon.waterloo.edu> 
<49956@sgi.sgi.com> <4791@helios.ee.lbl.gov> <2093@crdos1.crd.ge.COM>
Organization: U of Toronto Zoology
Lines: 38
Posted: Wed Feb  7 23:18:00 1990

In article <2...@crdos1.crd.ge.COM> david...@crdos1.crd.ge.com (bill davidsen) 
writes:
>  I suspect that you have never used Multics and don't recall that UNIX
>was written because there was not enough access to Multics.

This is a rather drastic oversimplification.  Unix was not just "Multics
on a low budget", it was also "90% of the benefits at 10% of the cost".
Remember that Multics and OS/360 are the two classic examples of second-
system effect (overconfidence after a successful first system leads to
vast complexity and a union-of-all-wishlists approach on the second).
With the benefits of hindsight and a much more manageable system, Unix
ended up taking a lot of ideas much farther than Multics did.

>UNIX is just
>beginning to implement some of the ideas which have been working in
>Multics for two decades, such as mapping files to memory.

Gee, how could we ever have lived without that for two decades?  :-)
Maybe because we don't need it and it doesn't buy us very much?  Unix
evolution is now largely controlled by the marketdroids, who evaluate
systems by the length of the checklist of features.  All sorts of
silly and bizarre features are now crawling out of the woodwork and
burrowing into Unix as a result.  This does not necessarily mean said
features are good or desirable or even useful.

>  The only reason Multics is not where UNIX is today is that it was
>developed by one company which didn't know how to sell computers and
>then rights went to another...

I can think of several other reasons, actually, starting with Multics
being much larger, being much fussier about memory management and
such, and performing very poorly by comparison.  Unix did not suddenly
spring into its position of prominence when hardware reached current
levels -- it steadily grew into it through ability to run well on small
machines (most Unix machines were small until very recently) and ability
to port to almost anything.  Multics had no hope of ever copying that.
-- 
SVR4:  every feature you ever |     Henry Spencer at U of Toronto Zoology
wanted, and plenty you didn't.| uunet!attcan!utzoo!henry he...@zoo.toronto.edu

Path: gmdzi!unido!mcsun!uunet!ns-mx!iowasp!deimos!ux1.cso.uiuc.edu!
brutus.cs.uiuc.edu!zaphod.mps.ohio-state.edu!tut.cis.ohio-state.edu!
ucbvax!decwrl!shelby!portia!underdog
From: under...@portia.Stanford.EDU (Dwight Joe)
Newsgroups: comp.arch
Subject: Re: Next computer (Re: CISC Silent Spring)
Message-ID: <8905@portia.Stanford.EDU>
Date: 7 Feb 90 00:50:42 GMT
Sender: Dwight Joe <under...@portia.stanford.edu>
Reply-To: under...@portia.Stanford.EDU (Dwight Joe)
Organization: Gigantor Institute of Applied Science
Lines: 78
Posted: Wed Feb  7 01:50:42 1990

In article <7...@pdn.paradyne.com> a...@oz.paradyne.com (Alan Lovejoy) 
writes:
|
|<NEXT can only be saved if Steve Jobs replaces the 680X0
|<with a RISC processor like the Sparc chip.  In all compute-
|<intensive applications, the Sparcstation I beats the NEXT
|<timewise.  Worse, the NEXT costs MORE than a Sparcstation I.
|
|Well, he has lots of choices here.

Agreed.  But Sparc now has a huge software base that could
easily be modified to fit the I/O of the NEXT, which BADLY
needs software, if NEXT were to incorporate the SPARC chipset.

|<Too, the extra gadgetry (like the DSP chip) on the NEXT is
|<unlikely to be used by engineers doing compute-intensive
|<applications.  The DSP might help out in making
|<a realistic video game; otherwise, it's deadweight.
|<What difference does it make if you can play Beethoven's
|<Fifth on the NEXT?
|
|Perhaps his target market is not engineers?  Whatever.  It's clear
|that his prices, machine capabilities and marketing strategy are not 
|in harmony.
|

Ain't that the truth.  What I don't understand about NEXT is what
kind of major market it is supposed to be aimed at.  Some dude
mentioned that the world has musicians as well as engineers.  Fine.
But musicians seem like too small a market for the kind of
sales projection that Jobs made.  Then, Jobs points out that the NEXT
is great for word processing.  So what?  IBM/MAC can do a more than
adequate job.  Why spend close to 10K for a NEXT?

Jobs initially aimed his machine at higher education.  Now, he's
shooting for business.  I'll tell you why.  In the higher education
market, his machine loses twice:  high price and low performance.
People who do reports, write short stories, etc. aren't going to
shell out that kind of money for just word processing.  That covers
liberal arts.  Look at the other principal field:  the technoids.
Why would an engineering dept. pay close to 10K for a NEXT when 
the dept. can get a Sparcstation I for close to that price and
get an order of magnitude more number-crunching performance?

So, Jobs was squeezed out of the higher education market.  The
one thing that Jobs didn't count on back in mid 1980 was the
rise of the RISC machines.  What hurt him was the very long
product development time of NEXT.  He expected that back in mid-1980
(when he conceived of NEXT) there would be nothing like RISC.

The graphics (assuming no RISC machines) and the powerful
680x0 (assuming no RISC machines) would have been great for
running some snazzy graphics-filled simulations of say, molecular
reactions.  The DSP chip (assuming no RISC machines) would have
been great for signal processing experiments in EE.

Now, against RISC, NEXT with its 680x0 looks flimsy.  Most
of the RISC computers have great graphics and better number
crunching.  As for DSP, I see no reason why a DSP peripheral
can't be attached to one of these RISC computers.  The conclusion
is that RISC has effectively locked NEXT out of a major market.

The other major market is controlled by IBM/MAC.  So, back to
my original question, what kind of major market is NEXT supposed
to be aimed at?  What makes NEXT better than the dominant machines
in those markets?

Note that when the MAC appeared, it had significant software features that
were not present in other machines.  (features = easy-to-use user
interface, incredibly easy-to-use word processing, icon-driven menus,
etc.)  But NEXT doesn't have any significant software features that are
not present in other machines.

|<I know.  Steve's going to upgrade the NEXT to a 68040.
|<Even then, the Sparc chip set is faster.
|
|How do you know that?  Have you benchmarked a 68040?

source: Businessweek

Path: gmdzi!unido!mcsun!sunic!uupsi!rpi!zaphod.mps.ohio-state.edu!
samsung!uunet!zds-ux!gerry
From: ge...@zds-ux.UUCP (Gerry Gleason)
Newsgroups: comp.arch
Subject: Re: Next computer (Re: CISC Silent Spring)
Message-ID: <160@zds-ux.UUCP>
Date: 7 Feb 90 15:08:20 GMT
References: <8905@portia.Stanford.EDU>
Reply-To: ge...@zds-ux.UUCP (Gerry Gleason)
Organization: Zenith Data Systems
Lines: 75
Posted: Wed Feb  7 16:08:20 1990

In article <8...@portia.Stanford.EDU> under...@portia.Stanford.EDU 
(Dwight Joe) writes:
>In article <7...@pdn.paradyne.com> a...@oz.paradyne.com (Alan Lovejoy) 
writes:
>|<NEXT can only be saved if Steve Jobs replaces the 680X0
>|<with a RISC processor like the Sparc chip.  In all compute-
>|<intensive applications, the Sparcstation I beats the NEXT
>|<timewise.  Worse, the NEXT costs MORE than a Sparcstation I.

>|Well, he has lots of choices here.

>Agreed.  But Sparc now has a huge software base that could
>easily be modified to fit the I/O of the NEXT, which BADLY
>needs software, if NEXT were to incorporate the SPARC chipset.

Huh?  SPARC has a larger software base than the 680x0?  Where have you
been?  If you're talking about stuff for SunOS, sure, but NEXT was
never intended to leverage from that software base, or they could
be doing it now.  IMHO, software is NEXT's main problem.  Jobs
decided to invent yet another whizzy system interface, and at a time
when the whole industry wants system software standards.  When the
MAC first came out there wasn't anything comparable; now there is,
so it is unlikely that the NEXT environment will have sufficient
software written for it.

>So, Jobs was squeezed out of the higher education market.  The
>one thing that Jobs didn't count on back in mid 1980 was the
>rise of the RISC machines.  What hurt him was the very long
>product development time of NEXT.  He expected that back in mid-1980
>(when he conceived of NEXT) there would be nothing like RISC.

And this really long product development time is probably due to
his choice to reinvent the wheel in terms of software, but since
this is comp.arch, maybe we could discuss whether RISC was predictable
in mid-1980 (do you mean the mid-1980s?  1980 seems much too early for
the inception of the NEXT concept).  By the mid-1980s it was very
clear that RISC would be an important technology.  In addition to
being a big selling point, a RISC processor would have lessened the
impact of the machine's departure from software standards.

Another case in point, I had the opportunity to work on a project
using AT&T's CRISP processor, and was very surprised to find out that
a proposal to build this chip had been around since before they
built the first of the 32100 family.  By the time they put up the
resources to build the CRISP, the 32100 was well established and
Sun was nearly ready to market their SPARC strategy, so the project
fizzled.  Had the 32100 been built from the CRISP proposal, the
rest would be history: it would have been the first commercial
RISC-based processor, and it would have become the porting base
for UNIX in the early 80's.  Of course, no one can really predict what
the market would have done, but such an early RISC processor would
have put the pressure on Intel and Motorola much earlier; perhaps
the 80386 would not have been built (or not been all that successful,
since it would have been competing from day one with Intel RISC).

>|<I know.  Steve's going to upgrade the NEXT to a 68040.
>|<Even then, the Sparc chip set is faster.

>|How do you know that?  Have you benchmarked a 68040?

Has anyone seen a 68040?  I thought not.  You are comparing
a chip that won't ship until this summer with one that is in
a machine that has been in production for some time.  This
occurs over and over in the RISC/CISC debate, but that doesn't
seem to keep people from making these silly comparisons.

BTW, what are some current prices on RISC chips?  I have read that
80486's are ~$950 in thousand quantity, and someone posted that 68040's
are expected to be ~$750.  I suppose you should include the MMU and
FPU in the RISC prices, since they are on the chip for the comparable
CISC's, but since a large percentage of users don't need an FPU,
including this unit probably distorts the comparison.  From day one
I expected RISC processors to get to commodity prices very quickly
(i.e. prices based almost completely on the cost to make the chip).
Has that happened yet?

Gerry Gleason

Path: gmdzi!unido!mcsun!uunet!samsung!zaphod.mps.ohio-state.edu!mips!
winchester!mash
From: m...@mips.COM (John Mashey)
Newsgroups: comp.arch
Subject: Re: Next computer (Re: CISC Silent Spring)
Message-ID: <35655@mips.mips.COM>
Date: 7 Feb 90 23:30:41 GMT
References: <8905@portia.Stanford.EDU> <160@zds-ux.UUCP>
Sender: n...@mips.COM
Reply-To: m...@mips.COM (John Mashey)
Organization: MIPS Computer Systems, Inc.
Lines: 74
Posted: Thu Feb  8 00:30:41 1990

In article <1...@zds-ux.UUCP> ge...@zds-ux.UUCP (Gerry Gleason) writes:
>In article <8...@portia.Stanford.EDU> under...@portia.Stanford.EDU 
(Dwight Joe) writes:
>>In article <7...@pdn.paradyne.com> a...@oz.paradyne.com (Alan Lovejoy) 
writes:
>>|<NEXT can only be saved if Steve Jobs replaces the 680X0
>>|<with a RISC processor like the Sparc chip.  In all compute

>>So, Jobs was squeezed out of the higher education market.  The
>>one thing that Jobs didn't count on back in mid 1980 was the
>>rise of the RISC machines.  What hurt him was the very long
>>product development time of NEXT.  He expected that back in mid-1980
>>(when he conceived of NEXT) there would be nothing like RISC.

>this is comp.arch, maybe we could discuss whether RISC was predictable
>in mid-1980 (do you mean the mid-1980s?  1980 seems much too early for
>the inception of the NEXT concept).  By the mid-1980s it was very
>clear that RISC would be an important technology.  In addition to
>being a big selling point, a RISC processor would have lessened the
>impact of the machine's departure from software standards.

Why don't we just kill off all this silly speculation: all of this is
100% wrong.  NeXT was certainly aware of RISC very early.  At the time they
had to make their choice of processor [this was 1986/87], a 68030
was a very reasonable choice, as there was NO RISC available with:
	high performance
	low cost
	large volume supply
	with sure supplies
After all, at that point:
- Clipper performance wasn't that strong, and I'm not sure when the
Fairchild uncertainty was going on, but it might have been around then.
- MIPS was a 120-person company relying on foundries (not semi partners),
and NeXT would have been incredibly gutsy at that point to have used MIPS.
- SPARC wasn't announced yet, and the low level of integration of the
gate array designs surely would have exceeded NeXT cost goals.
- 88K was far away
- i860 was even further off.

Anyway, one might criticize them for not guessing how long the software
would take, and then trying to guess which RISCs would do well and
picking one of them, but I think it would have taken amazing precognition
in early 1987 to predict what's happened since... and amazing courage to
have bet the company on things not yet proven.  Had they started a year
later, maybe things would be different, but don't ding them by claiming
something was obvious, when it wasn't at all.

>
>Another case in point, I had the opportunity to work on a project
>using AT&T's CRISP processor, and was very surprised to find out that
>a proposal to build this chip had been around since before they
Yes, it is sad that CRISP didn't get out, as it at least had some elegant
and clever ideas.

>BTW, what are some current prices on RISC chips?  I have read that
>80486's are ~950$ in thousand quantity, and someone posted 68040's
>are expected to be ~750$.  I suppose you should include the MMU and
>FPU in the RISC prices since they are on the chip for the comparable
>CISC's, but since a large percentage of users don't need and FPU
>including this unit probably distorts the comparison.  From day one
>I expected RISC processors to get to commodity prices very quickly
>(i.e. prices based almost completely on the cost to make to chip).
>Has that happened yet?

For sure, in 10,000s, I've heard of 12.5MHz R2000/R2010 chipsets
at $100 for the pair, which means you can build a pretty reasonable cache &
memory interface [i.e., the whole core] for $200-$250.  This was a while ago,
so the higher clock rates are probably creeping down from the $10/mip (CPU)
that was being quoted a year back.  (Note that such a configuration should
usually be faster on integer, and even faster on FP, than a 25MHz 486 with
an external cache.)
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  m...@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

Path: gmdzi!unido!mcsun!uunet!crdgw1!crdos1!davidsen
From: david...@crdos1.crd.ge.COM (Wm E Davidsen Jr)
Newsgroups: comp.arch
Subject: Re: the Multics from the black lagoon :-)
Message-ID: <2106@crdos1.crd.ge.COM>
Date: 8 Feb 90 19:48:16 GMT
References: <8859@portia.Stanford.EDU> <20571@watdragon.waterloo.edu> 
<49956@sgi.sgi.com> <4791@helios.ee.lbl.gov> <2093@crdos1.crd.ge.COM> 
<1990Feb7.221800.804@utzoo.uucp>
Reply-To: david...@crdos1.crd.ge.com (bill davidsen)
Organization: GE Corp R&D Center, Schenectady NY
Lines: 56
Posted: Thu Feb  8 20:48:16 1990

In article <1990Feb7.221800....@utzoo.uucp> he...@utzoo.uucp (Henry Spencer) 
writes:

| Gee, how could we ever have lived without that for two decades?  :-)
| Maybe because we don't need it and it doesn't buy us very much?  

  There are a lot of things we don't *need* which really improve life to
have. Having used direct mapping of files on Multics and VMS, I can say
that in those cases the logic was simpler, the source smaller and more
readable, the executable smaller, and the performance better.

  File mapping, for background, means treating a file like an array. It
eliminates the explicit calls to seek and the restrictions about mixing read
and write without an intervening seek. It allows elements from the file
to be used in expressions without lots of explicit file I/O, and replaces
the runtime and kernel buffering scheme with the kernel page mechanism,
which is often much faster.

  In VMS the performance gain was about 30%, and the program was smaller
as well. We lived without many things, and I thought you were a defender
of a lot of them, such as C standards, symbolic debuggers, etc. You're
right that we can live without them, but totally wrong about "it doesn't
buy us very much."

  Perhaps someone who has used the BSD mapping (is it mmap()?) could
give us some actual timings on unix, since my experience is with other
systems. Is mapping in V.4???

                                Examples
I/O code:

  /* update the current record */
  temp.units = 4;
  fseek(wkfile, (long)currec * sizeof(temp), 0);
  fwrite((char *)&temp, sizeof(temp), 1, wkfile);
  currec++; /* write moved the record pointer */
  /* read the next, fseek to allow read after write */
  fseek(wkfile, 0L, 1); /* NULL seek */
  fread((char *)&temp, sizeof(temp), 1, wkfile);
  /* use the value */
  m = 20 * temp.headlimit;

mapped:
  /* something *like* this does the mapping, like open */
  work = (struct demo *)mmap(filename, "r+");

  /* here's the actual code */
  work[currec++].units = 4;
  m = 20 * work[currec].headlimit;
________________________________________________________________

  Okay, I typed it in, please don't tell me there are typos or I left
out something, unless it is major. I just think this makes code really
easier to write and maintain.
-- 
bill davidsen	(david...@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

Path: gmdzi!unido!mcsun!uunet!samsung!think!barmar
From: bar...@think.com (Barry Margolin)
Newsgroups: comp.arch,comp.misc
Subject: Re: the Multics from the black lagoon :-)
Message-ID: <33823@news.Think.COM>
Date: 8 Feb 90 22:48:50 GMT
References: <8859@portia.Stanford.EDU> <20571@watdragon.waterloo.edu> 
<49956@sgi.sgi.com> <4791@helios.ee.lbl.gov> <2093@crdos1.crd.ge.COM> 
<1990Feb7.221800.804@utzoo.uucp>
Sender: n...@Think.COM
Followup-To: comp.misc
Organization: Thinking Machines Corporation, Cambridge MA, USA
Lines: 85
Posted: Thu Feb  8 23:48:50 1990

O boy!  In case we weren't having enough fun NeXT-bashing, now it's time to
bash poor, defenseless Multics....  I've directed followups to comp.misc,
because this isn't really about architecture.

In article <1990Feb7.221800....@utzoo.uucp> he...@utzoo.uucp (Henry Spencer) 
writes:
>In article <2...@crdos1.crd.ge.COM> david...@crdos1.crd.ge.com (bill davidsen) 
writes:
>>  The only reason Multics is not where UNIX is today is that it was
>>developed by one company which didn't know how to sell computers and
>>then rights went to another...
>I can think of several other reasons, actually, starting with Multics
>being much larger, being much fussier about memory management and
>such, and performing very poorly by comparison.  Unix did not suddenly
>spring into its position of prominence when hardware reached current
>levels -- it steadily grew into it through ability to run well on small
>machines (most Unix machines were small until very recently) and ability
>to port to almost anything.  Multics had no hope of ever copying that.

Everything you say about Multics is true.  It was heavily dependent on the
architecture of the Honeywell (formerly GE) mainframes.  For most of its
lifetime about the only other machines with the necessary features were the
IBM 370 and its followons and Burroughs systems.  The first microprocessor
that could reasonably support a Multics-like OS was the 80386 (a friend of
mine claims to have implemented a small Multics clone for the 386).

But I don't think this was the reason why Multics didn't sell well.  For
most of the 70's, the points you make were not very important
considerations in the mainframe computer marketplace.  GCOS wasn't
portable, and ran on almost the same processors as Multics, yet Honeywell
was able to sell orders of magnitude more GCOS systems than Multics
systems.  I've heard of cases where a Honeywell customer wanted to buy a
Multics system and the salesman tried to talk him into GCOS!

Your points would explain why Multics didn't sell well into the small
business market, but no one ever claimed that was Multics's target market.
Multics was designed to be a computer utility, taking advantage of
economies of scale (is symmetrical multiprocessing one of those things we
don't really need?).  Compare it to other utilities, such as phone systems:
there are different considerations when designing a PBX than a central
office switch.  5ESS wouldn't be expected to be portable to the hardware
for an office PBX.  The question isn't why PDP-11 Unix sold better than
Multics, but why VM/CMS did.

Regarding whether Multics's features are "needed", over the years I've been
amazed at the number of requests for facilities I've seen in the Unix
newsgroups where Multics has had that feature for fifteen years.  Things
like prioritized printing and batch queues, better process priority
control, useful library routines (e.g. for wildcard matching), decent
backup facilities, interprocess communication, and user-mode device
drivers.  One might argue that the hardware-supported security mechanisms
aren't really required, but the hardware support is precisely what is
needed to protect against viruses efficiently (see comp.virus for
discussions about hardware support to limit the capabilities of programs to
modify other programs); the original Multics implementation did most of it
in software, and there were known security holes that were closed when
hardware rings were implemented.

And Multics's performance problems weren't directly related to its
features.  The Honeywell processors on which Multics ran lagged behind the
industry in performance.  This may have been partly because the additional
features Multics required had to be grafted on after the processor was
designed for GCOS.  Around the time Multics was cancelled they finally
designed a processor for GCOS on which Multics could have run directly
(actually, it had a minor problem with null pointers -- it would fault upon
loading (not indirecting through) them), but Multics was never ported to
it.

Actually, the hardest thing about porting Multics to an entirely different
architecture would have been fixing all the "fixed bin (17)" and "fixed bin
(35)" declarations in nearly every single PL/I source file.  Demand-paged
virtual memory is about all it really requires from the hardware, and no
one would think of doing a serious timesharing system without this these
days.  The dynamic linker could be reimplemented without the special
hardware support that the Honeywell processors provided.  And hardware
segmentation isn't really necessary; a flat address space can be treated as
segmented merely by calling the high-order N address bits the segment
number, and using software to manipulate all the pages of a software
segment as a unit; hardware segmentation is mostly a way to implement
larger address spaces with smaller MMU tables, since many of the attributes
of a page are common to all the pages in a segment (such as access modes
and location on disk).
--
Barry Margolin, Thinking Machines Corp.

bar...@think.com
{uunet,harvard}!think!barmar

Path: gmdzi!unido!mcsun!uunet!seismo!ukma!tut.cis.ohio-state.edu!rutgers!
jarvis.csri.toronto.edu!utgpu!watserv1!watmath!att!dptg!ulysses!andante!alice!dmr
From: d...@alice.UUCP
Newsgroups: comp.arch
Subject: Re: the Multics from the black lagoon :-)
Message-ID: <10468@alice.UUCP>
Date: 12 Feb 90 08:17:36 GMT
Organization: AT&T Bell Laboratories, Murray Hill NJ
Lines: 119
Posted: Mon Feb 12 09:17:36 1990

These Multics interchanges are enchanting; such a delightful
combination of thinly-veiled envy, nostalgia, and the occasional
gloriously stupid claim.

About three years ago there was another such discussion, and I wrote
a short essay on what people were saying then.  I can't remember
if I ever posted it; probably not, since it sort of trails off
without ending in much of a conclusion.  Perhaps it's time to trot
it out now and get some (more?) use out of it.

	Dennis Ritchie
	d...@research.att.com
	att!research!dmr

Subject: who needs Multics?  [written early 1987]

I've been following the Multics discussions with considerable interest.
I was involved in Multics at an early stage and retain strong, though
by now slightly fuzzy impressions of it.  Obviously, they have been
influenced by subsequent heavy involvement in Unix.

One might characterize Multics as a system that tried to do everything,
that had a grand conception of a new order for the world, and then
had to contract considerably as various realities intruded.  Ultimately,
it seems, it has had to contract into nothing, although during its
lifetime, it did achieve a great many of its design goals.  And it
has been quite influential.

Unix, by contrast, started with a modest but exceedingly well-conceived
design that has, so far at least, been able to accommodate enormous
expansion in various directions, only some of which aim towards
the most characteristic features of Multics.

We have always been assiduous in acknowledging a strong debt to
Multics and its immediate predecessor (CTSS); still, many components
of this connection are by now so thoroughly assimilated into the
culture that it is hard even to see them.  Hugh LaMaster mentions
TSS, IBM's answer to Multics.  TSS did emulate some aspects of Multics;
in particular it approximated the single-level store discussed below,
but in other, crucial aspects it utterly missed the point.  For example, TSS
poisoned the user (and program) interface with JCLish, IBMish DD cards
describing all sorts of wretched, irrelevant facts about files, instead
of doing the job properly.

The most characteristic feature of Multics, the design aspect that
strikes one most strongly, is indeed the single-level store: the
notion that files (in a directory hierarchy) are identified with segments
(parts of the address space).  Other systems have done this since
and perhaps even before, but Multics is the system that tried most
publicly and boldly.  I think the effort was admirable, was worth
trying, and may still hold life, but seems now to be flawed.

Even though the underlying mechanisms were unified, there are really
two separate aspects to the single-level store: program and data.

The program aspect is the dynamic linking facilities that have
been discussed (and envied) in this forum.  Briefly, it means that
use of a name in call-position `func()' causes the system to
search, at runtime, in specifiable places, for a file containing the
appropriate entry point, to attach the file to the address space,
and to complete the linkage to the function.  This is slow the first
time, rapid thereafter.

All in all, this was made to work well, and the effect is more
elegant, transparent, and general than the `shared libraries'
that one finds in many systems.  It was however never fast enough in
practice to be the universal mechanism for linking; very early
in the game, the `binder' had to be introduced that bundled
together libraries and commands in advance, to avoid the overhead of
doing it every time the command was executed.

At least in the early days, another, and important, Multics compromise
had especially evil consequences.  The original design called for each
command to be executed in a new process, as is done in Unix.  This was
much too slow, largely owing to dynamic linking.  (I remember the time
to start a new process going from perhaps 20 CPU minutes, to a few
seconds at the time we left the project.)  Therefore, in Multics all
one's commands execute in the same process, and use the same address
space.  This was fine so long as you were not changing any of the
programs you were running.  Say you recompiled the sine routine and ran
your test program again.  At best, the program would continue to use
the old sine because that version was already linked into your address
space (this was merely confusing); at worst, the test program would
jump into the wrong place in the new sine routine (the segment
contents were replaced but the old offset remained in the linkage
table).  This effect was known as "the KST problem" (KST= Known Segment
Table) and the result was called "being KSTed."  I am certain that it
was papered over in later versions of the system, and modern Multics
users may not even be aware of the problem, but it was a real pain for
us.  (Our general fix was to type hcs_$newproc and go get a cup of
coffee.)

In spite of the problems (they were eventually alleviated) the
dynamic linking of programs probably can be counted a success.
I don't think the same is true of data.  It is generally a pleasant,
and almost universally applicable abstraction to imagine
other programs appearing, and thereafter statically living in your
own program's address space.  The same abstraction simply fails
in the case of data.

Other people have pointed out the problems already; I'll reiterate:
1) Much data comes from devices that cannot convincingly be mapped
   (terminals, tapes, raw disks, pipes)
2) In the state of technology, even plain files cannot be mapped
   properly, because they are too big
(I might need some correction on the second point, but I don't
see how the Multics machine could deal transparently with
segments larger than 256KW.)

What actually happened was that, for the most part, people
avoided the "single-level store" for data and used sequential
IO via read and write calls.  The Multics IO system was quite
snazzy, and one of the first things we did with it was to write
the "fsim"-- the file system interface module, that initiated your
segment and put/got bytes in it, and did all the
grotty but necessary things like set_bit_count.  In other words,
as a "feature," occasional use of data file mapping was convenient,
but as an organizing principle, as a way of life, it was a bust;
it was something that had to be programmed around.

Path: gmdzi!unido!mcsun!uunet!tut.cis.ohio-state.edu!cs.utexas.edu!mailrus!
jarvis.csri.toronto.edu!utgpu!utzoo!henry
From: he...@utzoo.uucp (Henry Spencer)
Newsgroups: comp.arch
Subject: Re: the Multics from the black lagoon :-)
Message-ID: <1990Feb12.204658.18336@utzoo.uucp>
Date: 12 Feb 90 20:46:58 GMT
References: <8859@portia.Stanford.EDU> <20571@watdragon.waterloo.edu> 
<49956@sgi.sgi.com> <4791@helios.ee.lbl.gov> <2093@crdos1.crd.ge.COM> 
<1990Feb7.221800.804@utzoo.uucp> <2106@crdos1.crd.ge.COM>
Organization: U of Toronto Zoology
Lines: 23
Posted: Mon Feb 12 21:46:58 1990

In article <2...@crdos1.crd.ge.COM> david...@crdos1.crd.ge.com (bill davidsen) 
writes:
>  File mapping... replaces
>the runtime and kernel buffering scheme with the kernel page mechanism,
>which is often much faster.
>  In VMS the performance gain was about 30%...

I'm afraid my reaction to this is "what would the performance gain have
been if the same effort had been put into speeding up normal I/O"?
Modulo one or two requirements like alignment, if there is *any* real
performance difference with only one process involved, it means you
are comparing apples to oranges -- either the kernel I/O code has not
been optimized to exploit the MMU, or you are comparing an I/O version
which works (say) 512 bytes at a time to a mapped version that grabs
the whole file at once.  When only one process is involved, read() and
write() do everything that mmap() does, and there is no reason why they
should be any slower.

I agree that the situation is a bit messier when multiple processes are
involved... but interprocess communication is a different issue and a
much messier one anyway.
-- 
SVR4:  every feature you ever |     Henry Spencer at U of Toronto Zoology
wanted, and plenty you didn't.| uunet!attcan!utzoo!henry he...@zoo.toronto.edu

Path: gmdzi!unido!mcsun!ukc!dcl-cs!aber-cs!pcg
From: p...@aber-cs.UUCP (Piercarlo Grandi)
Newsgroups: comp.arch
Subject: Re: the Multics from the black lagoon :-)
Summary: Mapping is simpler than reading, but argument is complex.
Message-ID: <1635@aber-cs.UUCP>
Date: 14 Feb 90 17:53:58 GMT
Reply-To: p...@cs.aber.ac.uk (Piercarlo Grandi)
Organization: Dept of CS, UCW Aberystwyth
	(Disclaimer: my statements are purely personal)
Lines: 84
Posted: Wed Feb 14 18:53:58 1990

In article <131...@sun.Eng.Sun.COM> l...@sun.UUCP (Larry McVoy) writes:
    In article <1990Feb12.204658.18...@utzoo.uucp> he...@utzoo.uucp 
(Henry Spencer) writes:
    >When only one process is involved, read() and
    >write() do everything that mmap() does, and there is no reason why they
    >should be any slower.
    
    Let's see, 
    
    read:
    	    get the block from disk
    	    copy it out to the user buffer
    	    return
    
    mmap:
    	    get the block from disk
    	    map it into the user's address space
    	    return
    
    It seems to me that there is an extra copy in there.  In kernels that I've
    profiled, this copy shows up very high on the list (in the top 2 or 3).

Actually this argument has twists and turns:

1) PDP-11/70 vs. VAX-11/780 running Unix; nominally the two machines were
about as fast, but the PDP was running Unix about 20-30% slower.  Why?
Because of 16-bit vs. 32-bit copies.  As the saying goes, 'Unix is
core-to-core copy bandwidth limited' (the buffer cache).  Things can get
ridiculous if you also consider copying to/from stdio buffers.  Traditional
Unix routinely copies a byte four times to move it between two processes
connected by a pipe; even this horrid overhead is usually dwarfed by the
overhead of flattening a real data structure to pass it through a stream and
unflattening it at the other end.

2) If the read is on a suitable boundary, it can be done with copy on
write under suitable conditions -- this can easily be arranged under Unix,
because most IO is via stdio, and stdio can be trivially modified to
allocate its buffers on suitably aligned boundaries.  This could obviate
two of the copies mentioned above.

3) If you share a datastructure with mmap, it will in general be mapped at
different addresses, so it cannot contain absolute pointers. This is a
problem, with several solutions I have described in another article.  Note
that all these solutions are less expensive than flattening and deflattening
again.

4) Copy on write is bad, because it complicates life for the virtual
memory module quite a bit.  Shared memory complicates it just as much, so if
you have shared memory (boo!) adding copy on write is not that big a deal.

5) read/write give you a fundamentally streamy view of the world, mmap
gives you a fundamentally arrayish one.  Unix has too much of a stream
orientation, because it is a paper-tape oriented system.  This causes
problems everywhere, especially with databases and similar things.

6) a lot of machines perform particularly poorly on core-to-core copies
because of problems with CPU or bus cache writes. In particular, if writes
are not cache line sized and *aligned*, things can get pretty slow. Better
to avoid the issue entirely.

7) The idea of not sharing or copying-on-write memory segments, but just
passing around unique pointers to their page tables (MUSS) is so much better
and simpler to implement... A file server process, receiving a request for a
file, would ask the nucleus for a new (virtual!) page table, it would stuff
it with pointers to the disc blocks for the file, and send it back to the
requesting process. The requesting process could ask the nucleus to map the
segment by asking it to use the page table describing it, and then pages
from the segment would be faulted in as the process referenced them. If the
process does not need to refer to such pages, e.g. because it is just a name
server, the page table capability need never be used for mapping, and could
be passed on to another process without any page having ever been actually
referenced. You need no shared memory, no copy on write, no very large
address spaces.

As you see, the argument is not simple.  My own hunch is that mmap is
definitely superior for things that require an arrayish view of the world, as
it encourages taking notice of important things like the existence of pages,
the problems with locality, etc.  It is by no means the only view of the
world, though.  Terminals, tapes, etc. you cannot mmap() (even if Mach tries
hard with "memory objects", fake memory segments).  You have to live
with multiple paradigms or adopt the lowest common denominator of them.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac...@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: p...@cs.aber.ac.uk

Path: gmdzi!unido!mcsun!uunet!jarthur!usc!zaphod.mps.ohio-state.edu!think!
snorkelwacker!apple!rutgers!bpa!cbmvax!snark!eric
From: e...@snark.uu.net (Eric S. Raymond)
Newsgroups: comp.arch
Subject: Re: the Multics from the black lagoon :-)
Message-ID: <1VTXRt#122Ovx=eric@snark.uu.net>
Date: 16 Feb 90 18:14:00 GMT
References: <1635@aber-cs.UUCP>
Lines: 17
Posted: Fri Feb 16 19:14:00 1990

In <1...@aber-cs.UUCP> Piercarlo Grandi wrote:
> 5) read/write give you a fundamentally streamy view of the world, mmap
> gives you a fundamentally arrayish one. Unix has too much of a stream
> orientation, because it is a paper-tape oriented system. This causes
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> problems everywhere, especially with databases and similar things.

Do I really need to observe that no UNIX has ever had *support* for paper
tape since at least V7? And that (assuming you weren't being literal) file
random-access is better integrated into UNIX than any other production OS?

Mr. Grandi, you never cease to amaze me. Your capacity for spicing apparently
incisive analysis with occasional flashes of utter idiocy...I really can't
figure it. I wish I knew whether you're a genius with an occasional short in
the brain or a complete moron with a persuasive line of patter.
-- 
      Eric S. Raymond = e...@snark.uu.net    (mad mastermind of TMN-Netnews)

Path: gmdzi!unido!mcsun!sunic!uupsi!rpi!zaphod.mps.ohio-state.edu!mips!apple!
sun-barr!newstop!sun!snafu!lm
From: l...@snafu.Sun.COM (Larry McVoy)
Newsgroups: comp.arch
Subject: Re: the Multics from the black lagoon :-)
Message-ID: <132011@sun.Eng.Sun.COM>
Date: 18 Feb 90 23:43:45 GMT
References: <1635@aber-cs.UUCP> <1VTXRt#122Ovx=eric@snark.uu.net>
Sender: n...@sun.Eng.Sun.COM
Reply-To: l...@sun.UUCP (Larry McVoy)
Organization: Sun Microsystems, Mountain View
Lines: 14
Posted: Mon Feb 19 00:43:45 1990

In article <1VTXRt#122Ovx=e...@snark.uu.net> e...@snark.uu.net (Eric S. Raymond) 
writes:
>Mr. Grandi, you never cease to amaze me. Your capacity for spicing apparently
>incisive analysis with occasional flashes of utter idiocy...I really can't
>figure it. I wish I knew whether you're a genius with an occasional short in
>the brain or a complete moron with a persuasive line of patter.
>-- 
>      Eric S. Raymond = e...@snark.uu.net    (mad mastermind of TMN-Netnews)

This from the guy that was trying to tell Dennis Ritchie about C?  Looks like
the pot calling the kettle black to me.
---
What I say is my opinion.  I am not paid to speak for Sun, I'm paid to hack.
    Besides, I frequently read news when I'm drjhgunghc, err, um, drunk.
Larry McVoy, Sun Microsystems     (415) 336-7627       ...!sun!lm or l...@sun.com

Path: gmdzi!unido!mcsun!uunet!cbmvax!snark!eric
From: e...@snark.uu.net (Eric S. Raymond)
Newsgroups: comp.arch
Subject: slangin' (was: Re: the Multics from the black lagoon :-))
Message-ID: <1VWGWv#6sjF4d=eric@snark.uu.net>
Date: 19 Feb 90 17:17:48 GMT
References: <1635@aber-cs.UUCP> <1VTXRt#122Ovx=eric@snark.uu.net> 
<132011@sun.Eng.Sun.COM>
Sender: e...@snark.uu.net (Eric S. Raymond)
Lines: 16
Posted: Mon Feb 19 18:17:48 1990

In <132...@sun.Eng.Sun.COM> Larry McVoy wrote:
> This from the guy that was trying to tell Dennis Ritchie about C?  Looks like
> the pot calling the kettle black to me.

Get yer facts straight, bucko. The person I flamed in that wretched incident
was Robert Firth -- I didn't know dmr had gotten involved, and I posted an
apology explaining the cause of my error immediately when I found out I'd been
wrong.

Several of the people who'd been quickest to jump on my case subsequently
commended my handling of the followup. I ain't perfect, but I am honest and
prompt about acknowledging my mistakes.

You now have a chance to demonstrate the same virtue. ;-)
-- 
      Eric S. Raymond = e...@snark.uu.net    (mad mastermind of TMN-Netnews)

			        About USENET

USENET (Users’ Network) was a bulletin board shared among many computer
systems around the world. USENET was a logical network, sitting on top
of several physical networks, among them UUCP, BLICN, BERKNET, X.25, and
the ARPANET. Sites on USENET included many universities, private companies
and research organizations. See USENET Archives.

The materials and information included in this website may only be used
for purposes such as criticism, review, private study, scholarship, or
research.

Electronic mail:			       WorldWideWeb:
   tech-insider@outlook.com			  http://tech-insider.org/