Tech Insider					   Technology and Trends


			   USENET Archives


Electronic mail:			      WorldWideWeb:
   tech-insider@outlook.com		         http://tech-insider.org/

Xref: gmd.de comp.unix.large:616 comp.arch.storage:2160 comp.sys.dec:
12486 comp.unix.osf.osf1:1777 comp.unix.osf.misc:855
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec,
comp.unix.osf.osf1,comp.unix.osf.misc
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!vixen.cso.uiuc.edu!
newsrelay.iastate.edu!news.iastate.edu!john
From: jo...@iastate.edu (John Hascall)
Subject: Big I/O or Kicking the Mainframe out the Door
Message-ID: <CIAG8s.62C@news.iastate.edu>
Followup-To: comp.unix.large,comp.arch.storage,comp.sys.dec
Sender: ne...@news.iastate.edu (USENET News System)
Organization: Iowa State University, Ames, IA
Date: Sun, 19 Dec 1993 15:26:52 GMT
Lines: 37

We would very dearly like to kick our `eat us out of house and home'
mainframe out the door.  After our library-automation system's port
to Unix is finished, our mainframe's only real attraction (neglecting
the `I don't want to learn anything new' hangers-on) is its I/O
capabilities.

Hence, I come to you with the following questions.  Does anyone have
any experience with or pointers to these items for workstations
(ideally for DEC Alpha boxes running OSF/1):

    * Disk systems faster than SCSI (IPI?  Raid?  ???)

    * 3480 cartridge tape devices whose speed is
      similar to the mainframe versions AND which
      are robust enough for hundreds of tape operations
      a day (insertions/deinsertions)

    * 9-track tape drives with similar speed
      and robustness.

    * also information on silo technology (either 3480
      or other media).   We currently have some 2000
      3480 cartridges (probably 90+% of the requests
      could be satisfied with a 500-unit silo though,
      leaving the rest for the operator).  We also
      have about 700 4mm DAT tapes we currently use
      for backing up our Unix system.

Thanks in advance for any information (vendor replies
welcome too, BTW),
John

-- 
John Hascall                   ``An ill-chosen word is the fool's messenger.''
Systems Software Engineer
Project Vincent
Iowa State University Computation Center  +  Ames, IA  50011  +  515/294-9551

Xref: gmd.de comp.unix.large:617 comp.arch.storage:2161 comp.sys.dec:12487
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!darwin.sura.net!howland.reston.ans.net!
xlink.net!scsing.switch.ch!swidir.switch.ch!dxcern!dxcern.cern.ch!tbel
From: tb...@oahu.cern.ch (Tim Bell)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
In-Reply-To: john@iastate.edu's message of Sun, 19 Dec 1993 15:26:52 GMT
Message-ID: <TBEL.93Dec19165344@oahu.cern.ch>
Followup-To: comp.unix.large,comp.arch.storage,comp.sys.dec
Sender: ne...@dxcern.cern.ch (USENET News System)
Organization: IBM
References: <CIAG8s.62C@news.iastate.edu>
Date: Sun, 19 Dec 1993 15:53:44 GMT
Lines: 20

At CERN, we're using a set of RS/6000s connected to an IBM 3495 tape
robot to share tapes between the mainframe and workstations around the
site.

I suggest that you talk to your IBM rep. and ask him about the
parallel channel adapter and tape access from an RS/6000.

They work pretty well here but then again I'm biased as I work for
IBM...

Tim.





--
Tim Bell
IBM High Energy Physics European Centre
E-mail: tb...@oahu.cern.ch Office: 513-R002 Phone: x7081

Xref: gmd.de comp.unix.large:619 comp.arch.storage:2163 comp.sys.dec:12490
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!vixen.cso.uiuc.edu!
newsrelay.iastate.edu!news.iastate.edu!john
From: jo...@iastate.edu (John Hascall)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <CIB7FC.F8B@news.iastate.edu>
Followup-To: poster
Summary: belongs in alt.computer.war-stories or some such I suppose
Sender: ne...@news.iastate.edu (USENET News System)
Organization: Iowa State University, Ames, IA
References: <CIAG8s.62C@news.iastate.edu> <TBEL.93Dec19165344@oahu.cern.ch>
Date: Mon, 20 Dec 1993 01:14:00 GMT
Lines: 24

tb...@oahu.cern.ch (Tim Bell) writes:
}At CERN, we're using a set of RS/6000s connected to an IBM 3495 tape
}robot to share tapes between the mainframe and workstations around the
}site.
}
}I suggest that you talk to your IBM rep. and ask him about the
}parallel channel adapter and tape access from an RS/6000.
}
}They work pretty well here but then again I'm biased as I work for
}IBM...

  I must say it would be mighty ironic if that was how IBM
  finally got back in our machine room after all these years...

  I wasn't around then, but I am told that we installed the
  second-ever plug-compatible from Itel (and IBM's reaction
  ensured their continued absence).

John
-- 
John Hascall                   ``An ill-chosen word is the fool's messenger.''
Systems Software Engineer
Project Vincent
Iowa State University Computation Center  +  Ames, IA  50011  +  515/294-9551

Xref: gmd.de comp.unix.large:620 comp.arch.storage:2164 comp.sys.dec:12492
Path: gmd.de!xlink.net!howland.reston.ans.net!cs.utexas.edu!uunet!olivea!
pagesat.net!news.cerf.net!lsi.lsil.com!gjb
From: g...@lsil.com (Gary Bridgewater)
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Date: 20 Dec 1993 10:07:25 GMT
Organization: LSI Logic
Lines: 54
Message-ID: <2f3tgt$5vc@lsi.lsil.com>
References: <CIAG8s.62C@news.iastate.edu>
NNTP-Posting-Host: 147.145.48.190

In article <CIAG8...@news.iastate.edu> jo...@iastate.edu (John Hascall) writes:
>We would very dearly like to kick our `eat us out of house and home'
>mainframe out the door.  After our library-automation system's port
>to Unix is finished, our mainframe's only real attraction (neglecting
>the `I don't want to learn anything new' hangers-on) is its I/O
>capabilities.

Wouldn't we all - but these folks dig in.  It's harder than you think to
get these things out the door.  Does the word 'outsource' do anything for you?

>    * Disk systems faster than SCSI (IPI?  Raid?  ???)

Wait a few months.  Fiber channel looks pretty good.

>    * 3480 cartridge tape devices whose speed is
>      similar to the mainframe versions AND which
>      are robust enough for hundreds of tape operations
>      a day (insertions/deinsertions)

This seems to be a problem.  8mm and 4mm don't quite do it.  Engineered to
be cheap.   As mentioned elsewhere, the IBM folks may provide their own rope.
It will be interesting to see if the recent mini-reorg pushes this along or
eliminates it - you can take the man out of Big Iron but can you take Big
Iron out of the man?  I doubt it.

>    * 9-track tape drives with similar speed
>      and robustness.

Have you had problems with, for instance, HP SCSI drives?  And what
about 9-track tapes?

>    * also information on silo technology (either 3480
>      or other media).   We currently have some 2000
>      3480 cartridges (probably 90+% of the requests
>      could be satisfied with a 500-unit silo though,
>      leaving the rest for the operator).  We also
>      have about 700 4mm DAT tapes we currently use
>      for backing up our Unix system.

100 unit 8mm silos are here now - about 1 bay with 4 drives.  That's .5-1TB
depending on compression - with true 10GB drives coming.  I wonder if
fiber channel connectivity is on the drawing board for these?
I assume big DAT silos can't be far away (I don't follow them so they may be
here now).
Also, Storage Tek has Sun hosted connections for their big silos.

Contrary to the opinion expressed in another article, large Unix sites are
less of a rarity than they were.  Disk is cheap.  Also, automated backup
is the way to go and silos make that happen.  Even small sites are better off
using a silo of some sort since backups are usually the door prize for the
latest new-hire.  A $50k box pays for itself quite rapidly, IMHO.
-- 
Gary Bridgewater (g...@lsil.com)  LSI Logic, Milpitas, CA - speaking only for me
"As God is my witness - I am that fool!"  Gomez Addams

Xref: gmd.de comp.unix.large:643 comp.arch.storage:2236 comp.sys.dec:12590
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!pipex!uunet!
hela.iti.org!lokkur!scs
From: s...@lokkur.dexter.mi.us (Steve Simmons)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <1993Dec28.145451.9872@lokkur.dexter.mi.us>
Organization: Inland Sea
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> 
<2fkrk2$22u@nameserv.sys.hokudai.ac.jp> <1993Dec27.111947.3033@ivax> 
<2fnpol$1a3@lsi.lsil.com>
Date: Tue, 28 Dec 93 14:54:51 GMT
Lines: 24

g...@lsil.com (Gary Bridgewater) writes:

> [[ some fairly intemperate things about big i/o and mainframes ]]

I have yet to see a UNIX box that can really do big i/o with the
mainframes.  Vendors talk a good fight on how many of the SCSI cards
they can put in their boxes, but thus far if you put the cards in and
do the tests, they fall short of the mainframes.  A classic test is to
crank in a single large dataset, do a simple operation on it, and crank
it back out.  Not an unusual operation.  Most UNIX systems immediately
become spindle-bound or single-bus-bound.  Those that permit striping
or something similar to spread that dataset over 10 drives and 10
busses usually bind up their internal bus.
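The "classic test" described above is easy to reproduce.  A minimal sketch in
Python follows; the chunk size and the trivial per-chunk operation are
placeholders of my choosing, not anything from the original benchmarks:

```python
# Stream a large dataset in, apply a trivial operation, and stream it
# back out, reporting throughput -- the shape of the test described
# above, not a tuned benchmark.
import time

CHUNK = 1 << 20  # read/write in 1 MiB chunks


def crank(src, dst, op=bytes.upper):
    """Copy src to dst through op(); return bytes/second achieved."""
    t0 = time.time()
    nbytes = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(CHUNK):
            fout.write(op(chunk))
            nbytes += len(chunk)
    elapsed = time.time() - t0
    return nbytes / elapsed if elapsed else float("inf")
```

Run against a single spindle and then against a striped volume, the rate this
reports is exactly where the spindle-bound versus bus-bound behaviour shows up.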

When somebody shows me a real world application of the type above running
on a UNIX mini doing mainframe-level IO, I'll believe it.  Until then,
it's all talk.

On the other hand, if your mainframe is primarily a timesharing machine
with users running interactive stuff then it's ripe for replacement with
a collection of UNIX boxes.
-- 
"God so loved Dexter that he put the University of Michigan somewhere
else."

Xref: gmd.de comp.unix.large:649 comp.arch.storage:2243 comp.sys.dec:12601
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!vixen.cso.uiuc.edu!
newsrelay.iastate.edu!news.iastate.edu!metropolis.gis.iastate.edu!willmore
From: will...@iastate.edu (David Willmore)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <willmore.757205446@metropolis.gis.iastate.edu>
Sender: ne...@news.iastate.edu (USENET News System)
Organization: Iowa State University, Ames IA
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> 
<2fkrk2$22u@nameserv.sys.hokudai.ac.jp> <1993Dec27.111947.3033@ivax> 
<2fnpol$1a3@lsi.lsil.com> <1993Dec28.145451.9872@lokkur.dexter.mi.us>
Date: Wed, 29 Dec 1993 22:50:46 GMT
Lines: 28

s...@lokkur.dexter.mi.us (Steve Simmons) writes:
>g...@lsil.com (Gary Bridgewater) writes:
>> [[ some fairly intemperate things about big i/o and mainframes ]]

>I have yet to see a UNIX box that can really do big i/o with the
>mainframes.  Vendors talk a good fight on how many of the SCSI cards
>they can put in their boxes, but thus far if you put the cards in and
>do the tests, they fall short of the mainframes.  A classic test is to
>crank in a single large dataset, do a simple operation on it, and crank
>it back out.  Not an unusual operation.  Most UNIX systems immediately
>become spindle-bound or single-bus-bound.  Those that permit striping
>or something similar to spread that dataset over 10 drives and 10
>busses usually bind up their internal bus.

>When somebody shows me a real world application of the type above running
>on a UNIX mini doing mainframe-level IO, I'll believe it.  Until then,
>it's all talk.

You mean like the world record for the 1 million record sort benchmark?
Isn't the current record held by an Alpha?  Hmmmm?

Cheers,
David
-- 
___________________________________________________________________________
will...@iastate.edu | "Death before dishonor" | "Better dead than greek" | 
David Willmore  | "Ever noticed how much they look like orchids? Lovely!" | 
---------------------------------------------------------------------------

Xref: gmd.de comp.unix.large:651 comp.arch.storage:2246 comp.sys.dec:12605
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!sol.ctr.columbia.edu!
news.kei.com!ub!acsu.buffalo.edu!kalisiak
From: kali...@cs.buffalo.edu (Chris Kalisiak)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <CIu01r.7Cr@acsu.buffalo.edu>
Followup-To: comp.unix.large,comp.arch.storage,comp.sys.dec
Sender: nn...@acsu.buffalo.edu
Nntp-Posting-Host: armstrong.cs.buffalo.edu
Organization: UB
X-Newsreader: TIN [version 1.1 PL8]
References: <willmore.757205446@metropolis.gis.iastate.edu>
Date: Thu, 30 Dec 1993 04:49:02 GMT
Lines: 32

David Willmore (will...@iastate.edu) wrote:
>s...@lokkur.dexter.mi.us (Steve Simmons) writes:
>>g...@lsil.com (Gary Bridgewater) writes:
>>> [[ some fairly intemperate things about big i/o and mainframes ]]

>>I have yet to see a UNIX box that can really do big i/o with the
>>mainframes.  Vendors talk a good fight on how many of the SCSI cards
>>they can put in their boxes, but thus far if you put the cards in and
>>do the tests, they fall short of the mainframes.  A classic test is to
>>crank in a single large dataset, do a simple operation on it, and crank
>>it back out.  Not an unusual operation.  Most UNIX systems immediately
>>become spindle-bound or single-bus-bound.  Those that permit striping
>>or something similar to spread that dataset over 10 drives and 10
>>busses usually bind up their internal bus.

>>When somebody shows me a real world application of the type above running
>>on a UNIX mini doing mainframe-level IO, I'll believe it.  Until then,
>>it's all talk.

>You mean like the world record for the 1 million record sort benchmark?
>Isn't the current record held by an Alpha?  Hmmmm?

Yes, the Alpha-based DEC 10000. The DEC 10000 is a mainframe-class machine.
Next question?

Chris

-- 
Chris Kalisiak         |"Pound for pound, lame puns are your best entertainment
kali...@cs.buffalo.edu    | value." -- Gogo Dodo, Tiny Toon Adventures
Tel/Fax:(716)692-5128/695-8481 |"Cocaine is God's way of telling you you have 
I'm a student; I don't speak for UB.| way too much money." -- Sting, The Police

Xref: gmd.de comp.unix.large:653 comp.arch.storage:2247 comp.sys.dec:12606
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!xlink.net!howland.reston.ans.net!math.ohio-state.edu!
cyber2.cyberstore.ca!nntp.cs.ubc.ca!newsserver.sfu.ca!sfu.ca!vanepp
From: van...@fraser.sfu.ca (Peter Van Epp)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <vanepp.757227174@sfu.ca>
Sender: ne...@sfu.ca
Organization: Simon Fraser University, Burnaby, B.C., Canada
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> 
<2fkrk2$22u@nameserv.sys.hokudai.ac.jp> <1993Dec27.111947.3033@ivax> 
<2fnpol$1a3@lsi.lsil.com> <1993Dec28.145451.9872@lokkur.dexter.mi.us>
Date: Thu, 30 Dec 1993 04:52:54 GMT
Lines: 77

s...@lokkur.dexter.mi.us (Steve Simmons) writes:

>g...@lsil.com (Gary Bridgewater) writes:

>> [[ some fairly intemperate things about big i/o and mainframes ]]

>I have yet to see a UNIX box that can really do big i/o with the
>mainframes.  Vendors talk a good fight on how many of the SCSI cards
>they can put in their boxes, but thus far if you put the cards in and
>do the tests, they fall short of the mainframes.  A classic test is to
>crank in a single large dataset, do a simple operation on it, and crank
>it back out.  Not an unusual operation.  Most UNIX systems immediately
>become spindle-bound or single-bus-bound.  Those that permit striping
>or something similar to spread that dataset over 10 drives and 10
>busses usually bind up their internal bus.

>When somebody shows me a real world application of the type above running
>on a UNIX mini doing mainframe-level IO, I'll believe it.  Until then,
>it's all talk.

>On the other hand, if your mainframe is primarily a timesharing machine
>with users running interactive stuff then it's ripe for replacement with
>a collection of UNIX boxes.
>-- 
>"God so loved Dexter that he put the University of Michigan somewhere
>else."

	In all of this thread (which is very interesting, by the way), what
is probably the primary impediment to massive conversions from mainframes
to distributed Unix has not been said: cost.  Most mainframes are running
business-critical applications (otherwise the cost of the mainframe wouldn't
be justified).  Equally, lots of people out in the company probably interact
in some way with at least the data that the mainframe produces.  This suggests
that for the term of the switchover, both the mainframe and (at least by
the end of the conversion) a Unix system of equivalent power have to be
on hand, along with the people to support them (who are hopefully the
same people that are supporting the mainframe).  A totally new set of
applications needs to be either written or acquired, and the people "out there"
using the data have to be trained to use the new system (while still running
the company on the old system). 
	In our case, a University site, running only one mainframe that wasn't
doing the administrative processing, this overlap of mainframe and Unix (and
the costs of both systems on the budget for only one of them ...) went on
for most of a year (the mainframe was still there 3 months after the cutover
"just in case", and partly to deal with tapes for things people had forgotten).
	The commercial shop where I came to the university from has 4 
mainframes, all bigger than the university's 1, and had a business requirement 
to be able to print customer statements across the space of a 2-day weekend that
kept 3 Xerox 130-page-per-minute printers busy for those two days.  I can
tell you from experience that Unix boxes will not drive even one (smaller,
only 90 pages per minute) of those printers at its rated speed (admittedly
more because of interface problems than raw I/O bandwidth). 
	As the post about the Alpha speed record (some posts down) points out,
there are now some Unix boxes that indeed do have reasonable I/O rates, but
they aren't cheap.  The DEC folks I have talked to suggested that
if you want big I/O, you buy the large VAX-class (and price-tag) servers, not
the deskside models.  That is still cheaper than a mainframe, though.
	I expect that if not the mainframe folks, then their managers would 
love to dump the mainframe, they just don't yet see an acceptable (from a
finance and risk standpoint) way to do it in many cases. It would be
truly ironic (but not at all impossible), if the mainframes didn't end up
moving to Unix, but rather ended up moving to RISC servers with mainframe
levels of performance but still running MVS and all the current applications
but saving the costs of the mainframe (of course the cost of an MVS licence
would have to take a nosedive). 
	I seem to recall some muttering from somewhere about "Open MVS", and 
the RS6000 is certainly a RISC box that doesn't cost anywhere near what a 
mainframe does.  I also saw a reference (probably in this thread) to a 
channel adapter for RS6000s allowing access to at least 3480 and 3490 tape 
drives, and possibly IBM disks as well, implying that channel-attached 
Xerox printers may be drivable too.  If the same old applications are running,
then the retraining costs are avoided as well (just the thought of it makes
me shudder ...)

Peter Van Epp / Operations and Technical Support  
Simon Fraser University, Burnaby, B.C. Canada
#include <std.disclaimer>

Xref: gmd.de comp.unix.large:655 comp.arch.storage:2248 comp.sys.dec:12607
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!Germany.EU.net!EU.net!howland.reston.ans.net!vixen.cso.uiuc.edu!
newsrelay.iastate.edu!news.iastate.edu!tremplo.gis.iastate.edu!willmore
From: will...@iastate.edu (David Willmore)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <willmore.757235217@tremplo.gis.iastate.edu>
Sender: ne...@news.iastate.edu (USENET News System)
Organization: Iowa State University, Ames IA
References: <willmore.757205446@metropolis.gis.iastate.edu> 
<CIu01r.7Cr@acsu.buffalo.edu>
Date: Thu, 30 Dec 1993 07:06:57 GMT
Lines: 17

kali...@cs.buffalo.edu (Chris Kalisiak) writes:
>>You mean like the world record for the 1 million record sort benchmark?
>>Isn't the current record held by an Alpha?  Hmmmm?

>Yes, the Alpha-based DEC 10000. The DEC 10000 is a mainframe-class machine.
>Next question?

A mainframe based on a microprocessor?  Hmmm....  I would dispute that
the 10000 is a mainframe-class machine.  It's the classic 'killer-micro'.

Cheers,
David
-- 
___________________________________________________________________________
will...@iastate.edu | "Death before dishonor" | "Better dead than greek" | 
David Willmore  | "Ever noticed how much they look like orchids? Lovely!" | 
---------------------------------------------------------------------------

Xref: gmd.de comp.unix.large:656 comp.arch.storage:2249 comp.sys.dec:12609
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!darwin.sura.net!gatech!news.ans.net!
ngate!serv4n57!clnt1n60.aix.kingston.ibm.com!bksmith
From: bks...@clnt1n60.aix.kingston.ibm.com ()
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <CIuqtp.F0B@serv4n57.aix.kingston.ibm.com>
Sender: Bernard King-Smith
Date: Thu, 30 Dec 1993 14:27:25 GMT
References: <willmore.757205446@metropolis.gis.iastate.edu> 
<CIu01r.7Cr@acsu.buffalo.edu> <willmore.757235217@tremplo.gis.iastate.edu>
Organization: IBM POWERparallel Systems.
Followup-To: comp.unix.large
Lines: 63

In article <willmore....@tremplo.gis.iastate.edu> will...@iastate.edu 
(David Willmore) writes:
>kali...@cs.buffalo.edu (Chris Kalisiak) writes:
>>>You mean like the world record for the 1 million record sort benchmark?
>>>Isn't the current record held by an Alpha?  Hmmmm?
>
>>Yes, the Alpha-based DEC 10000. The DEC 10000 is a mainframe-class machine.
>>Next question?
>
>A mainframe based on a microprocessor?  Hmmm....  I would debate that
>the 10000 is a Mainframe class machine.  It's the classic 'killer-micro'.
>

Since when does the architecture of a CPU define the type of machine 
you have? In any machine, the size of the CPU is immaterial to the
performance of the machine. If there were an implementation of the
ES/9000 CPU built on a single chip, would it be a mainframe or a microprocessor?

Looking back over time, the VAX falls under this definition. Recently,
you could get a DEC VAX machine sitting on a desktop running OSF/1. That
ought to fit your definition of a micro. Years ago when I was in college,
we had a VAX 740 "mainframe". Gee, the processor architecture was the
same, but the I/O was different. 

The underlying message in this thread is that the ability to move
and manage large amounts of data separates the "mainframe" class
machine from the "killer-micro", CPU architecture aside.

Another message is that UNIX in its lineage has carried along with it
a lot of baggage that inhibits efficient I/O, since it was originally
written for small machines with modest I/O requirements. There are
several UNIX implementations today on mainframe machines that have addressed
these problems, notably AIX/ESA (which I work(ed) on), UTS, and UNICOS,
to name a few. The big difference is that these UNIX implementations
were ported to machine (not CPU) architectures that had large I/O
capabilities. These systems made changes to try to take advantage of 
this I/O capability in spite of the problems of UNIX.

This is not to say that UNIX is inherently the wrong system, but it
needs fixing for these applications.



>Cheers,
>David
>-- 
>___________________________________________________________________________
>will...@iastate.edu | "Death before dishonor" | "Better dead than greek" | 
>David Willmore  | "Ever noticed how much they look like orchids? Lovely!" | 
>---------------------------------------------------------------------------


-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
Bernie King-Smith                          *  "Lead, follow,
IBM POWERparallel Systems Performance    *    or get out of the way."
bks...@donald.aix.kingston.ibm.com    *     Lee Iacoca,  Chrysler Corp.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*



Xref: gmd.de comp.unix.large:658 comp.arch.storage:2251 comp.sys.dec:12611
Path: gmd.de!newsserver.jvnc.net!darwin.sura.net!spool.mu.edu!
bloom-beacon.mit.edu!crl.dec.com!crl.dec.com!jg
From: j...@crl.dec.com (Jim Gettys)
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Date: 30 Dec 1993 15:20:51 GMT
Organization: DEC Cambridge Research Lab
Lines: 50
Distribution: world
Message-ID: <2furkj$rq7@quabbin.crl.dec.com>
References: <willmore.757205446@metropolis.gis.iastate.edu> <CIu01r.7Cr@acsu.buffalo.edu>
NNTP-Posting-Host: jg.crl.dec.com

In article <CIu01...@acsu.buffalo.edu>, kali...@cs.buffalo.edu (Chris Kalisiak) writes:
> 
> Yes, the Alpha-based DEC 10000. The DEC 10000 is a mainframe-class machine.
> Next question?
> 

Look again a bit more carefully; it used only 16 SCSI disks for the benchmark,
striping the data.  The aggregate bandwidth to/from disk is therefore well under what 
even our workstations provide (TURBOchannel has been measured at 93MB/second; even with
lots of bus acquisition, the bus can handle the amount of I/O involved to keep those disks running
flat out).   I don't happen to have handy performance data on our dual SCSI controller (note 
that I could have up to 6 in a workstation system if I were running this benchmark).  I don't happen 
to know if the dual controllers are truly independent or not; if they are, then I'm at nearly a controller
and SCSI channel per disk on the workstation (14 SCSI busses and controllers; 2 internal, 12 external via
6 dual controllers, though the external ones are likely more efficient, if I remember details 
of the system correctly).
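As a rough check on the claim that 16 striped SCSI disks sit well under
TURBOchannel's measured bandwidth, here is a back-of-the-envelope budget.  The
93MB/second figure is from the text above; the per-disk sustained rate is a
guessed placeholder for drives of that era, not a measured number:

```python
# Bandwidth budget for the benchmark configuration described above.
TURBOCHANNEL_MBS = 93      # measured TURBOchannel bandwidth, per the post
N_DISKS = 16               # SCSI disks used in the sort benchmark
PER_DISK_MBS = 3.0         # ASSUMED sustained MB/s per early-90s SCSI drive

aggregate = N_DISKS * PER_DISK_MBS        # 48 MB/s from all spindles
headroom = TURBOCHANNEL_MBS - aggregate   # 45 MB/s to spare on the bus
print(aggregate, headroom)
```

Under that assumed per-disk rate the bus can indeed keep every disk running
flat out, which is the point being made.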

I do happen to know that  the Genroco IPI TURBOstor controller has 
been clocked at above 30 MB/second file system performance with just a couple controllers on 
a DEC3000/500, with lots of CPU cycles (and TURBOchannel) left over, so I could add controllers
and up the performance. I don't happen to know what the top end is for the Genroco system.
I believe we ran such a demo at DECUS. I'll try to find out more on how fast we can go
and whether the SCSI controller is truly dual when I get back from New Years next week.

Striping is implemented for both VMS and our OSF/1 UNIX products these days.

So I'd be surprised if a suitably configured Alpha DEC 3000/800 would be all that much
slower for that benchmark.  Of course, for setting records, you run on the fastest machine
you can lay your hands on; and the 10000 was certainly it last spring when the work
was being done (the 3000/800 came out this fall).  A 10000 might still be faster than a 
3000/800, as its second level cache is 4MB rather than the 1MB (if memory serves) 
on the 3000/800.  Sort hits the memory system pretty hard.

Now, if I were running the DEC 10000 as a multiprocessor, rather than the uniprocessor
used for the benchmark, then I could want the I/O performance of the big system, if I needed
the aggregate bandwidth or storage capacity of the bigger system.  Some people need more than
a mere 250Gig of disk, after all :-).  And keeping a bunch of Alphas busy does require
a bit of bandwidth :-); not to mention the fact that 275MHz Alpha processors have been announced and
you can expect systems built on those chips in the not-too-distant future.  And often the
number of controllers and disk arms may be more of an issue than "bandwidth".

So there are certainly applications that want the "big iron"; but anything that was on a
supercomputer 2-4 years ago can certainly run on the small machines of today.  You need new applications
to keep the new "big iron" systems busy.  All the old ones run just fine on the little machines
(at least our little machines).
				- Jim

-- 
Digital Equipment Corporation
Cambridge Research Laboratory

Xref: gmd.de comp.unix.large:661 comp.arch.storage:2252 comp.sys.dec:12612
Path: gmd.de!xlink.net!howland.reston.ans.net!darwin.sura.net!
blaze.cs.jhu.edu!jhunix.hcf.jhu.edu!jhuvms.hcf.jhu.edu!ecf_stbo
From: ecf_...@jhuvms.hcf.jhu.edu (look out, here he comes again)
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Date: 30 Dec 1993 10:39 EDT
Organization: The Johns Hopkins University - HCF
Lines: 22
Distribution: world
Message-ID: <30DEC199310393729@jhuvms.hcf.jhu.edu>
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> 
<2fkrk2$22u@nameserv.sys.hokudai.ac.jp> <1993Dec27.111947.3033@ivax> 
<vanepp.757227174@sfu.ca>
NNTP-Posting-Host: jhuvms.hcf.jhu.edu
News-Software: VAX/VMS VNEWS 1.41    

In article <vanepp.7...@sfu.ca>, van...@fraser.sfu.ca (Peter Van Epp) writes...
>	The commercial shop where I came to the university from has 4 
>mainframes, all bigger than the university's 1, and had a business requirement 
>to be able to print customer statements across the space of a 2 day weekend that
>kept 3 Xerox 130 page per minute printers busy for those two days. I can
>tell you from experience that Unix boxes will not drive even one (smaller,
>only 90 page per minute) of those printers at its rated speed (admittedly
>more because of interface problems than raw I/O bandwidth). 

If you ask me, being able to drive a PRINTER full speed is a good reason to
get an adapter that works, not to keep a mainframe around. 







Tom O'Toole - ecf_...@jhuvms.hcf.jhu.edu - JHUVMS system programmer 
Homewood Computing Facilities
Johns Hopkins University, Balto. Md. 21218 
>What hath god wrought? - Gregory Peccary

Xref: gmd.de comp.unix.large:667 comp.arch.storage:2264 comp.sys.dec:12619
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!pipex!uunet!hela.iti.org!lokkur!scs
From: s...@lokkur.dexter.mi.us (Steve Simmons)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <1993Dec30.145526.21788@lokkur.dexter.mi.us>
Organization: Inland Sea
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> 
<2fkrk2$22u@nameserv.sys.hokudai.ac.jp> <1993Dec27.111947.3033@ivax> 
<2fnpol$1a3@lsi.lsil.com> <1993Dec28.145451.9872@lokkur.dexter.mi.us> 
<willmore.757205446@metropolis.gis.iastate.edu> <2ftiaj$ih6@quabbin.crl.dec.com>
Date: Thu, 30 Dec 93 14:55:26 GMT
Lines: 13

j...@crl.dec.com (Jim Gettys) writes:

>Yes, Alpha broke the sort record by a factor of 6.  Note it uses a single
>processor (our high end system), with 16 SCSI disks.  The somewhat old
>press release is below.  Happens to have been run under VMS however.
>				- Jim

Many thanks for injecting some facts into the fray.  Congrats, DEC.

Now, when will it run under UNIX?  :-)
-- 
"God so loved Dexter that he put the University of Michigan somewhere
else."

Xref: gmd.de comp.unix.large:683 comp.arch.storage:2278 comp.sys.dec:12632
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!europa.eng.gtefsd.com!
uunet!ksmith!keith
From: ke...@ksmith.com (Keith Smith)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Organization: Keith's Public Access Computer System
Date: Fri, 31 Dec 93 20:08:37 GMT
Message-ID: <1993Dec31.200837.5863@ksmith.com>
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> <1993Dec27.111947.3033@ivax> 
<vanepp.757227174@sfu.ca> <30DEC199310393729@jhuvms.hcf.jhu.edu>
Lines: 37

In article <30DEC199...@jhuvms.hcf.jhu.edu>,
look out, here he comes again <ecf_...@jhuvms.hcf.jhu.edu> wrote:
>In article <vanepp.7...@sfu.ca>, van...@fraser.sfu.ca (Peter Van Epp) writes...
>>	The commercial shop where I came to the university from has 4 
>>mainframes, all bigger than the university's 1, and had a business requirement 
>>to be able to print customer statements across the space of a 2 day weekend that
>>keep 3 Xerox 130 page per minute printers busy for those two days. I can
>>tell you from experience that Unix boxes will not drive even one (smaller,
>>only 90 page per minute) of those printers at its rated speed (admittedly
>>more because of interface problems than raw I/O bandwidth). 
>
>If you ask me, being able to drive a PRINTER full speed is a good reason to
>get an adapter that works, not to keep a mainframe around. 

I'm fighting a little of this now, but it seems to me plain old
Ethernet will push 800 Kbytes/sec.  How much throughput do you need to
maintain 130 ppm?  I get about 8K/page on our PostScript-based
statements, so via Ethernet I should be able to push close to 100 ppm
over a single Ethernet line.  Of course I only have 2 HPs @ 17 ppm.

Generally speaking, though, it seems that your main problem is not
the computer but the way you are doing business.

Let's see: at a net throughput of 100 ppm, that's 6,000 statements/hour
(wow); times 24 hours is 144K/day, or call it 300K statements in two days.

This is crazy.  Why not bust it up by ZIP code and run 20K/day instead?
This would mean you would only need a 14 ppm printer to run the same
volume.  Wanna bet something else?  You can buy 10 17 ppm printers
cheaper than 1 130 ppm one.
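The arithmetic above can be checked in a few lines.  The bandwidth and
page-size figures are the post's own (800 KB/s usable Ethernet, ~8 KB
per PostScript statement); interestingly, the raw wire rate works out
to about 100 pages per *second*, so the single-Ethernet figure of
100 ppm is very conservative, and the network is unlikely to be the
bottleneck at 130 ppm:

```python
# Back-of-envelope check of the figures in the post above.
ETHERNET_KBPS = 800        # usable KB/s on plain Ethernet (post's figure)
KB_PER_PAGE = 8            # PostScript statement size (post's figure)

pages_per_sec = ETHERNET_KBPS / KB_PER_PAGE   # 100 pages/s on the wire

ppm_sustained = 100                           # the post's conservative rate
statements_per_hour = ppm_sustained * 60      # 6,000/hour
per_day = statements_per_hour * 24            # 144,000/day
two_days = per_day * 2                        # 288,000 -- "call it 300K"

# Spreading ~300K statements over 15 working days at 20K/day instead:
spread_per_day = 20_000
ppm_needed = spread_per_day / (24 * 60)       # ~13.9, i.e. one 14 ppm printer

print(pages_per_sec, per_day, two_days, round(ppm_needed, 1))
```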

But of course you have to deal with the batch-processing mentality
rather than the continuous-processing one.
-- 
Keith Smith          ke...@ksmith.com              5719 Archer Rd.
Digital Designs      BBS 1-919-423-4216            Hope Mills, NC 28348-2201
Somewhere in the Styx of North Carolina ...

Xref: gmd.de comp.unix.large:693 comp.arch.storage:2289 comp.sys.dec:12648
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!xlink.net!howland.reston.ans.net!cs.utexas.edu!swrinde!
menudo.uh.edu!uuneo!sugar!jabberwock!daniels
From: dan...@biles.com (Brad Daniels)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> <vanepp.757227174@sfu.ca> 
<30DEC199310393729@jhuvms.hcf.jhu.edu> <1993Dec31.200837.5863@ksmith.com>
Organization: Biles and Associates
Date: Mon, 3 Jan 1994 17:33:42 GMT
Message-ID: <CJ2E47.IIM@biles.com>
Lines: 52

In article <1993Dec31....@ksmith.com>,
Keith Smith <ke...@ksmith.com> wrote:
>In article <30DEC199...@jhuvms.hcf.jhu.edu>,
>look out, here he comes again <ecf_...@jhuvms.hcf.jhu.edu> wrote:
>>In article <vanepp.7...@sfu.ca>, van...@fraser.sfu.ca (Peter Van Epp) writes...
>>>	The commercial shop where I came to the university from has 4 
>>>mainframes, all bigger than the university's 1, and had a business requirement 
>>>to be able to print customer statements across the space of a 2 day weekend that
>>>keep 3 Xerox 130 page per minute printers busy for those two days. I can
>>>tell you from experience that Unix boxes will not drive even one (smaller,
>>>only 90 page per minute) of those printers at its rated speed (admittedly
>>>more because of interface problems than raw I/O bandwidth). 
>>
>>If you ask me, being able to drive a PRINTER full speed is a good reason to
>>get an adapter that works, not to keep a mainframe around. 
>
>I'm fighting a little of this now, BUT, It seems to me plain old
>ethernet will push 800Kbytes/sec.  How much thruput do you need to
>maintain 130ppm?  I get about 8K/page on our postscript based
>statements, so via ethernet I should be able to push close to 100ppm
>over a single ethernet line.  Of course I only have 2 HP's @17ppm.

Actually, he's looking at 390 ppm.  It occurs to me, though, that this
application actually lends itself pretty well to distribution.  If you're
talking 17ppm HP printers, 390 ppm =~ 23 printers.  You could definitely
handle it by hardwiring each printer to a minimal workstation, then using
RPC to send across minimal info on each invoice and generating the actual
PostScript on the remote workstations before printing.  If bandwidth becomes
a problem, split it into two or three ethernets run from a single machine.
You could probably get together a complete setup like that for around
$400-$500K including some custom programming (though depending on the
complexity of the application, programming could run several hundred thousand
more), with an annual service bill from $50-$100K.  If you configure the
individual machines right, you can administer everything from the main
machine, resulting in very low system management overhead.  This kind of
distribution is pretty easy to do, and has the advantage of being hugely
scalable, with the only limitation being how fast the "master" machine
can pump out RPC requests.  It's the "sort X million records on disk" or
"coordinate X thousand simultaneous transactions" type stuff that requires
a single machine capable of fast computation and fast I/O.

A distributed approach like the above in effect uses a set of workstations
to make a "virtual mainframe", meaning there wouldn't even be a need to change
business practices substantially.  Most of the change could be hidden inside
the data center.
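The fan-out Brad describes can be sketched roughly as below.  This is
only an illustration, not his actual setup: threads and an in-process
queue stand in for the RPC layer and the per-printer workstations, and
all names (`worker`, the invoice dict) are made up for the example.

```python
# Master/worker fan-out: a "master" pushes compact invoice records to
# per-printer workers; each worker renders its PostScript locally.
import math
import queue
import threading

TARGET_PPM = 390
PRINTER_PPM = 17
n_printers = math.ceil(TARGET_PPM / PRINTER_PPM)   # =~ 23 printers

jobs = queue.Queue()        # stands in for the RPC request stream
printed = []
lock = threading.Lock()

def worker(printer_id):
    while True:
        invoice = jobs.get()
        if invoice is None:                        # sentinel: no more work
            break
        # "Rendering" happens on the worker, not the master -- only the
        # compact record crossed the (simulated) wire.
        ps = f"%!PS invoice {invoice['id']} total {invoice['total']:.2f}"
        with lock:
            printed.append((printer_id, ps))

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(n_printers)]
for t in threads:
    t.start()
for i in range(100):                               # 100 compact records
    jobs.put({"id": i, "total": 10.0 + i})
for _ in threads:
    jobs.put(None)                                 # one sentinel per worker
for t in threads:
    t.join()

print(n_printers, len(printed))
```

The design point this illustrates is the one in the text: the master
only pumps out small requests, so it scales until request dispatch
itself becomes the bottleneck.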

- Brad
--------------------------------------------------------------------------
+ Brad Daniels                  | Until you can prove unequivocally that +
+ Biles and Associates          | you're arguing epistemology with me,   +
+ These are my views, not B&A's | I won't argue epistemology with you.   +
--------------------------------------------------------------------------

Xref: gmd.de comp.unix.large:697 comp.arch.storage:2296 comp.sys.dec:12657
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!howland.reston.ans.net!math.ohio-state.edu!
cyber2.cyberstore.ca!nntp.cs.ubc.ca!newsserver.sfu.ca!sfu.ca!vanepp
From: van...@fraser.sfu.ca (Peter Van Epp)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <vanepp.757650478@sfu.ca>
Sender: ne...@sfu.ca
Organization: Simon Fraser University, Burnaby, B.C., Canada
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> <vanepp.757227174@sfu.ca> 
<30DEC199310393729@jhuvms.hcf.jhu.edu> <1993Dec31.200837.5863@ksmith.com> 
<CJ2E47.IIM@biles.com>
Date: Tue, 4 Jan 1994 02:27:58 GMT
Lines: 40

dan...@biles.com (Brad Daniels) writes:

>Actually, he's looking at 390 ppm.  It occurs to me, though, that this
>application actually lends itself pretty well to distribution.  If you're
>talking 17ppm HP printers, 390 ppm =~ 23 printers.  You could definitely
>handle it by hardwiring each printer to a minimal workstation, then using
>RPC to send across minimal info on each invoice and generating the actual
>PostScript on the remote workstations before printing.  If bandwidth becomes
>a problem, split it into two or three ethernets run from a single machine.
>You could probably get together a complete setup like that for around
>$400-$500K including some custom programming (though depending on the
>complexity of the application, programming could run several hundred thousand
>more), with an annual service bill from $50-$100K.  If you configure the
>individual machines right, you can administer everything from the main
>machine, resulting in very low system management overhead.  This kind of
>distribution is pretty easy to do, and has the advantage of being hugely
>scalable, with the only limitation being how fast the "master" machine
>can pump out RPC requests.  It's the "sort X million records on disk" or
>"coordinate X thousand simultaneous transactions" type stuff that requires
>a single machine capable of fast computation and fast I/O.

	We (at the university) essentially did this: printing is handled by
6 HP 3si printers spread around campus (and charged for at the printer with
mag cards at $.05 per page).  The rub with this for data-center operations
such as the one I described using 3 high-speed printers is twofold: duty
cycle and cost.  The large Xerox has a duty cycle of 1.5 to 2 million pages
per month; a 3si is around 50,000 pages per month as I recall (although you
can do 150,000 or more without problems).  I am told (I, thank god, don't do
the budget!) that the Xerox printer is about $.01 per page printed (if the
print volume is high enough) against around $.03 per page on the HPs.  At
high volume, that starts to matter.  There are also logistical problems,
i.e. an expensive operator who has to keep all those paper trays on the HPs
full (there are, I think, 2,500-sheet trays on the Xerox).  When doing these
types of things you need to look at all the costs over the lifetime of
the service to see if you will really save money.  Myself, I favor the
6 HP 3si solution, but my boss, who has to pay for it, doesn't.
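A rough sketch of the lifetime-cost arithmetic behind this point.  The
per-page figures and duty cycle are the ones quoted above; the 5-year
service life and the operator's loaded cost are assumptions for the
illustration, not numbers from the post:

```python
# Lifetime cost: one big Xerox vs. a farm of small HPs plus an operator.
PAGES_PER_MONTH = 1_500_000        # low end of the Xerox duty cycle (post)
MONTHS = 60                        # ASSUMED 5-year service life

xerox_per_page = 0.01              # post's figure, at high volume
hp_per_page = 0.03                 # post's figure
operator_per_year = 40_000         # ASSUMED loaded cost of tray-filling staff

pages = PAGES_PER_MONTH * MONTHS
xerox_total = pages * xerox_per_page
hp_total = pages * hp_per_page + operator_per_year * (MONTHS / 12)

print(f"Xerox: ${xerox_total:,.0f}   HPs: ${hp_total:,.0f}")
```

At this volume the $.02/page difference alone is $1.8M over the assumed
lifetime, which is why the per-page cost, not the hardware price,
dominates the decision.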

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada

Xref: gmd.de comp.unix.large:726 comp.arch.storage:2348 comp.sys.dec:12712
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!xlink.net!howland.reston.ans.net!cs.utexas.edu!uunet!nwnexus!
a2i!dhesi
From: dh...@rahul.net (Rahul Dhesi)
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Message-ID: <CJ7IsH.5E2@rahul.net>
Sender: ne...@rahul.net (Usenet News)
Nntp-Posting-Host: bolero
Organization: a2i network
References: <a2Gr02TY59An01@JUTS.ccc.amdahl.com> <vanepp.757227174@sfu.ca> 
<30DEC199310393729@jhuvms.hcf.jhu.edu> <1993Dec31.200837.5863@ksmith.com> 
<CJ2E47.IIM@biles.com>
Date: Thu, 6 Jan 1994 12:02:40 GMT
Lines: 12

Has anybody tried defining 'mainframe' recently?  It used to be that
the classification of mainframe versus mini versus micro was based
solely on price.

Now it appears that 'mainframe' has come to mean 'a machine with very
high I/O capacity'.  If so, this entire discussion is moot, isn't it?
By definition, a mainframe will do better I/O than a non-mainframe.

What's a good definition of 'mainframe'?
-- 
Rahul Dhesi <dh...@rahul.net>
also:  dh...@cirrus.com

Xref: gmd.de comp.unix.large:729 comp.arch.storage:2352 comp.sys.dec:12721
Newsgroups: comp.unix.large,comp.arch.storage,comp.sys.dec
Path: gmd.de!newsserver.jvnc.net!darwin.sura.net!howland.reston.ans.net!
europa.eng.gtefsd.com!news.ans.net!ngate!serv4n57!clnt1n60.aix.kingston.ibm.com!
bksmith
From: bks...@clnt1n60.aix.kingston.ibm.com ()
Subject: Re: Big I/O or Kicking the Mainframe out the Door
Sender: ne...@serv4n57.aix.kingston.ibm.com (KGN AFS Cell News Server)
Message-ID: <CJ82IC.nz7@serv4n57.aix.kingston.ibm.com>
Date: Thu, 6 Jan 1994 19:08:36 GMT
References: <1993Dec31.200837.5863@ksmith.com> <CJ2E47.IIM@biles.com> 
<CJ7IsH.5E2@rahul.net>
Organization: IBM POWERparallel Systems.
Lines: 34

In article <CJ7Is...@rahul.net> dh...@rahul.net (Rahul Dhesi) writes:
>Has anybody tried defining 'mainframe' recently?  It used to be that
>the classification of mainframe versus mini versus micro was based
>solely on price.
>
>Now it appears that 'mainframe' has come to mean 'a machine with very
>high I/O capacity'.  If so, this entire discussion is moot, isn't it?
>By definition, a mainframe will do better I/O than a non-mainframe.
>
>What's a good definition of 'mainframe'?

The current crop of machines/manufacturers that the current PCs and
desktops are trying to replace.  8-)


>-- 
>Rahul Dhesi <dh...@rahul.net>
>also:  dh...@cirrus.com


DISCLAIMER: The opinions expressed here are my own, and are not 
necessarily those of my employer.

-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
Bernie King-Smith                          *  "Lead, follow,
IBM POWERparallel Development Services   *    or get out of the way."
bks...@donald.aix.kingston.ibm.com    *     Lee Iacocca, Chrysler Corp.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*


Notice
******

The materials and information included in this website may only be used
for purposes such as criticism, review, private study, scholarship, or
research.