Path: sparky!uunet!hela.iti.org!nigel.msen.com!emv
From: e...@msen.com (Edward Vielmetti)
Newsgroups: mi.misc
Subject: LISA VI paper available via anon ftp
Date: 9 Nov 1992 23:22:39 GMT
Organization: Msen, Inc. -- Ann Arbor, Michigan
Lines: 26
Message-ID: <1dmrsdINNcs4@nigel.msen.com>
NNTP-Posting-Host: garnet.msen.com
X-Newsreader: TIN [version 1.1 PL6]

The folks at Simon Fraser University have managed to 
get themselves out of using MTS and onto other computing
systems (Unix and VMS) without an undue amount of grief.
I know that there's a lot of angst building up at the
U of Michigan about the same sort of transition, and in
the absence of strong central planning here's at least some
hint of what some other organization facing the same
move has gone through.


  Edward Vielmetti, vice president for research, Msen Inc. e...@Msen.com
        Msen Inc., 628 Brooks, Ann Arbor MI  48103 +1 313 998 GLOB

[ Article crossposted from comp.unix.large,comp.org.usenix ]
[ Author was van...@fraser.sfu.ca ]
[ Posted on Mon, 9 Nov 1992 20:36:30 GMT ]

I have just made a copy of our LISA VI paper
"Dropping The Mainframe Without Crushing The Users: Mainframe to Distributed
UNIX in Nine Months"
available for anon ftp from ftpserver.sfu.ca in directory
/pub/ucspapers/LISA-VI.paper.ps.Z 
(which as the file name implies is in PostScript).
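
For anyone who hasn't pulled a paper this way before, the usual anonymous
ftp sequence works (an illustrative transcript; log in as "anonymous" with
your mail address as the password):

    ftp ftpserver.sfu.ca
    ftp> binary
    ftp> cd /pub/ucspapers
    ftp> get LISA-VI.paper.ps.Z
    ftp> quit
    uncompress LISA-VI.paper.ps.Z

The resulting LISA-VI.paper.ps can then go to any PostScript printer or
previewer.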

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada

Xref: sparky comp.unix.large:363 mi.misc:726
Newsgroups: comp.unix.large,mi.misc
Path: sparky!uunet!hela.iti.org!hela.iti.org!scs
From: s...@iti.org (Steve Simmons)
Subject: Re: LISA VI paper available via anon ftp
Message-ID: <scs.721401144@hela.iti.org>
Sender: use...@iti.org (Hela USENET News System)
Nntp-Posting-Host: hela.iti.org
Organization: Industrial Technology Institute
References: <1dmrsdINNcs4@nigel.msen.com>
Date: Tue, 10 Nov 1992 13:12:24 GMT
Lines: 75

[ I've taken the liberty of re-crossposting this to comp.unix.large,
  where Peter hangs out.  Peter, feel free to correct me or add
  your own comments.  mi.misc is the Michigan general discussion
  group.  Neither Ed nor Peter says this explicitly -- the mainframe
  they dropped was running MTS in a usage model very much like the
  University of Michigan.  --scs ]

e...@msen.com (Edward Vielmetti) writes:

>The folks at Simon Fraser University have managed to 
>get themselves out of using MTS and onto other computing
>systems (Unix and VMS) without an undue amount of grief.
>I know that there's a lot of angst building up at the
>U of Michigan about the same sort of transition, and in
>the absence of strong central planning here's at least some
>hint of what some other organization facing the same
>move has gone through.

>[ Article crossposted from comp.unix.large,comp.org.usenix ]
>[ Author was van...@fraser.sfu.ca ]
>[ Peter Van Epp / Operations and Technical Support ]
>[ Posted on Mon, 9 Nov 1992 20:36:30 GMT ]

>I have just made a copy of our LISA VI paper "Dropping The Mainframe
>Without Crushing The Users: Mainframe to Distributed UNIX in Nine Months"
>available for anon ftp from ftpserver.sfu.ca in directory
>/pub/ucspapers/LISA-VI.paper.ps.Z 
>(which as the file name implies is in PostScript).

Peter's paper and talk were quite interesting.  He did stress a few
points in the talk that are worth repeating.

There was not an overall reduction in cost when you count many of the
conversion costs.  It was not clear to me how much was conversion (new
network stuff, etc) and how much was CPU and disk.  IMHO, the cost
benefit will probably improve over time as distributed UNIX boxes
continue to decline in cost.  For a site like University of Michigan,
much of that conversion wouldn't be needed -- they already have the
network, AFS, etc.

There was *very* strong central control.  Last year at LISA he
mentioned the project had just been started, and they had some doubts
about their ability to execute.  Fortunately for him, he was wrong.
The central control got behind the project and pushed with resources
and political support to get it done.

Many MTS utilities were not ported to the UNIX environment.  I asked
specifically about MICRO and we talked briefly about a few others.  They
seem to have adopted a policy that if there was already a UNIX
equivalent, conversion would have to be done.

Some users just couldn't get it.  They had people claiming not to know
the change was coming even after the mainframe was turned off.

Tapes were a big issue.  They found the unix tape facilities woefully
underpowered.  Lack of ability to read IBM formatted tapes was a big
loss (they've got years and years of backup and archives), and tape
throughput speed left a lot to be desired.

Mainframe I/O speed was a big issue.  They had a small number of people
who put huge (by mainframe standards!) datasets onto 250
inch-per-second tape drives and did data processing.  There is no UNIX
box on their site which can do this; such processing is now farmed
out.

One thing that came out more strongly in his talk was "the mainframe
attitude".  I don't know how to put it any better than that, and it's
meant as a compliment.  They did serious work evaluating, benchmarking,
and testing the performance of the various pieces they installed, much
more thoroughly than I see at most installations.  Then they tested
afterwards to see what they'd actually done.  Very refreshing, almost
inspiring.
-- 
scs:  Currently tied for 2536th on the Zwicky list with `zwicky'.  The
      list was compiled by Elizabeth Zwicky (not Fritz, not Arnold),
      zwi...@erg.sri.com.

Xref: sparky comp.unix.large:364 mi.misc:727
Newsgroups: comp.unix.large,mi.misc
Path: sparky!uunet!destroyer!cs.ubc.ca!newsserver.sfu.ca!sfu.ca!vanepp
From: van...@fraser.sfu.ca (Peter Van Epp)
Subject: Re: LISA VI paper available via anon ftp
Message-ID: <vanepp.721411740@sfu.ca>
Sender: ne...@sfu.ca
Organization: Simon Fraser University, Burnaby, B.C., Canada
References: <1dmrsdINNcs4@nigel.msen.com> <scs.721401144@hela.iti.org>
Date: Tue, 10 Nov 1992 16:09:00 GMT
Lines: 198

s...@iti.org (Steve Simmons) writes:

>[ I've taken the liberty of re-crossposting this to comp.unix.large,
>  where Peter hangs out.  Peter, feel free to correct me or add
>  your own comments.  mi.misc is the Michigan general discussion
>  group.  Neither Ed nor Peter says this explicitly -- the mainframe
>  they dropped was running MTS in a usage model very much like the
>  University of Michigan.  --scs ]

>e...@msen.com (Edward Vielmetti) writes:

>>The folks at Simon Fraser University have managed to 
>>get themselves out of using MTS and onto other computing
>>systems (Unix and VMS) without an undue amount of grief.
>>I know that there's a lot of angst building up at the
>>U of Michigan about the same sort of transition, and in
>>the absence of strong central planning here's at least some
>>hint of what some other organization facing the same
>>move has gone through.

	I would note that this summer both Durham and Newcastle (also
MTS sites) have converted to Unix (I assume successfully!).
The same ftp site has a paper from the MTS community workshop last year
at RPI that covers more of the MTS specifics of the conversion (and an
FS tape utility) as well. The workshop this year is at SFU, so the UM
folks can come over and see how we did (and how we didn't :-) ) as well.
	Remember that all the MTS sites (SFU probably less than most of 
the others due to people leaving) have a very important resource for 
conversions like this: very skilled people. The skills needed to architect
and write and maintain a mainframe operating system (i.e. MTS) transfer
easily to the Unix environment. The major problem that we found (and are
still finding) is a lack of understanding of performance issues at high
loads (but to be fair, for political reasons "mainframe-like" Unix boxes
were a no-no here) among at least the Unix vendors that we talked to. UM
also has a significant amount of Unix expertise (Peter Honeyman comes to
mind as the most obvious example, but there are lots of other skilled folks
that have been at various MTS workshops as well). In return for that though,
UM is a lot bigger than SFU, and some of the issues that we managed to slide
through (a common uid space for the whole campus for instance) will be 
more difficult at UM, and there will be a lot more demand (at least I
expect) for things like 3480 and 3420 tape support.
	I'll note here that things seem to be looking up on the Unix front
(and if you have a big Sequent or HP, maybe they have always looked up :-) );
machines with reasonable sounding I/O are coming. I borrowed a SCSI 3480
tape drive that I know is capable of streaming (our big VAX has two and it
can make them stream!), and couldn't get any Unix box we have to make it
stream. The drive data starves (just like when you give it a small block size
from MTS and the buffer empties and it loses stream), and as a result it was
slower than a single density 8mm drive (instead of the 1 megabyte/sec that
it is capable of). It sounds like some of the new Unix boxes should have 
decent I/O rates, and make at least 3480 type tapes accessible (not cheap,
but at least accessible).
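	To make the data-starving concrete: on the host side the cure is
mostly feeding the drive large blocks back to back. A minimal sketch of
the write side in C (the device name and block size are assumptions that
vary by system, and this is an illustration, not our actual tape code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define BLK (256 * 1024)  /* big blocks; small ones let the buffer empty */

    int main(void)
    {
        char *buf = malloc(BLK);
        int tape = open("/dev/rmt8", O_WRONLY);  /* device name varies */
        ssize_t n;

        if (buf == NULL || tape < 0) {
            perror("tape setup");
            return 1;
        }
        /* Read big chunks from stdin and hand each to the drive in one
           write(); on a raw tape device each write() becomes one tape
           block, so block size (and how quickly we can refill buf)
           decides whether the drive streams or stops to reposition.
           A real version would top buf up to a full BLK before writing,
           since pipes can return short reads. */
        while ((n = read(0, buf, BLK)) > 0)
            if (write(tape, buf, n) != n) {
                perror("tape write");
                return 1;
            }
        close(tape);
        return 0;
    }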

>>[ Article crossposted from comp.unix.large,comp.org.usenix ]
>>[ Author was van...@fraser.sfu.ca ]
>>[ Peter Van Epp / Operations and Technical Support ]
>>[ Posted on Mon, 9 Nov 1992 20:36:30 GMT ]

>>I have just made a copy of our LISA VI paper "Dropping The Mainframe
>>Without Crushing The Users: Mainframe to Distributed UNIX in Nine Months"
>>available for anon ftp from ftpserver.sfu.ca in directory
>>/pub/ucspapers/LISA-VI.paper.ps.Z 
>>(which as the file name implies is in PostScript).

>Peter's paper and talk were quite interesting.  He did stress a few
>points in the talk that are worth repeating.

>There was not an overall reduction in cost when you count many of the
>conversion costs.  It was not clear to me how much was conversion (new
>network stuff, etc) and how much was CPU and disk.  IMHO, the cost
>benefit will probably improve over time as distributed UNIX boxes
>continue to decline in cost.  For a site like University of Michigan,
>much of that conversion wouldn't be needed -- they already have the
>network, AFS, etc.

I should point out that the lack of overall cost reduction is my opinion,
and includes the cost of installing the fibre based network over the last
5 or 6 years. Around here, that is not taken into account, and the Unix
conversion is considered "much cheaper" than the mainframe solution. I
will point out that this only considers the machines in the central 
machine room, not the various labs of PCs that were installed to take some
of the load off of the mainframe over the years (and at several hundred
thousand each, are a significant chunk of money). We are just installing a
public lab of 40 NeXT stations that also doesn't get counted into the costs
(at least that is my impression!). I include all those costs when saying that
the Unix conversion was more expensive (note that I believe it was worth it,
just that "cost reduction" probably isn't an issue if we are being honest).
	I completely agree that Unix boxes are cheap and getting cheaper; the
problem (here and elsewhere, listening to the other LISA participants) is 
not hardware, but people. In my opinion, there has been a net reduction in
service to the community as a result of this conversion. We have gone from
maintaining "one" machine (there are actually three of them, 2 of which are
still here for the admin side) to maintaining the 16 Unix boxes of 4
different architectures that have replaced it (to say nothing of the new
NeXT lab), with the same number of people. What has been lost (again in my
opinion) is the level of service to the community. Several people that used
to be user consultants have been conscripted to be Unix support people,
simply to keep up with the work, and private workstations on campus have
been deserted due to lack of staff time. There is certainly demand out in
the community for many more people to support distributed Unix on the
desktop; the only thing lacking is money to pay for them (and I expect the
salaries will make IBM maintenance look cheap!).

>There was *very* strong central control.  Last year at LISA he
>mentioned the project had just been started, and they had some doubts
>about their ability to execute.  Fortunately for him, he was wrong.
>The central control got behind the project and pushed with resources
>and political support to get it done.

The central control is a series of committees of academics that set computing
policy, and (I will admit that this part surprised me, and I think all of us)
they have done an excellent job. I believe they also got a very eye-opening 
education in the costs and economics of computing along the way.
	The resources used are (and as far as I am aware, only!) the operating
money that was funding the mainframe (which is part of the reason for the 
speed of the conversion, since while the mainframe was still here, the same 
money was being spent twice). 
	There is no question that from the political side, there was strong
(and politically irresistible) support, since the push was coming from 
members of the academic community, not the computing center! This was a major
win for the committee structure (that and the seriousness that the committees
brought to the task of educating themselves on computing, since many of them
are not professional computing people, nor even Computing Science people).

>Many MTS utilities were not ported to the UNIX environment.  I asked
>specifically about MICRO and we talked briefly about a few others.  They
>seem to have adopted a policy that if there was already a UNIX
>equivalent, conversion would have to be done.

Other than an FS tape conversion utility, no MTS-specific utilities were 
ported. SPIRES applications went to a package called BRS, and everything
else was either dropped or converted to an off-the-shelf Unix solution.
A small number (less than 10) of people continued their work (if it was
going to end soon) on the MTS system at UBC. This turned out to be a much
smaller number than we expected.

>Some users just couldn't get it.  They had people claiming not to know
>the change was coming even after the mainframe was turned off.

They knew; they just didn't believe it would happen, with good reason. MTS
came to SFU to replace OS/MVT; MTS is gone, and OS/MVT is only now scheduled
to be shut down next April. Given that, would you have believed that MTS
would actually be gone in the 9 months that management claimed? We surprised
a lot of people (including ourselves!).

>Tapes were a big issue.  They found the unix tape facilities woefully
>underpowered.  Lack of ability to read IBM formatted tapes was a big
>loss (they've got years and years of backup and archives), and tape
>throughput speed left a lot to be desired.

>Mainframe I/O speed was a big issue.  They had a small number of people
>who put huge (by mainframe standards!) datasets onto 250
>inch-per-second tape drives and did data processing.  There is no UNIX
>box on their site which can do this; such processing is now farmed
>out.

Not even that good: in some cases the data sets spin on disk, and in some
cases I don't know what the people do. We have an optical juke box that is
supposed to help with this (20 gigs of space) but it isn't online yet. Even
when it is, it is not going to give the same I/O performance as an IBM tape
connected to an IBM channel. The common method of work on MTS was to mount
three or four tapes and use the tapes as the input to the processing going
on in the CPU; I expect that the optical juke's performance isn't going to
replace that function. It looks more like: pull some of your data down to
temp disk, process it, and then put that part back to the juke and do
another piece (but we don't know yet).

>One thing that came out more strongly in his talk was "the mainframe
>attitude".  I don't know how to put it any better than that, and it's
>meant as a compliment.  They did serious work evaluating, benchmarking,
>and testing the performance of the various pieces they installed, much
>more thoroughly than I see at most installations.  Then they tested
>afterwards to see what they'd actually done.  Very refreshing, almost
>inspiring.
>-- 

It was also very hard and not very successful, since the basic performance
measurement tools that we take for granted on mainframes seem to be totally
lacking on Unix boxes (or at least the ones that we have), or we haven't
found the Unix equivalent yet (which is also possible).
	The general (and possibly wrong!) impression I get of typical Unix
shops is that they don't tend to have the "bet the business" type of 
seriousness that a large commercial DP shop has. I came here from an airline
where the reservations system was considered so vital that there was a spare
$4 million mainframe (and 2 of everything else involved as well) just in case
something broke, so I may be a bit biased, shall we say :-).
	A good case in point is backup tapes: I see many sites using Sony
video tape because it is $10 instead of the $20 a certified data tape costs,
but you may be risking $100,000 of data (on your backup tapes) to save $10.
How many other Unix sites duplicate their weekly full backup tapes and move
them offsite in case of disaster (or tape breakage, for that matter)? Both
are standard mainframe/commercial shop practices that are being done on
Unix here.
	I was talking to someone I know at the local telephone company, whose
background is the same as mine, and we were shaking our heads over the 
practices that he found on Unix boxes that were helping to run the telephone
network, so I don't think the impression is necessarily wrong.

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada

Xref: sparky comp.unix.large:370 mi.misc:729
Path: sparky!uunet!gumby!destroyer!news.itd.umich.edu!pisa.citi.umich.edu!rees
From: re...@pisa.citi.umich.edu (Jim Rees)
Newsgroups: comp.unix.large,mi.misc
Subject: Re: LISA VI paper available via anon ftp
Date: 11 Nov 1992 19:29:15 GMT
Organization: University of Michigan CITI
Lines: 11
Distribution: world
Message-ID: <5c4efaf4.1bc5b@pisa.citi.umich.edu>
References: <1dmrsdINNcs4@nigel.msen.com> <scs.721401144@hela.iti.org> 
<vanepp.721411740@sfu.ca>
Reply-To: Jim....@umich.edu
NNTP-Posting-Host: pisa.citi.umich.edu

In article <vanepp.7...@sfu.ca>, van...@fraser.sfu.ca (Peter Van Epp) writes:

  ...
  UM is a lot bigger than SFU, and some of the issues that we managed to slide
  through (a common uid space for the whole campus for instance) will be 
  more difficult at UM...

Actually, that's one of the only parts that has been done here.  We've got a
pretty nifty distributed system for assigning uids called uniqname.  I think
there's a paper on it somewhere (Usenix?).  Some large fraction of the
University community now has a unique Unix uid.

Xref: sparky comp.unix.large:375 mi.misc:734
Newsgroups: comp.unix.large,mi.misc
Path: sparky!uunet!destroyer!cs.ubc.ca!newsserver.sfu.ca!sfu.ca!vanepp
From: van...@fraser.sfu.ca (Peter Van Epp)
Subject: Re: LISA VI paper available via anon ftp
Message-ID: <vanepp.721543207@sfu.ca>
Sender: ne...@sfu.ca
Organization: Simon Fraser University, Burnaby, B.C., Canada
References: <1dmrsdINNcs4@nigel.msen.com> <scs.721401144@hela.iti.org> 
<vanepp.721411740@sfu.ca> <5c4efaf4.1bc5b@pisa.citi.umich.edu>
Date: Thu, 12 Nov 1992 04:40:07 GMT
Lines: 86

	I suppose I should ask if the folks in comp.unix.large would like us
to move this discussion out of here? While we are certainly talking about large
unix systems, it may not be of interest to more than me and the Michigan folks,
and I expect I could talk the UM folks out of a Unix guest account and carry 
on from there.

re...@pisa.citi.umich.edu (Jim Rees) writes:

>In article <vanepp.7...@sfu.ca>, van...@fraser.sfu.ca (Peter Van Epp) writes:

>  ...
>  UM is a lot bigger than SFU, and some of the issues that we managed to slide
>  through (a common uid space for the whole campus for instance) will be 
>  more difficult at UM...

>Actually, that's one of the only parts that has been done here.  We've got a
>pretty nifty distributed system for assigning uids called uniqname.  I think
>there's a paper on it somewhere (Usenix?).  Some large fraction of the
>University community now has a unique Unix uid.

If I had thought of it, I knew that :-); I was expecting that you might be 
big enough to run out of 65000+ uids. It seems to me that the IFS project
is a fairly good step along the path to MTS migration too (although I haven't
heard how it is going lately). We are small enough that we could make do with
a single NFS fileserver to give us central file services (like backup and
restore, although the restore side is a lot less convenient than the MTS
version). There are a number of people (smart ones in my opinion!) that 
want nothing to do with their own Unix box if it means (as it currently
does at SFU) that they have to do their own system administration and 
system backup. They figure that it is better for the university to pay us to 
look after all of that for them. There are also people that are choosing to
do it all for themselves, using such central services as they like, such
as mail delivery, NetNews service, and the PD software for the machine types
that we support, which we export to the world (read-only) from the file
server.
	The major advantage of the setup as it stands now is that both
kinds of service are available for those that want them (unlike MTS where
it was all central). The next challenge is going to be whether we can
provide some kind of a "consulting" system administration service to make
it easier for people to run machines on their desks. I expect that for that
to happen we will need either AFS or DFS, for both security and local
caching of files from a central file server (like IFS), so that dataless
machines on people's desktops can be supported. If we can't work that out,
then there is going to be a constant war for funding between the people
that want central support (and the departments that are too small to be
able to do it for themselves) and the people that want the money from the
computing center budget to do it themselves (I expect you folks at UM are
familiar with this argument!).
	I suspect that most of the rest of the MTS community will follow us
down the road; after all, many of the ideas that we used were generated
within the MTS community at the various MTS workshops over the last 7 or 8
years while we thought and talked about what should replace MTS. We just got 
the conversion speeded up quite a bit by the computing center being moved 
under a new Vice President (from the VP Research and Development to the 
Associate VP Academic).  The new VP thought that it could be done (and by 
implication, that the people working for him could do it!), and succeeded 
in getting the academic community to buy in by giving the control of the 
project to the academic committees.
	These committees consulted with their peers about what the community
requirements were and then balanced that against the amount of money that
they had to work with (i.e. the maintenance budget for the mainframe). 
	Meanwhile the computing center staff identified what and 
how much was currently being done on the mainframe, and then presented that
data to the committee to decide what should be done about it (where possible
we identified what could be done about it and how much it would cost as
well). In some cases the decision turned out to be that the service would
no longer be offered, either because there was no Unix alternative or
because it was too expensive.
	The fact that it was a committee of their peers making these decisions
rather than the computing center was probably the single thing that smoothed
this transition the most. If the computing center had done this, we would of
course have been wrong, but as the committees decided it and the faculty had
input into the committees, it is harder to complain (not impossible of
course, but harder).
	I will admit that when this plan was proposed (dump MTS within the 
next nine months, under the direction of a series of academic committees),
I thought it was probably resume time, time to once again watch the fireworks
from afar ... Luckily I held off doing anything much about it (other than
updating my resume of course), since while it was a tremendous amount of
work, it is also very satisfying to see the results of that work. The
committees were good enough to consult with the computing center for advice
about what to buy (although the final decisions were of course theirs), and
how we thought things should be done. I hope and believe that they are 
reasonably happy (as is the community) with what resulted. The conversion 
went far more smoothly than we had any right to expect.

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada

Xref: sparky comp.unix.large:377 mi.misc:737
Path: sparky!uunet!gumby!destroyer!news.itd.umich.edu!pisa.citi.umich.edu!rees
From: re...@pisa.citi.umich.edu (Jim Rees)
Newsgroups: comp.unix.large,mi.misc
Subject: Re: LISA VI paper available via anon ftp
Date: 12 Nov 1992 18:33:22 GMT
Organization: University of Michigan CITI
Lines: 17
Distribution: world
Message-ID: <5c53cdbd.1bc5b@pisa.citi.umich.edu>
References: <1dmrsdINNcs4@nigel.msen.com> <scs.721401144@hela.iti.org> 
<vanepp.721411740@sfu.ca> <5c4efaf4.1bc5b@pisa.citi.umich.edu> 
<vanepp.721543207@sfu.ca>
Reply-To: Jim....@umich.edu
NNTP-Posting-Host: pisa.citi.umich.edu

In article <vanepp.7...@sfu.ca>, van...@fraser.sfu.ca (Peter Van Epp) writes:

  If I had thought of it, I knew that :-); I was expecting that you might be 
  big enough to run out of 65000+ uids.

It's worse than that.  There are still some Unix systems out there that
store the uid in a signed short, which means you've only got 32,000 of them
available.  That's roughly the size of our user community, and I think we
have about 28,000 assigned now, even without IFS extensively deployed.  That
doesn't leave much breathing room.
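
To make the wraparound concrete, here is a made-up uid run through a signed
16-bit store (an illustration only, not anything from our systems):

    #include <stdio.h>

    int main(void)
    {
        long assigned = 40000;           /* hypothetical uid from the registry */
        short stored = (short)assigned;  /* some systems keep uids in a signed short */

        /* 40000 doesn't fit in 15 value bits, so the signed store wraps:
           40000 - 65536 = -25536 */
        printf("assigned %ld, stored as %d\n", assigned, stored);
        return 0;
    }

Anything from 32768 up gets mangled like that, which is why the signed-short
systems cap you at around 32,000 uids.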

  We are small enough that we could make do with
  a single NFS fileserver to give us central file services...

No way will that work with 30,000 users.  Client caching is absolutely
essential, and it's also nice to have some sane kind of sharing semantics
(unlike what NFS gives you).

Xref: sparky comp.unix.large:380 mi.misc:744
Path: sparky!uunet!know!hri.com!noc.near.net!news.Brown.EDU!qt.cs.utexas.edu!
yale.edu!jvnc.net!darwin.sura.net!zaphod.mps.ohio-state.edu!caen!
destroyer!cs.ubc.ca!newsserver.sfu.ca!sfu.ca!vanepp
From: van...@fraser.sfu.ca (Peter Van Epp)
Newsgroups: comp.unix.large,mi.misc
Subject: Re: LISA VI paper available via anon ftp
Message-ID: <vanepp.721671320@sfu.ca>
Date: 13 Nov 92 16:15:20 GMT
References: <1dmrsdINNcs4@nigel.msen.com> <scs.721401144@hela.iti.org> 
<vanepp.721411740@sfu.ca> <5c4efaf4.1bc5b@pisa.citi.umich.edu> 
<vanepp.721543207@sfu.ca> <5c53cdbd.1bc5b@pisa.citi.umich.edu>
Sender: ne...@sfu.ca
Organization: Simon Fraser University, Burnaby, B.C., Canada
Lines: 74

re...@pisa.citi.umich.edu (Jim Rees) writes:

>In article <vanepp.7...@sfu.ca>, van...@fraser.sfu.ca (Peter Van Epp) writes:

>  If I had thought of it, I knew that :-); I was expecting that you might be 
>  big enough to run out of 65000+ uids.

>It's worse than that.  There are still some Unix systems out there that
>store the uid in a signed short, which means you've only got 32,000 of them
>available.  That's roughly the size of our user community, and I think we
>have about 28,000 assigned now, even without IFS extensively deployed.  That
>doesn't leave much breathing room.

I know, we have some uids above 32k and we see some interesting problems
at times. Several of our part-time operators couldn't do sudo on some
machines before we diddled sudo to use an unsigned int (which I'm surprised
worked, since the system still thinks it's a signed int!); maybe we were
just lucky :-)
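
The diddle amounts to comparing the same 16-bit pattern at an unsigned
type; a rough sketch of the idea (illustrative only, not the actual sudo
source):

    #include <stdio.h>

    int main(void)
    {
        short stored = (short)40000;    /* comes back from the system as -25536 */
        unsigned short wanted = 40000;  /* the uid we are checking against */

        /* The 16-bit pattern (0x9c40) survived the signed store, so a
           same-width unsigned comparison still matches even though the
           signed value looks wrong. */
        if ((unsigned short)stored == wanted)
            printf("uids match despite the signed store\n");
        return 0;
    }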


>  We are small enough that we could make do with
>  a single NFS fileserver to give us central file services...

>No way will that work with 30,000 users.  Client caching is absolutely
>essential, and it's also nice to have some sane kind of sharing semantics
>(unlike what NFS gives you).

While I completely agree that some form of AFS-like system is the final
answer, at the time we did this (and even now for that matter) most of the
people here are not Unix experts (including me!); we had only one person
that was an experienced Unix sysadmin, and all the rest of us are MTS
retreads (not a bad point to be starting out from, I will admit :-) ). At
that time (and probably even now) Transarc didn't support the Silicon
Graphics machines, and we already had some. 
	Since we run an Auspex file server with 6 (of the possible 8) Ethernet
ports and all the NFS service is in the machine room on secure Ethernets, we
in fact manage to support the home directories of some 11,000 active users
(of a total of > 20,000) from that single NFS server. We selected NFS because
all the machines we had support it (and at that point we didn't know of all
the security problems present in both NFS and Unix!). This entire phase of
the conversion was designed with the thought that it was a 2 year solution
(whether that ends up being true remains to be seen :-) ), and hopefully
when (and if!) the time comes to move on, AFS, DFS or IFS will be a more
mainstream solution, or, equally possibly, we will have enough confidence
in our Unix expertise to be able to do the work required to install it for
ourselves.
	The security problems we are seeing and the lack of security in NFS
(at least NFS that will work with all vendors) have caused us to restrict
access to the file server to our machines, in our machine room, where we had
hoped to be able to provide it to the desktop. 
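	As a rough sketch of what that restriction looks like in practice
(hypothetical host names, and the exact /etc/exports syntax varies by
vendor; an illustration rather than our actual configuration):

    # /etc/exports on the file server: home directories go only to
    # the named machine-room hosts, never to the campus backbone
    /home/users   -access=sgi1:sgi2:mailhost
    # the PD software tree is exported read-only to everyone
    /usr/local    -ro
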
	We are currently looking at the Novell NetWare product on a Sun to 
see if we can use that to export NFS-mounted home directories to Macs and
PCs in a semi-secure manner (i.e. the NFS side will be controlled on a
secure Ethernet on the Sun, not exposing the NFS mount point to the
backbone Ethernet).
	In general, all parts of this conversion were probably overspecified
and in most cases selected with the capability to increase capacity by just
adding money if we found that we had underestimated the load. Both the Auspex
and the SGI machines are upgradeable to more capacity, the Auspex by adding 
more disks (which we have done) and more Ethernets (which we haven't so far),
and the SGI machines by just adding more CPU boards and rebooting the
machine. I will note that after I commented in the selection meeting that
the single CPU model was probably all right (after all, we could just give
SGI more money and buy another couple of CPUs when it fell over dead, about
15 minutes after we turned it on), the bosses bought the 2 CPU model right
off... We in fact haven't had to upgrade yet (although with 20/20 hindsight
I expect we might have had to with the single CPU model). Should we have to,
we can plug in another 2 CPU board and boot, and away we will go, no fuss,
no muss; we have done it before on our previous SGI systems.
	I expect that this could have been done more cheaply, but no matter
what happened, we would (and still could) find uses for all the machines
that we have if we had indeed made a major underestimate of our requirements.

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada