Path: nntp.gmd.de!dearn!esoc!linuxed1!peernews.demon.co.uk!
doc.news.pipex.net!
 pipex!news.oleane.net!oleane!jussieu.fr!univ-lyon1.fr!zaphod.crihan.fr!
news.univ-rennes1.fr!irisa.fr!news2.EUnet.fr!EU.net!sun4nl!news.nic.surfnet.nl!
tudelft.nl!et.tudelft.nl!mnijweide
From: mnijwe...@et.tudelft.nl
Newsgroups: comp.os.linux.development.apps
Subject: Linux is 'creating' memory ?!
Message-ID: <1995Feb7.172606.5784@tudedv.et.tudelft.nl>
Date: 7 Feb 95 17:26:06 +0100
Organization: TU-Delft, dpt of Electrical Engineering
Lines: 48

Linux & the memory.

I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM and a 17Mb swap partition.
My compiler is GCC 2.5.8

As I was writing my program, I noticed an oddity (=bug?).
It's probably best explained by a simple program:

#include <stdio.h>
#include <stdlib.h>
int main(void) {
   int i,*p;
   /* 1st stage */
   for(i=0;i<10000;i++) {
      p[i]=malloc(4096);
      if (p[i]==NULL) {
         fprintf(stderr,"Out of memory\n");
         exit(1);
      }
   }
   /* 2nd stage */
   for(i=0;i<10000;i++)
      *(p[i])=1;
}

As you can see, the first stage tries to allocate 40Mb of memory. Since
I don't have that kind of memory, it should of course fail. To my
surprise, it didn't. (!)
Well then, the second stage tries to access the 40Mb. At this point
Linux figures out that that kind of memory isn't there, so it kind of
hangs. Not really; it just becomes incredibly slow. I was able to exit
the program with CTRL-C, but it did take a few minutes to do that.

BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
is faster than calloc, so I prefer malloc.

Am I doing something wrong? Or is it a bug in Linux or GCC?


Marc.


+-------------------------------------------------------------------+
| Marc Nijweide         Delft University of Technology, Netherlands |
| M.Nijwe...@et.TUDelft.nl  http://morra.et.tudelft.nl:80/~nijweide |
+-------------------------------------------------------------------+

 If builders built things the way programmers write programs, the
 first woodpecker that came along would destroy civilisation.

Path: nntp.gmd.de!dearn!esoc!linuxed1!peernews.demon.co.uk!doc.news.pipex.net!
 pipex!news.oleane.net!oleane!jussieu.fr!u-psud.fr!zaphod.crihan.fr!
news.univ-rennes1.fr!irisa.fr!news2.EUnet.fr!EU.net!sun4nl!news.nic.surfnet.nl!
tudelft.nl!et.tudelft.nl!iafilius
From: iafil...@et.tudelft.nl
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Message-ID: <1995Feb7.215928.5786@tudedv.et.tudelft.nl>
Date: 7 Feb 95 21:59:28 +0100
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl>
Organization: TU-Delft, dpt of Electrical Engineering
Lines: 57

In article <1995Feb7.172606.5...@tudedv.et.tudelft.nl>, mnijwe...@et.tudelft.nl 
writes:
> Linux & the memory.
>
> I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
> My compiler is GCC 2.5.8
>
> As I was writing my program, I noticed an oddity (=bug?).
> It's probably best explained by a simple program:
>
> #include <stdlib.h>
> int main(void) {
>    int i,*p;

 Has to be "int i, *p[10000];"

>    /* 1st stage */
>    for(i=0;i<10000;i++) {
>       p[i]=malloc(4096);
>       if (p[i]==NULL) {
>          fprintf(stderr,"Out of memory\n");
>          exit(1);
>       }
>    }
>    /* 2nd stage */
>    for(i=0;i<10000;i++)
>       *(p[i])=1;
> }
>
> As you can see, the first stage tries to allocate 40Mb of memory. Since
> I don't have that kind of memory, it should of course fail. To my
> surprise, it didn't. (!)
> Well then, the second stage tries to access the 40Mb. At this point
> Linux figures out that that kind of memory isn't there, so it kind of
> hangs. Not really; it just becomes incredibly slow. I was able to exit
> the program with CTRL-C, but it did take a few minutes to do that.
>
> BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
> is faster than calloc, so I prefer malloc.
>
> Am I doing something wrong? Or is it a bug in Linux or GCC?
>
>
> Marc.
>

I have the same "problem".
The program top shows the memory you allocated as if it were 'real', but it
does not exist.

Arjan


------------------------------------------
	Arjan Filius
	Email : IAfil...@et.tudelft.nl
------------------------------------------

Path: nntp.gmd.de!dearn!esoc!linuxed1!peernews.demon.co.uk!news.sprintlink.net!
 news.bluesky.net!usenet.eel.ufl.edu!news.mathworks.com!panix!not-for-mail
From: stimp...@panix.com (S. Joel Katz)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 8 Feb 1995 00:00:56 -0500
Organization: PANIX Public Access Internet and Unix, NYC
Lines: 59
Message-ID: <3h9j68$5q5@panix3.panix.com>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl>
NNTP-Posting-Host: panix3.panix.com

In <1995Feb7.172606.5...@tudedv.et.tudelft.nl> mnijwe...@et.tudelft.nl writes:

>Linux & the memory.

>I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
>My compiler is GCC 2.5.8

>As I was writing my program, I noticed an oddity (=bug?).
>It's probably best explained by a simple program:


[program deleted]

>As you can see, the first stage tries to allocate 40Mb of memory. Since
>I don't have that kind of memory, it should of course fail. To my
>surprise, it didn't. (!)
>Well then, the second stage tries to access the 40Mb. At this point
>Linux figures out that that kind of memory isn't there, so it kind of
>hangs. Not really; it just becomes incredibly slow. I was able to exit
>the program with CTRL-C, but it did take a few minutes to do that.

>BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
>is faster than calloc, so I prefer malloc.

>Am I doing something wrong? Or is it a bug in Linux or GCC?

	It is a feature in the Linux C library and GCC and is seldom
appreciated and little used. Allocating or declaring storage does nothing
in Linux except advance the process' break point.

	Linux does not actually allocate a page until a fault occurs,
such as when a read or write to the memory takes place. Then the fault
handler maps a page.

	I use this all the time in programs to save the hassle of dynamic
allocation. If I 'might need' up to 10,000,000 ints for something, I
allocate 10,000,000, safe in the knowledge that the allocation will never
fail. Then I use the array as I need 'em.

	For example, consider the following program:

#include <stdio.h>

int nums[10000000];
int num_count=0;

extern int get_num(void); /* reads the next number; returns -1 at end of input */

 void main(void)
 {
  int j;
  while((j=get_num())!=-1)
   nums[num_count++]=j;
  for(j=0; j<num_count; j++)
   printf("%d->%d\n",j,nums[j]);
 }

	Space allocated for up to 10,000,000 ints and it still won't
waste space if you only use a dozen. Damn convenient; no bug at all.
--

S. Joel Katz           Information on Objectivism, Linux, 8031s, and atheism
Stimp...@Panix.COM     is available at http://www.panix.com/~stimpson/

Path: nntp.gmd.de!newsserver.jvnc.net!nntpserver.pppl.gov!princeton!
gw1.att.com!csn!boulder!bloom-beacon.mit.edu!panix!news.mathworks.com!udel!
gatech!howland.reston.ans.net!math.ohio-state.edu!hobbes.physics.uiowa.edu!
newsrelay.iastate.edu!news.iastate.edu!dopey.me.iastate.edu!brekke
From: bre...@dopey.me.iastate.edu (Monty H. Brekke)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 8 Feb 1995 20:16:11 GMT
Organization: Iowa State University, Ames IA
Lines: 60
Message-ID: <3hb8qb$669@news.iastate.edu>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3h9j68$5q5@panix3.panix.com>
NNTP-Posting-Host: dopey.me.iastate.edu

In article <3h9j68$...@panix3.panix.com>,
S. Joel Katz <stimp...@panix.com> wrote:
>
>	It is a feature in the Linux C library and GCC and is seldom 
>appreciated and little used. Allocating or declaring storage does nothing 
>in Linux except advance the process' break point.
>
>	Linux does not actually allocate a page until a fault occurs, 
>such as when a read or write to the memory takes place. Then the fault 
>handler maps a page.
>
>	I use this all the time in programs to save the hassle of dynamic 
>allocation. If I 'might need' up to 10,000,000 ints for something, I 
>allocate 10,000,000, safe in the knowledge that the allocation will never 
>fail. Then I use the array as I need 'em.
>
>	For example, consider the following program
>
>int nums[10000000];
>int num_count=0;
>
> void main(void)
> {
>  int j;
>  while((j=get_num())!=-1)
>   nums[num_count++]=j;
>  for(j=0; j<num_count; j++)
>   printf("%d->%d\n",j,nums[j]);
> }
>
>	Space allocated for up to 10,000,000 ints and it still won't 
>waste space if you only use a dozen. Damn convenient; no bug at all.
>-- 

   I've noticed this feature on other operating systems also. The thing
that bothers me is that if I request more memory than I have available
(physical + swap), my program has no way (as far as I can tell) of
knowing when/if an out-of-memory condition occurs. Say, for example,
that I have allocated space for 25,000,000 integers, at 4 bytes each.
That's 100,000,000 bytes of memory. I've got 16MB physical and 32MB of
swap. Clearly, then, the following loop will fail at some point.

	for (i = 0; i < 25000000; ++i)
	   huge_array[i] = 0;

   How does my program know that this loop generated a memory fault?
Can I catch some signal? At any rate, it seems like it would be simpler
to be able to count on malloc()'s return value being correct. I can
understand the advantage of the current implementation when the amount
of memory requested is less than the total available, but I fail to
see why malloc() doesn't return a failure when I try to request more
memory than I can possibly allocate. Anyone?
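
   Something like the sketch below is what I'd like to be able to write.
I'm assuming the kernel announces the failed touch with SIGSEGV -- I don't
actually know which signal (if any) it delivers before killing the
process, so treat the whole thing as a guess rather than a recipe:

#include <stdio.h>
#include <stdlib.h>
#include <setjmp.h>
#include <signal.h>

static sigjmp_buf oom_jump;

static void on_segv(int sig)
{
   (void)sig;
   siglongjmp(oom_jump, 1);   /* escape from the faulting write */
}

int main(void)
{
   size_t n = 25000000;
   int *huge_array = malloc(n * sizeof *huge_array);
   size_t i;

   if (huge_array == NULL)
      return 1;               /* malloc itself said no */

   signal(SIGSEGV, on_segv);
   if (sigsetjmp(oom_jump, 1)) {
      fprintf(stderr, "ran out of real memory while touching pages\n");
      return 1;
   }
   for (i = 0; i < n; ++i)
      huge_array[i] = 0;      /* each first touch must map a real page */
   puts("all pages really exist");
   return 0;
}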



-- 
===============================================================================
mhbre...@iastate.edu		| "You don't have to thank me. I'm just trying
bre...@dopey.me.iastate.edu	| to avoid getting a real job."
				|				--Dave Barry

Path: swrinde!howland.reston.ans.net!news.sprintlink.net!news.bluesky.net!
solaris.cc.vt.edu!news.mathworks.com!panix!not-for-mail
From: stimp...@panix.com (S. Joel Katz)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 9 Feb 1995 08:28:43 -0500
Organization: PANIX Public Access Internet and Unix, NYC
Lines: 54
Message-ID: <3hd5ab$90j@panix3.panix.com>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3h9j68$5q5@panix3.panix.com> <3hb8qb$669@news.iastate.edu>
NNTP-Posting-Host: panix3.panix.com

In <3hb8qb$...@news.iastate.edu> bre...@dopey.me.iastate.edu (Monty H. Brekke) 
writes:

>   I've noticed this feature on other operating systems also. The thing
>that bothers me is that if I request more memory than I have available
>(physical + swap), my program has no way (as far as I can tell) of
>knowing when/if an out-of-memory condition occurs. Say, for example,
>that I have allocated space for 25,000,000 integers, at 4 bytes each.
>That's 100,000,000 bytes of memory. I've got 16MB physical and 32MB of
>swap. Clearly, then, the following loop will fail at some point.

>	for (i = 0; i < 25000000; ++i)
>	   huge_array[i] = 0;

>   How does my program know that this loop generated a memory fault?
>Can I catch some signal? At any rate, it seems like it would be simpler
>to be able to count on malloc()'s return value being correct. I can
>understand the advantage of the current implementation when the amount
>of memory requested is less than the total available, but I fail to
>see why malloc() doesn't return a failure when I try to request more
>memory than I can possibly allocate. Anyone?

	The problem with malloc failing is it would break the program I 
showed above. Programs often malloc huge arrays (larger than they will 
ever need) and count on them working. If the program later really 
requires more RAM than it allocated, of course, it will fail.

	As a simple example, a 'disassociated press' program I wrote 
allocates space for 10,000,000 word nodes at about 16 bytes apiece. This 
program would fail on any system with less than 160M of virtual memory if 
all of the memory was really allocated immediately.

	If you want, you can write a '1' every 4K to force the memory to 
be instantiated, but this is a horrible waste. Many programs allocate 
memory they never use or do not use until much later in their execution. 
Linux is very smart about this.

	If you really care, you can always read /proc/meminfo and see how 
much memory is available.
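
	If you do want the guarantee up front, the forcing loop is trivial 
-- something like this sketch (the wrapper name is invented, and I am 
assuming 4K pages; if the memory really isn't there, the touching loop is 
where you will blow up):

#include <stdlib.h>

#define PAGE 4096

void *malloc_touched(size_t size)
{
   char *p = malloc(size);
   size_t off;

   if (p != NULL)
      for (off = 0; off < size; off += PAGE)
         p[off] = 1;          /* the first write instantiates the page */
   return p;
}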

	I am quite happy with the present Linux implementation and find 
taking advantage of it a win-win situation over dynamic allocation (which 
has execution penalties) or truly allocating the maximum needed (which 
has space penalties).

	Though, a signal that a program could request that would be sent 
to it if memory started to get 'low' might be nice. Though, if you really 
need the RAM (which you presumably do, since you wrote to it), what can you 
do? Paging to disk is silly; that is what swap is for.


-- 

S. Joel Katz           Information on Objectivism, Linux, 8031s, and atheism
Stimp...@Panix.COM     is available at http://www.panix.com/~stimpson/

Newsgroups: comp.os.linux.development.apps
Path: nntp.gmd.de!dearn!
 esoc!linuxed1!peernews.demon.co.uk!news.sprintlink.net!
howland.reston.ans.net!EU.net!ub4b!imec.be!buytaert
From: buyta...@imec.be (Steven Buytaert)
Subject: Re: Linux is 'creating' memory ?!
Message-ID: <1995Feb10.093116.20768@imec.be>
Sender: n...@imec.be (USENET News System)
Nntp-Posting-Host: galaxa
Organization: IMEC, Interuniversitair Micro Electronica Centrum, Belgium
X-Newsreader: TIN [version 1.2 PL0]
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl>
Date: Fri, 10 Feb 1995 09:31:16 GMT
Lines: 37

mnijwe...@et.tudelft.nl wrote:

: As I was writing my program, I noticed an oddity (=bug?).
: It's probably best explained by a simple program:

:    for(i=0;i<10000;i++) {
:       p[i]=malloc(4096);
:       if (p[i]==NULL) {
:          fprintf(stderr,"Out of memory\n");
:          exit(1);
:       }
:    }
:    for(i=0;i<10000;i++)
:       *(p[i])=1;

: As you can see, the first stage tries to allocate 40Mb of memory. Since
: I don't have that kind of memory, it should of course fail. To my
: surprise, it didn't. (!)
: Well then, the second stage tries to access the 40Mb. [...]

  The physical memory pages are not allocated until there is a reference
  to the pages. Check out /usr/src/linux/mm/*.c for more precise information.
  (When sbrk() is called, during a malloc, a vm_area structure is enlarged
  or created; it's not until a page fault that a page is really taken to
  use it)

  It's not a bug. IMHO, a program should allocate and use the storage as
  it goes, not in chunks of 40 megabytes...
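
  A quick way to see it (a sketch; it assumes your libc lets you call
  sbrk() directly):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
   /* advance the break by 40Mb; no physical page is taken yet */
   void *old = sbrk(40L * 1024 * 1024);

   if (old == (void *)-1)
      perror("sbrk");
   else
      printf("break advanced by 40Mb at %p; nothing is mapped until\n"
             "these addresses are actually written to\n", old);
   return 0;
}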

--
Steven Buytaert

WORK buyta...@imec.be
HOME buyta...@clever.be

	'Imagination is more important than knowledge.'
			(A. Einstein)

Path: nntp.gmd.de!newsserver.jvnc.net!news.cac.psu.edu!news.pop.psu.edu!
hudson.lm.com!netline-fddi.jpl.nasa.gov!nntp.et.byu.edu!news.mtholyoke.edu!
uhog.mit.edu!bloom-beacon.mit.edu!eru.mt.luth.se!news.luth.se!sunic!
news.funet.fi!news.csc.fi!news.helsinki.fi!not-for-mail
From: wirze...@cc.Helsinki.FI (Lars Wirzenius)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 12 Feb 1995 17:35:19 +0200
Organization: University of Helsinki
Lines: 64
Message-ID: <3hl9rn$t40@klaava.Helsinki.FI>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3h9j68$5q5@panix3.panix.com> <3hb8qb$669@news.iastate.edu> 
<3hd5ab$90j@panix3.panix.com>
NNTP-Posting-Host: klaava.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

stimp...@panix.com (S. Joel Katz) writes:
> The problem with malloc failing is it would break the program I showed above.

That would be a good thing.  Seriously.  If a program can't rely on the
memory it has allocated to actually be usable, it can't handle low memory
situations intelligently.  Instant Microsoftware.  Instant trashing systems.
Instant "Linux is unreliable, let's buy SCO".  Instant end of the univ...,
er, forget that one, but it's not a good idea anyway.

There's more to writing good software than getting it through the
compiler.  Error handling is one of those things, and Linux makes it impossible
to handle low memory conditions properly.  Score -1 big design misfeature
for Linus.

> Programs often malloc huge arrays (larger than they will 
> ever need) and count on them working.

I've never seen such a program, but they're buggy.  Any program using
malloc and not checking its return value is buggy.  Since malloc almost
always lies under Linux, all programs using malloc under Linux are
buggy.

This `lazy allocation' feature of Linux, and Linus's boneheadedness
about it, is about the only reason why I'm still not sure he isn't a
creature from outer space (oops, I'm going to be hit by a Koosh ball
the next time Linus comes to work :-).  The lazy allocation is done, as
far as I can remember from earlier discussions, to avoid a fork+exec
from requiring, even temporarily, twice the amount of virtual memory,
which would be expensive for, say, Emacs.  For this gain we sacrifice
reliability; not a very good sacrifice, in my opinion.  I also don't buy the
argument that it's important to make it easy to write sparse arrays.
(Such arrays are not all that common, and it's easy enough to implement
them in traditional systems.)

What would be needed, in my opinion, is at least a kernel compilation
or bootup option that allows the sysadmin to specify the desired behaviour,
perhaps even having a special system call so that each process can
decide for itself.  (That shouldn't even be all that difficult to write
for someone who rewrites the memory management in one day during a
so-called code freeze.)

> 	As a simple example, a 'disassociated press' program I worte 
> allocates space for 10,000,000 word nodes at about 16 bytes apiece. This 
> program would fail on any system with less than 160M of virtual memory if 
> all of the memory was really allocated immediately.

Guess what it does on any system with reliable virtual memory.  Guess
what it does when you use more word nodes than there is memory for on
your Linux box.

> If you really care, you can always read /proc/meminfo and see how 
> much memory is available.

No, you can't.  1) The OS might not allow you to use all that memory, and
duplicating memory allocation in every application so that it can check
it properly is rather stupid.  2) During the time between the check and
the allocation, the situation might change radically; e.g., some other
application might have allocated memory.  3) The free memory might be
a lie, e.g., the OS might automatically allocate more swap if there is
some free disk space.

-- 
Lars.Wirzen...@helsinki.fi  (finger wirze...@klaava.helsinki.fi)
Publib version 0.4: ftp://ftp.cs.helsinki.fi/pub/Software/Local/Publib/

Newsgroups: comp.os.linux.development.apps
Path: nntp.gmd.de!newsserver.jvnc.net!nntpserver.pppl.gov!princeton!
rutgers!uwvax!uchinews!quads!goer
From: g...@quads.uchicago.edu (Richard L. Goerwitz)
Subject: Re: Linux is 'creating' memory ?!
X-Nntp-Posting-Host: midway.uchicago.edu
Message-ID: <D3z5tv.GnH@midway.uchicago.edu>
Sender: n...@midway.uchicago.edu (News Administrator)
Reply-To: g...@midway.uchicago.edu
Organization: The University of Chicago
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3hb8qb$669@news.iastate.edu> <3hd5ab$90j@panix3.panix.com> 
<3hl9rn$t40@klaava.Helsinki.FI>
Date: Tue, 14 Feb 1995 05:27:31 GMT
Lines: 13

In article <3hl9rn$...@klaava.Helsinki.FI>, Lars Wirzenius 
<wirze...@cc.Helsinki.FI> wrote:
>
>This `lazy allocation' feature of Linux, and Linus's boneheadedness
>about it, is about the only reason why I'm still not sure he isn't a
>creature from outer space (oops, I'm going to be hit by a Koosh ball
>the next time Linus comes to work :-).

Geez, I'd hit you with more than that if you were my co-worker.
Boneheadedness?

-- 

   Richard L. Goerwitz     ***      g...@midway.uchicago.edu

Path: nntp.gmd.de!news.rwth-aachen.de!tornado.oche.de!RNI!artcom0!pf
From: p...@artcom0.north.de (Peter Funk)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Message-ID: <5715@artcom0.north.de>
Date: 14 Feb 95 06:59:19 GMT
Article-I.D.: artcom0.5715
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3h9j68$5q5@panix3.panix.com> <3hb8qb$669@news.iastate.edu> 
<3hd5ab$90j@panix3.panix.com> <3hl9rn$t40@klaava.Helsinki.FI>
Organization: home workstation, but owned by ArtCom GmbH, Bremen, FRG
Lines: 13

In <3hl9rn$...@klaava.Helsinki.FI> wirze...@cc.Helsinki.FI (Lars Wirzenius) writes:
[...] The lazy allocation is done, as
> far as I can remember from earlier discussions, to avoid a fork+exec
> from requiring, even temporarily, twice the amount of virtual memory,
> which would be expensive for, say, Emacs.  For this gain we sacrifice
> reliability; not a very good sacrifice, in my opinion.  

Wouldn't a 'vfork' solve this problem?  What's wrong with 'vfork'?

Regards, Peter
-=-=-
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany
office: +49 421 2041921 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)

Path: nntp.gmd.de!news.rwth-aachen.de!fred.basl.rwth-aachen.de!ralf
From: r...@fred.basl.rwth-aachen.de (Ralf Schwedler)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 16 Feb 1995 09:30:17 GMT
Organization: Institute for Semiconductor Technology, RWTH Aachen, Germany
Lines: 57
Distribution: world
Message-ID: <3hv5v9$46e@news.rwth-aachen.de>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<1995Feb10.093116.20768@imec.be>
Reply-To: r...@fred.basl.rwth-aachen.de (Ralf Schwedler)
NNTP-Posting-Host: fred.basl.rwth-aachen.de
X-Newsreader: mxrn 6.18-9


In article <1995Feb10.093116.20...@imec.be>, buyta...@imec.be 
(Steven Buytaert) writes:
mnijwe...@et.tudelft.nl wrote:

: As I was writing my program, I noticed an oddity (=bug?). 
: It's probably best explained by a simple program:

:    for(i=0;i<10000;i++) {
:       p[i]=malloc(4096);
:       if (p[i]==NULL) {
:          fprintf(stderr,"Out of memory\n");
:          exit(1);
:       }
:    }
:    for(i=0;i<10000;i++)
:       *(p[i])=1;

: As you can see, the first stage tries to allocate 40Mb of memory. Since 
: I don't have that kind of memory, it should of course fail. To my 
: surprise, it didn't. (!)
: Well then, the second stage tries to access the 40Mb. [...]

I have read just about all of this thread. I think I understand the (mainly
efficiency-oriented) arguments which support this behaviour. It's
probably not useful to discuss changing this behaviour, as some software
may rely on it.

Anyhow, from the point of view of an application programmer, I consider
the way malloc is implemented absolutely dangerous. I want to be able to
handle error conditions as close as possible to the point of their
origin. The definition of malloc is 'allocate memory', not
'intend to allocate memory'. I want to decide myself how to handle
memory overflow conditions; from that point of view I cannot accept
any program abort not controlled by my application. All hints given
so far (e.g. using some technique to find the amount of free memory)
are useless (if I understood it well, even calloc will abort in situations
where the memory is not available; please stop reading here if this is not
the case). Such methods would rely on friendly behaviour of the other apps
running, which is not acceptable in a multitasking environment.

My question:

	Is there a version of malloc available for Linux which guarantees
	allocation of memory, or returns NULL (this is the functionality
	which I consider safest for programming)?  Maybe -libnmalloc?
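
	(The closest I can imagine is a wrapper like the sketch below --
	the name is invented. Note that under the lazy scheme a shortfall
	still arrives as a fault during the memset rather than as a NULL,
	so this narrows the window but cannot close it.)

#include <stdlib.h>
#include <string.h>

void *hard_malloc(size_t size)
{
   void *p = malloc(size);

   if (p != NULL)
      memset(p, 0, size);     /* force every page into existence now */
   return p;
}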

Thanks,

	Ralf

-- 
#####################################################################
Dipl.-Phys. Ralf Schwedler			Tel. +49-241-80-7908
Institut fuer Halbleitertechnik II		Fax. +49-241-8888-246
Sommerfeldstrasse 24				r...@fred.basl.rwth-aachen.de
D-52074 Aachen

Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
news.mathworks.com!panix!bloom-beacon.mit.edu!spool.mu.edu!
howland.reston.ans.net!pipex!sunic!news.funet.fi!news.csc.fi!
news.helsinki.fi!not-for-mail
From: wirze...@cc.Helsinki.FI (Lars Wirzenius)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 19 Feb 1995 18:33:16 +0200
Organization: University of Helsinki
Lines: 10
Sender: wirze...@cc.helsinki.fi
Message-ID: <3i7rsc$enq@kruuna.helsinki.fi>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3hd5ab$90j@panix3.panix.com> <3hl9rn$t40@klaava.Helsinki.FI> 
<D3z5tv.GnH@midway.uchicago.edu>
NNTP-Posting-Host: kruuna.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

g...@midway.uchicago.edu writes:
> Geez, I'd hit you with more than that if you were my co-worker.
> Boneheadedness?

As it happens, Linus seems to have missed my article altogether.  I haven't
been hit by anything yet. :-)

-- 
Lars.Wirzen...@helsinki.fi  (finger wirze...@klaava.helsinki.fi)
Publib version 0.4: ftp://ftp.cs.helsinki.fi/pub/Software/Local/Publib/

Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
news.mathworks.com!panix!bloom-beacon.mit.edu!spool.mu.edu!
howland.reston.ans.net!pipex!sunic!news.funet.fi!news.csc.fi!
news.helsinki.fi!not-for-mail
From: wirze...@cc.Helsinki.FI (Lars Wirzenius)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 19 Feb 1995 18:37:37 +0200
Organization: University of Helsinki
Lines: 11
Sender: wirze...@cc.helsinki.fi
Message-ID: <3i7s4h$eul@kruuna.helsinki.fi>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3hd5ab$90j@panix3.panix.com> <3hl9rn$t40@klaava.Helsinki.FI> 
<5715@artcom0.north.de>
NNTP-Posting-Host: kruuna.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

p...@artcom0.north.de (Peter Funk) writes:
> Wouldn't a 'vfork' solve this problem ?  What's wrong with 'vfork' ?

The problem with vfork is that it doesn't solve the problem for
programs that don't use it; many programs don't.  Its semantics are
also stupid (although necessary).  The same speed can be achieved with
copy-on-write and other memory management trickery.

-- 
Lars.Wirzen...@helsinki.fi  (finger wirze...@klaava.helsinki.fi)
Publib version 0.4: ftp://ftp.cs.helsinki.fi/pub/Software/Local/Publib/

Newsgroups: comp.os.linux.development.apps
Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
fu-berlin.de!news.dfn.de!swiss.ans.net!howland.reston.ans.net!
news.sprintlink.net!pipex!uknet!info!iialan
From: iia...@iifeak.swan.ac.uk (Alan Cox)
Subject: Re: Linux is 'creating' memory ?!
X-Nntp-Posting-Host: iifeak.swan.ac.uk
Message-ID: <D4D59G.AwE@info.swan.ac.uk>
Sender: n...@info.swan.ac.uk
Organization: Institute For Industrial Information Technology
References: <3hb8qb$669@news.iastate.edu> <3hd5ab$90j@panix3.panix.com> 
<3hl9rn$t40@klaava.Helsinki.FI>
Date: Tue, 21 Feb 1995 18:41:40 GMT
Lines: 23

In article <3hl9rn$...@klaava.Helsinki.FI> wirze...@cc.Helsinki.FI 
(Lars Wirzenius) writes:
>situations intelligently.  Instant Microsoftware.  Instant trashing systems.
>Instant "Linux is unreliable, let's buy SCO".  Instant end of the univ...,
>er, forget that one, but it's not a good idea anyway.

Tried SCO with any resource limits on the problem?

>There's more to writing good software than getting it through the
>compiler.  Error handling is one of them, and Linux makes it impossible
>to handle low memory conditions properly.  Score -1 big design misfeature
>for Linus.

Scientists like it that way; other people should read the limit/rusage
man pages.

Alan


-- 
  ..-----------,,----------------------------,,----------------------------,,
 // Alan Cox  //  iia...@www.linux.org.uk   //  GW4PTS@GB7SWN.#45.GBR.EU  //
 ``----------'`--[Anti Kibozing Signature]-'`----------------------------''
One two three: Kibo, Lawyer, Refugee :: Green card, Compaq come read me...

Newsgroups: comp.os.linux.development.apps
Path: nntp.gmd.de!newsserver.jvnc.net!news.cac.psu.edu!news.pop.psu.edu!
hudson.lm.com!godot.cc.duq.edu!ddsw1!panix!news.mathworks.com!
news.alpha.net!uwm.edu!uwvax!astroatc!nicmad!madnix!galyean
From: galy...@madnix.uucp (Marty Galyean)
Subject: Re: Linux is 'creating' memory ?!
X-Newsreader: TIN [version 1.2 PL2]
Organization: ARP Software
Message-ID: <1995Feb21.174848.27897@madnix.uucp>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3hd5ab$90j@panix3.panix.com> <3hl9rn$t40@klaava.Helsinki.FI> 
<5715@artcom0.north.de> <3i7s4h$eul@kruuna.helsinki.fi>
Date: Tue, 21 Feb 1995 17:48:48 GMT
Lines: 52

Lars Wirzenius (wirze...@cc.Helsinki.FI) wrote:
: p...@artcom0.north.de (Peter Funk) writes:
: > Wouldn't a 'vfork' solve this problem?  What's wrong with 'vfork'?

: The problem with vfork is that it doesn't solve the problem for
: programs that don't use it; many programs don't.  Its semantics are
: also stupid (although necessary).  The same speed can be achieved with
: copy-on-write and other memory management trickery.

: -- 
: Lars.Wirzen...@helsinki.fi  (finger wirze...@klaava.helsinki.fi)
: Publib version 0.4: ftp://ftp.cs.helsinki.fi/pub/Software/Local/Publib/

After reading this thread it seems there are two views at work...
the first says that a program should either get the memory it wants
guaranteed, or be told it can't...while the other view is that the
previous view is too inefficient and that a program should rely on
swapping on demand to handle faults and just not worry about
a real situation of no memory, swap or otherwise, available.

Neither of these seems very satisfying, for all the reasons discussed
previously in the thread.

However, I kind of like the way Linux works.  Here's why... People are fond
of presenting the fact that in a multitasking environment memory that was
available a moment before may not be there a moment later. But guys, the
opposite is also true...memory that did not appear available a moment before
might be *freed* a moment later, and thus be available...OSes are becoming
sophisticated enough that you just can't plan everything out
deterministically...your program has to go with the flow and adjust.

I also agree (with a previous post) that signals to indicate system load,
swap frequency, etc. would be nice...and integral to any program that does
'go with the flow'...
It would be nice if your program could just take a look around, see that
its just too hard to get anything useful done, and stop with appropriate
messages...perhaps with the option of resuming where it left off later
automatically.  This could be done just by looking at the system time
once in a while to measure lag...doesn't really need os support...
this would be gambling, of course.

I don't like the idea that if my program didn't look quick enough or
guessed wrong it could fail ungracefully when swap space ran out.  It does
not seem right...new signals could make this a little easier, but the
unavoidable fact is that you can never guarantee you have access to 'your'
memory ...kind of like reservations on the airlines...I can't see
either of these as ever being 'easy-to-error-handle' situations ;-)
Things like this keep things interesting though.

Marty
galy...@madnix.uucp

Path: nntp.gmd.de!Germany.EU.net!EU.net!news.sprintlink.net!
howland.reston.ans.net!news.cac.psu.edu!news.pop.psu.edu!hudson.lm.com!
godot.cc.duq.edu!newsfeed.pitt.edu!ddj
From: d...@pitt.edu (Doug DeJulio)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 22 Feb 1995 22:05:10 GMT
Organization: University of Pittsburgh
Lines: 34
Message-ID: <3igcem$mop@usenet.srv.cis.pitt.edu>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<5715@artcom0.north.de> <3i7s4h$eul@kruuna.helsinki.fi> 
<1995Feb21.174848.27897@madnix.uucp>
NNTP-Posting-Host: stingray.labs.cis.pitt.edu

In article <1995Feb21.174848.27...@madnix.uucp>,
Marty Galyean <galy...@madnix.uucp> wrote:
>After reading this thread it seems there are two views at work...
>the first says that a program should either get the memory it wants
>guaranteed, or be told it can't...while the other view is that the
>previous view is too inefficient and that a program should rely on
>swapping on demand to handle fault and just not worry about
>a real situation of no memory, swap or otherwise, available.

Either behavior should be available.  Both functionalities should be
present.

Any function defined by POSIX should conform exactly to the behavior
POSIX specifies.  This is very important.  We can't claim Linux is a
POSIX OS if it openly violates standards on purpose.

If the standard does not specify the exact way "malloc()" is supposed
to perform, then no POSIX-compliant C program can depend on either
behavior.  You've got to write all your programs assuming either
behavior could occur, or they're not portable.

Any functionality not offered within the POSIX standard should be done
via extensions of some sort.

If you disagree with any of these assertions besides the first one
(that both behaviors should be present), you're basically saying that
it's not important that Linux attempt to conform to the POSIX
standard.

So, what *does* the POSIX standard say about the behavior of malloc()?
-- 
Doug DeJulio                    | R$+@$=W  <-- sendmail.cf file
mailto:d...@pitt.edu            | {$/{{.+  <-- modem noise
http://www.pitt.edu/~ddj/       | !@#!@@!  <-- Mr. Dithers swearing

Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
uni-duisburg.de!RRZ.Uni-Koeln.DE!news.dfn.de!Germany.EU.net!EU.net!
news.sprintlink.net!howland.reston.ans.net!math.ohio-state.edu!jussieu.fr!
news.univ-angers.fr!news.univ-rennes1.fr!zaphod.crihan.fr!u-psud.fr!
linotte.republique.fr!not-for-mail
From: bousch%linotte.u...@topo.math.u-psud.fr (Thierry Bousch)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 24 Feb 1995 14:01:11 +0100
Organization: Boulevard du Crime
Lines: 20
Message-ID: <3iklan$278@linotte.republique.fr>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> <5715@artcom0.north.de> 
<3i7s4h$eul@kruuna.helsinki.fi> <1995Feb21.174848.27897@madnix.uucp> 
<3igcem$mop@usenet.srv.cis.pitt.edu>
NNTP-Posting-Host: topo.matups.fr
X-Newsreader: TIN [version 1.2 PL2]

Doug DeJulio (d...@pitt.edu) wrote:

: So, what *does* the POSIX standard say about the behavior of malloc()?

Nothing. The malloc() function doesn't belong to the POSIX standard.
(It conforms to ANSI C).

The problem, unfortunately, is not only with malloc(). On most Unix systems,
the stack is automatically expanded when needed; therefore, any procedure
call is an implicit memory allocation; if it fails, how are you going to
report the error to the user? There is no way to handle this kind of
error gracefully; you have to suspend or to kill the process.
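
(To make that concrete -- a sketch, nothing more: every call here
allocates a page of stack implicitly, and there is no return value to
check when that implicit allocation finally fails.)

#include <stdio.h>

static void recurse(int depth)
{
   volatile char frame[4096];  /* roughly one page of stack per call */

   frame[0] = 1;
   if (depth % 1000 == 0)
      fprintf(stderr, "depth %d\n", depth);
   recurse(depth + 1);         /* eventually faults; nothing to test */
}

int main(void)
{
   recurse(0);
   return 0;                   /* never reached */
}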

Note also that if you really run out of virtual memory, the system is
probably already paging like hell, and you won't be able to do anything
useful on it; it's not very different from a frozen system, and you'll
probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
won't respond (in a reasonable time, that is).

Thierry.

Path: nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!RRZ.Uni-Koeln.DE!
uni-duisburg.de!zib-berlin.de!news.mathworks.com!udel!gatech!newsfeed.pitt.edu!
uunet!psinntp!gatekeeper.nsc.com!voder!apple.com!NewsWatcher!user
From: br...@newton.apple.com (Bruce Thompson)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: Sat, 25 Feb 1995 09:33:31 -0800
Organization: Apple Computer Inc.
Lines: 46
Message-ID: <bruce-2502950933310001@17.205.4.52>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3h9j68$5q5@panix3.panix.com> <3hb8qb$669@news.iastate.edu> 
<3hd5ab$90j@panix3.panix.com> <3hl9rn$t40@klaava.Helsinki.FI> 
<5715@artcom0.north.de>
NNTP-Posting-Host: 17.205.4.52

In article <5...@artcom0.north.de>, p...@artcom0.north.de (Peter Funk) wrote:

> In <3hl9rn$...@klaava.Helsinki.FI> wirze...@cc.Helsinki.FI (Lars
Wirzenius) writes:
> [...] The lazy allocation is done, as
> > far as I can remember from earlier discussions, to avoid a fork+exec
> > from requiring, even temporarily, twice the amount of virtual memory,
> > which would be expensive for, say, Emacs.  For this gain we sacrifice
> > reliability; not a very good sacrifice, in my opinion.  
> 
> Wouldn't a 'vfork' solve this problem?  What's wrong with 'vfork'?
> 
> Regards, Peter
> -=-=-
> Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany
> office: +49 421 2041921 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)

It would, but in private discussions, someone (sorry, I can't remember
who) pointed out that vfork was developed originally to get around bugs in
the Copy-on-write implementation on VAXes. The Linux kernel apparently
already does copy-on-write on forks, so the difference between fork and
vfork is now irrelevant.

Either way, I can't see that there's a _valid_ reason for keeping the
behavior. I hate to beat a dead horse, but I have to. The job of the
kernel is to manage the resources of the machine. By allowing processes to
think they've received more memory than they actually have, the kernel is
abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure
I've mentioned it before, but it seems to me that a swap page could be
allocated (not written, just allocated) when pages are allocated to a
process. This would allow the kind of performance in the face of large
allocations that people may have come to expect. It would still ensure
that when the kernel told a process "here's a page" there actually _was_ a
page for that process. This last item is the whole point. Again, IMNSHO,
the kernel should never _EVER_ allocate resources it doesn't have.
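
As a sketch of what I mean (all names invented here, and the kernel-side
details waved away): keep one global count of committed pages and refuse
the allocation up front when the reservation cannot be honored.

/* reserve swap at allocation time, not at first touch */
static unsigned long pages_committed;   /* pages promised so far */
static unsigned long pages_total;       /* physical RAM pages + swap pages */

int reserve_pages(unsigned long npages)
{
   if (pages_committed + npages > pages_total)
      return -1;                        /* fail the brk()/mmap() right away */
   pages_committed += npages;           /* a swap page is now spoken for */
   return 0;
}

void release_pages(unsigned long npages)
{
   pages_committed -= npages;
}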

   Cheers,
   Bruce.

-- 
--------------------------------------------------------------------
Bruce Thompson                  | "Never put off till tomorrow what
PIE Developer Information Group |  you can comfortably put off till
Apple Computer Inc.             |  next week."
                                |    -- Unknown
Usual Disclaimers Apply         |

Path: nntp.gmd.de!newsserver.jvnc.net!news.cac.psu.edu!news.pop.psu.edu!
hudson.lm.com!godot.cc.duq.edu!news.duke.edu!news.mathworks.com!uunet!
news.graphics.cornell.edu!ghost.dsi.unimi.it!univ-lyon1.fr!swidir.switch.ch!
scsing.switch.ch!cmir.arnes.si!news.fer.uni-lj.si!ana.fer.uni-lj.si!langod
From: lan...@ana.fer.uni-lj.si (Damjan Lango)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 27 Feb 1995 20:20:55 GMT
Organization: Faculty of Electrical and Computer Engeneering, Ljubljana, Slovenia
Lines: 139
Message-ID: <3itc77$9lj@ninurta.fer.uni-lj.si>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3h9j68$5q5@panix3.panix.com> <3hb8qb$669@news.iastate.edu> 
<3hd5ab$90j@panix3.panix.com> <3hl9rn$t40@klaava.Helsinki.FI> 
<5715@artcom0.north.de> <bruce-2502950933310001@17.205.4.52>
NNTP-Posting-Host: ana.fer.uni-lj.si
X-Newsreader: TIN [version 1.2 PL2]

Bruce Thompson (br...@newton.apple.com) wrote:
: In article <5...@artcom0.north.de>, p...@artcom0.north.de (Peter Funk) wrote:

: > In <3hl9rn$...@klaava.Helsinki.FI> wirze...@cc.Helsinki.FI (Lars
: Wirzenius) writes:
: > [...] The lazy allocation is done, as
: > > far as I can remember from earlier discussions, to avoid a fork+exec
: > > from requiring, even temporarily, twice the amount of virtual memory,
: > > which would be expensive for, say, Emacs.  For this gain we sacrifice
: > > reliability; not a very good sacrifice, in my opinion.  
: > 
: > Wouldn't a 'vfork' solve this problem?  What's wrong with 'vfork'?

: It would, but in private discussions, someone (sorry, I can't remember
: who) pointed out that vfork was developed originally to get around bugs in
: the Copy-on-write implementation on VAXes. The Linux kernel apparently
: already does copy-on-write on forks, so the difference between fork and
: vfork is now irrelevant.

: Either way, I can't see that there's a _valid_ reason for keeping the
: behavior. I hate to beat a dead horse, but I have to. The job of the
: kernel is to manage the resources of the machine. By allowing processes to
: think they've received more memory than they actually have, the kernel is
: abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure
: I've mentioned it before, but it seems to me that a swap page could be
: allocated (not written, just allocated) when pages are allocated to a
: process. This would allow the kind of performance in the face of large
: allocations that people may have come to expect. It would still ensure
: that when the kernel told a process "here's a page" there actually _was_ a
: page for that process. This last item is the whole point. Again, IMNSHO,
: the kernel should never _EVER_ allocate resources it doesn't have.

:    Cheers,
:    Bruce.

Absolutely agree!
And I can't understand how this malloc bug survived so far, up to 1.1.x.
It *must* be fixed before 1.2!!!
Even all those shitty OSes like DOS Windows and NT do this the right way...
(well, OK, DOS doesn't have virtual memory, but NT does)
I would really like to see this fixed NOW, or people will start saying,
hey, this Linux sux, it can't even do memory allocation right!

Maybe I should give an example of how it is done under NT, if you want
this kind of behavior from malloc but controlled, of course!
malloc is still malloc, but there is an additional VirtualAlloc.
I am not trying to say that there should be exactly a VirtualAlloc, but
the current malloc should at least be renamed to something like
hazard_malloc_with_hope and a new bug-free malloc written!

Well, here is an example of NT VirtualAlloc for a very large bitmap
that has only a few pixels set:

BTW, shouldn't we move this to comp.os.linux.development.system?

---8<---

#include	<windows.h>
#include	<assert.h>
#define 	PAGESIZE	4096
#define		PAGELIMIT	100

class Bitmap{
private:
	BYTE	*lpBits;
	BYTE 	*pages[PAGELIMIT];
	WORD 	width,height;
	WORD 	page;
public:
	Bitmap(WORD width,WORD height);
	~Bitmap();

	void setPixel(WORD x,WORD y,BYTE c);
	void resetPixel(WORD x,WORD y);
	BYTE getPixel(WORD x,WORD y);
};

Bitmap::Bitmap(WORD w,WORD h){
	page=0;
	width=w;
	height=h;
	// reserve address space only; no storage is committed yet
	lpBits=(BYTE *)VirtualAlloc(NULL,	// start
		(DWORD)w*h,			// size
		MEM_RESERVE, PAGE_NOACCESS);
	assert(lpBits);
}

Bitmap::~Bitmap(){
	for(int i=0;i<page;i++)	VirtualFree(pages[i],PAGESIZE,MEM_DECOMMIT);
	VirtualFree(lpBits,0,MEM_RELEASE);
}

void Bitmap::setPixel(WORD x,WORD y,BYTE c){
	__try{
		lpBits[y*width+x]=c;
	}
	__except(EXCEPTION_EXECUTE_HANDLER){
		// commit the page containing the pixel, then retry the write
		pages[page]=(BYTE *)VirtualAlloc(
			lpBits+((y*width+x)/PAGESIZE)*PAGESIZE,	// page start
			PAGESIZE,				// size
			MEM_COMMIT, PAGE_READWRITE);
		assert(pages[page]);
		page++;
		lpBits[y*width+x]=c;
	}
}

void Bitmap::resetPixel(WORD x,WORD y){
	__try{
		lpBits[y*width+x]=0;
	}
	__except(EXCEPTION_EXECUTE_HANDLER){
		// page never committed, so the pixel is already 0
	}
}

BYTE Bitmap::getPixel(WORD x,WORD y){
	BYTE bit;

	__try{
		bit=lpBits[y*width+x];
	}
	__except(EXCEPTION_EXECUTE_HANDLER){
		bit=0;
	}
	return bit;
}


void main(void){
	Bitmap &bmp=*new Bitmap(10000,10000);
	bmp.setPixel(0,0,1);
	bmp.setPixel(5000,5000,1);
	bmp.setPixel(9999,9999,1);
	delete &bmp;
}
 
---8<---


  bye
		Damjan Lango

Path: nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!
news.uni-stuttgart.de!rz.uni-karlsruhe.de!news.urz.uni-heidelberg.de!
sun0.urz.uni-heidelberg.de!hare
From: h...@mathi.uni-heidelberg.de (Hannes Reinecke)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 28 Feb 1995 12:57:26 GMT
Organization: University of Heidelberg, Germany
Lines: 50
Distribution: world
Message-ID: <HARE.95Feb28135726@mathi.uni-heidelberg.de>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<1995Feb10.093116.20768@imec.be>
	<3hv5v9$46e@news.rwth-aachen.de>
Reply-To: h...@mathi.uni-heidelberg.de
NNTP-Posting-Host: zarquon.mathi.uni-heidelberg.de
In-reply-to: ralf@fred.basl.rwth-aachen.de's message of 16 Feb 1995 09:30:17 GMT

>>>>> "Ralf" == Ralf Schwedler <r...@fred.basl.rwth-aachen.de> writes:

Ralf> In article <1995Feb10.093116.20...@imec.be>,
Ralf> buyta...@imec.be (Steven Buytaert) writes:

 [ malloc-prg deleted ]

Ralf> Anyhow, from the point of view of an application programmer,
Ralf> I consider the way malloc is implemented absolutely
Ralf> dangerous. I want to be able to handle error conditions as
Ralf> close as possible to the point of their origin. The
Ralf> definition of malloc is 'allocate memory', not 'intend to
Ralf> allocate memory'.

Hmm. Having read this, I wondered whether you have heard about virtual
memory. _Every_ process has access to a so-called virtual memory
segment, which under Linux (i386) has a size of 3 GB
(cf. <asm/processor.h>). So, if you malloc() normally, you will get (in
the best case) this amount (unless the system crashes :-).
The amount of installed physical memory is merely a matter of speed.
 
Ralf> I want to decide myself how to handle
Ralf> memory overflow conditions; from that point of view I cannot
Ralf> accept any program abort not controlled by my
Ralf> application.

In normal conditions, in fact, you are the only one responsible for
out-of-memory cases created by your program; as far as the system is
concerned, it will simply refuse to give you any memory (i.e. malloc and
friends will return NULL).
 
Ralf> All hints given so far (e.g. using some
Ralf> technique to find the amount of free memory) are useless (If
Ralf> I understood it well, even calloc will abort in situations
Ralf> where the memory is not available; please stop reading here
Ralf> if this is not the case). Such methods would rely on
Ralf> friendly behaviour of other apps running; which is not
Ralf> acceptable in a multitasking environment.

Really?

Have fun

Hannes
-------
Hannes Reinecke			     |
<h...@vogon.mathi.uni-heidelberg.de> |  XVII.: WHAT ?
				     | 	
PGP fingerprint available            | 		T.Pratchett: Small Gods
see  'finger' for details	     |		

Newsgroups: comp.os.linux.development.apps
Path: nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!
news.uni-stuttgart.de!news.belwue.de!delos.stgt.sub.org!delos.stgt.sub.org!
news.maz.net!news.ppp.de!xlink.net!howland.reston.ans.net!ix.netcom.com!
netcom.com!csus.edu!uop!pacbell.com!att-out!nntpa!not-for-mail
From: v...@rhea.cnet.att.com (Vivek Kalra)
Subject: Re: Linux is 'creating' memory ?!
Message-ID: <D4pvFq.GpE@nntpa.cb.att.com>
Sender: n...@nntpa.cb.att.com (Netnews Administration)
Nntp-Posting-Host: rhea.cnet.att.com
Organization: AT&T
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<1995Feb21.174848.27897@madnix.uucp> <3igcem$mop@usenet.srv.cis.pitt.edu> 
<3iklan$278@linotte.republique.fr>
Date: Tue, 28 Feb 1995 15:38:14 GMT
Lines: 38

In article <3iklan$...@linotte.republique.fr>,
Thierry Bousch <bousch%linotte.u...@topo.math.u-psud.fr> wrote:
>
>Note also that if you really run out of virtual memory, the system is
>probably already paging like hell, and you won't be able to do anything
>useful on it; it's not very different from a frozen system, and you'll
>probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
>won't respond (in a reasonable time, that is).
>
Okay, let's see: I have a machine with 8M of RAM and 12M of swap.
At this given moment, I have, say, 8 of those megs available.  So I
run this super-duper image-processing program I have -- it checks
the current input size and determines that it needs 16M of memory
to do its thing on this input.  So it malloc()s 16M and finds that
everything is fine and starts its thing, runs for three hours, and,
err, ooops, runs out of memory.  Now, if malloc() had failed
earlier, I wouldn't have had to wait for three hours to find that
out, would I?  Presumably, the program would have just told me at
the very beginning that not enough memory was available to do its
thing on the current input.  And, no, the system before running
this program need not have been paging like hell, as you put it -- 
there was 8M of memory available, remember?

Even worse, I might have a program that may already have modified
its input before finding out that it cannot finish its thing
because of lack of memory and so cannot write out the correct
output -- but the input is gone too.  So now what?

The problems of not handling a NULL return from malloc() are well
known.  To have a malloc() that might fail in a way that doesn't
give the programmer any chance to recover is just mind-boggling.

Vivek
-- 
Vivek                        email address                      signature
dsclmr: any ideas above, if there, are mine.  All mine.  And an illusion.
    Oh, what a tangled web we weave, when first we practice to weave.
    Quote for the day: '

Path: nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!
news.uni-stuttgart.de!rz.uni-karlsruhe.de!i13a3.ira.uka.de!rogina
From: rog...@ira.uka.de (Ivica Rogina)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 28 Feb 1995 19:35:18 GMT
Organization: OrgFreeware
Lines: 18
Sender: rog...@i13a3.ira.uka.de (Ivica Rogina)
Distribution: world
Message-ID: <3ivttm$106@nz12.rz.uni-karlsruhe.de>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<1995Feb10.093116.20768@imec.be>  <3hv5v9$46e@news.rwth-aachen.de> 
<HARE.95Feb28135726@mathi.uni-heidelberg.de>
Reply-To: rog...@ira.uka.de (Ivica Rogina)
NNTP-Posting-Host: i13a3.ira.uka.de
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit


h...@mathi.uni-heidelberg.de (Hannes Reinecke) wrote:

> Hmm. Having read this, I wondered whether you have heard about virtual
> memory. _Every_ process has access to a so-called virtual memory
> segment, which under Linux (i386) has a size of 3 GB
> (cf. <asm/processor.h>). So, if you malloc() normally, you will get (in
> the best case) this amount (unless the system crashes :-).

This is not a matter of virtual memory. If I do a malloc(), I don't care
what the size of the RAM or the swap space or the virtual memory is.
Whatever it is, I want to be sure that I can use all the memory that was 
assigned to me without having to wait for the sysop to push in another
couple-of-gigs-disc.
And, I don't want any user to be able to bring the entire system to a halt
by simply allocating a lot of memory. 

Ivica

Newsgroups: comp.os.linux.development.apps
From: ja...@purplet.demon.co.uk (Mike Jagdis)
Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
news.mathworks.com!udel!gatech!howland.reston.ans.net!news.sprintlink.net!
peernews.demon.co.uk!purplet!jaggy
Subject: Re: Linux is 'creating' memory ?!
Organization: FidoNet node 2:252/305 - The Purple Tentacle, Reading
X-Posting-Host: purplet.demon.co.uk
Date: Sun, 5 Mar 1995 14:06:00 +0000
Message-ID: <820.2F5A31DA@purplet.demon.co.uk>
Sender: use...@demon.co.uk
Lines: 35

* In message <3ivttm$...@nz12.rz.uni-karlsruhe.de>, Ivica Rogina said:

IR> This is not a matter of virtual memory. If I do a malloc(),
IR> I don't care
IR> what the size of the RAM or the swap space or the virtual
IR> memory is.
IR> Whatever it is, I want to be sure that I can use all the
IR> memory that was
IR> assigned to me without having to wait for the sysop to push
IR> in another couple-of-gigs-disc.

Then you *have* to dirty each page in the area you request yourself to 
forcibly map them as individual, distinct pages.

  What the less experienced application writers don't realise is that even 
the kernel has no way of knowing just how much memory+swap is really usable 
at any one time. Text regions may be paged from the executable file - they 
may or may not require a physical memory page at any moment and *never* 
require a swap page. Similarly the OS cannot know in advance which pages 
will be shared and which will require a new page to be used, nor can it know 
when a shared page will need to be split due to a copy on write.

  The *only* way the OS could guarantee to have a page available for you is 
to take the most pessimistic view and save a swap page for *every* possible 
page used - i.e. every process requires text pages + data pages + shared 
library pages of swap (shared libraries are shared in memory but require 
distinct swap for each process). And then you have to figure out how to 
handle stack allocations which can probably only be guaranteed by committing 
plenty (a few meg? gig?) of pages...

  Seriously, if your programmers cannot handle this they should be trained 
or moved back to non-VM programming.

                                Mike 

Newsgroups: comp.os.linux.development.apps
Path: bga.com!news.sprintlink.net!howland.reston.ans.net!math.ohio-state.edu!
uwm.edu!lll-winken.llnl.gov!fnnews.fnal.gov!gw1.att.com!nntpa!not-for-mail
From: v...@rhea.cnet.att.com (Vivek Kalra)
Subject: Re: Linux is 'creating' memory ?!
Message-ID: <D51CGt.LFo@nntpa.cb.att.com>
Sender: n...@nntpa.cb.att.com (Netnews Administration)
Nntp-Posting-Host: rhea.cnet.att.com
Organization: AT&T
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3iklan$278@linotte.republique.fr> <D4pvFq.GpE@nntpa.cb.att.com> 
<JEM.95Mar5213712@delta.hut.fi>
Date: Mon, 6 Mar 1995 20:19:40 GMT
Lines: 93

In article <JEM.95Mar5213...@delta.hut.fi>,
Johan Myreen <j...@snakemail.hut.fi> wrote:
>In article <D4pvFq....@nntpa.cb.att.com> v...@rhea.cnet.att.com (Vivek Kalra) 
writes:
>
>>Okay, let's see: I have a machine with 8M of RAM and 12M of swap.
>>At this given moment, I have, say, 8 of those megs available.  So I
> ^^^^^^^^^^^^^^^^^^^^
>
>>run this super-duper image-processing program I have -- it checks
>>the current input size and determines that it needs 16M of memory
>>to do its thing on this input.  So it malloc()s 16M and finds that
>>everything is fine and starts its thing, runs for three hours, and,
>>err, ooops, runs out of memory.  Now, if malloc() had failed
>>earlier, I wouldn't have had to wait for three hours to find that
>>out, would I?
>
>I agree that this situation is not so good. But if you think of the
>other side of this, what if you had started your program and it would
>have refused to do anything, because it would have needed 16 Mbytes
>three hours later, and the memory *had* been available at that time?
>
But what if it was *not* three hours but three seconds?  The point
is that ANSI/ISO *require* malloc() to return NULL if the asked-for
memory cannot be allocated so the programmer can take appropriate
action.  A program, for example, should be able to remove a file if
malloc() does not return NULL and the program is in the process of
recreating that file.  To have a program simply fail (simply?  Just
*how* does it fail?  Geez, and I thought I was not using
MSWindoze...) after its destructive behaviour is simply not
acceptable.  Would you like my bank-account program to fail without
updating your bank account when you deposit a check -- and not know
that it had?

void bankaccount(void)
{
   t_record *new_record;

   if ((new_record = malloc(sizeof (t_record))) != NULL)
   {
      UpdateBankAccount (from_bank_account, IS_A_WITHDRAWAL, amount);

      /* do something */

      InitBankRecord (new_record, amount);

      /* do something else */

      UpdateBankAccount (to_bank_account, IS_A_DEPOSIT, amount);
   }
   else
      FatalError ("Yo!  Get more memory!");
}

This brain-numb-if-not-dead function could bomb after withdrawing
the money from the source account but before depositing it in the
destination account because malloc() didn't return a NULL and yet
InitBankRecord() caused the program to fail.  As far as I know, it
shouldn't fail simply because InitBankRecord() tries to write to
new_record -- not as far as ANSI/ISO are concerned.

>Let's compare memory usage to disk usage: cp does not (as far as I
>know) check in advance if there is enough disk space available when
>copying a file. That would make no sense, because the outcome would be
>totally worthless. Some other process could fill up the disk during
>the copy or it could free enough space to make the copy succeed, even
>if it looked impossible from the start.
>
A more appropriate example is probably mv: cp at least does not
destroy the original.  Just what happens if you try to move a file
from one file-system to another and this other fs doesn't have
enough space for the file?

I don't know what POSIX says mv should do under these
circumstances; I do know what ANSI/ISO say about
malloc()/calloc()/realloc():

   ANSI section 4.10.3 Memory Management Functions

      ...  The pointer returned points to the start (lowest byte
      address) of the allocated space.  If the space cannot be
      allocated, a null pointer is returned.  ...

>To be safe, every process should have a "maximum file write size"
>attribute, and the kernel should refuse to start a process if the
>available space on any of the accessible file systems was less than
>the attribute.
>
There *is* something called ulimit in this universe...
-- 
Vivek                        email address                      signature
dsclmr: any ideas above, if there, are mine.  All mine.  And an illusion.
    Oh, what a tangled web we weave, when first we practice to weave.
    Quote for the day: '

Path: bga.com!news.sprintlink.net!howland.reston.ans.net!gatech!udel!
news.mathworks.com!news.kei.com!travelers.mail.cornell.edu!tuba.cit.cornell.edu!
crux5!sl14
From: s...@crux5.cit.cornell.edu (S. Lee)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 6 Mar 1995 17:02:54 GMT
Organization: Nekomi Institute of Technology
Lines: 16
Message-ID: <3jff7u$mdr@tuba.cit.cornell.edu>
References: <820.2F5A31DA@purplet.demon.co.uk>
NNTP-Posting-Host: 128.253.232.67

In article <820.2F5A3...@purplet.demon.co.uk>,
Mike Jagdis <ja...@purplet.demon.co.uk> wrote:
>
>Then you *have* to dirty each page in the area you request yourself to 
>forcibly map them as individual, distinct pages.

[...]
>
>  Seriously, if your programmers cannot handle this they should be trained 
>or moved back to non-VM programming.

My test program dies if it runs out of memory while dirtying the pages.  How
do you suggest I should handle this?

s...@cornell.edu
Witty .sig under construction.

Newsgroups: comp.os.linux.development.apps
From: ja...@purplet.demon.co.uk (Mike Jagdis)
Path: bga.com!news.sprintlink.net!howland.reston.ans.net!gatech!swrinde!
pipex!peernews.demon.co.uk!purplet!jaggy
Subject: Re: Linux is 'creating' memory ?!
Organization: FidoNet node 2:252/305 - The Purple Tentacle, Reading
X-Posting-Host: purplet.demon.co.uk
Date: Tue, 7 Mar 1995 20:03:00 +0000
Message-ID: <823.2F5E17F1@purplet.demon.co.uk>
Sender: use...@demon.co.uk
Lines: 18

* In message <3jff7u$...@tuba.cit.cornell.edu>, S. Lee said:

SL> >Then you *have* to dirty each page in the area you request yourself to
SL> >forcibly map them as individual, distinct pages.

SL> [...]
SL> >
SL> >  Seriously, if your programmers cannot handle this they should be
SL> > trained or moved back to non-VM programming.

SL> My test program dies if run out of memory while dirtying the
SL> pages.  How do you suggest I should handle this?

Use a fault handler. If you need to guarantee the existence of those pages 
you have no choice.
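
For illustration, a minimal sketch of such a fault handler, assuming the
POSIX sigaction()/sigsetjmp() interfaces (dirty_pages() is a hypothetical
helper; whether the kernel delivers a catchable SIGSEGV here, rather than
simply killing the process, depends on the implementation):

#include <setjmp.h>
#include <signal.h>
#include <stddef.h>

static sigjmp_buf fault_env;

static void fault_handler(int sig)
{
   (void)sig;
   siglongjmp(fault_env, 1);           /* escape the faulting access */
}

/* Dirty one byte per page of buf; returns 0 on success, -1 if a page
   could not be committed. */
static int dirty_pages(char *buf, size_t len, size_t pagesize)
{
   struct sigaction sa, old;
   size_t i;

   sa.sa_handler = fault_handler;
   sigemptyset(&sa.sa_mask);
   sa.sa_flags = 0;
   sigaction(SIGSEGV, &sa, &old);

   if (sigsetjmp(fault_env, 1)) {
      sigaction(SIGSEGV, &old, NULL);  /* restore the old handler */
      return -1;
   }
   for (i = 0; i < len; i += pagesize)
      buf[i] = 0;                      /* force a distinct, real page */
   sigaction(SIGSEGV, &old, NULL);
   return 0;
}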

                                Mike  

Newsgroups: comp.os.linux.development.apps
From: ja...@purplet.demon.co.uk (Mike Jagdis)
Path: bga.com!news.sprintlink.net!howland.reston.ans.net!gatech!swrinde!
pipex!peernews.demon.co.uk!purplet!jaggy
Subject: Re: Linux is 'creating' memory ?!
Organization: FidoNet node 2:252/305 - The Purple Tentacle, Reading
X-Posting-Host: purplet.demon.co.uk
Date: Wed, 8 Mar 1995 20:25:00 +0000
Message-ID: <824.2F5E17F1@purplet.demon.co.uk>
Sender: use...@demon.co.uk
Lines: 32

* In message <D51CGt....@nntpa.cb.att.com>, Vivek Kalra said:

VK> The point is that ANSI/ISO *require* malloc() to return NULL if the
VK> asked for memory cannot be allocated so the programmer can take
VK> appropriate action.

The confusion is over the word "allocate". Malloc allocates a region of the 
process memory space suitable for an object of the stated size but the OS 
does not necessarily commit memory to that space until you dirty it.

  If you are to have the OS avoid the seg fault trap for you then it *has* 
to reserve a physical page for *every* possible page of every process - it 
has a fence that specifies data size but there is no way for the OS to know 
current stack requirements (unless it is Xenix 286 with stack probes enabled 
:-) ).

  Anything less simply sweeps the problem under the carpet. If it is 
*really* a problem for you (i.e. your system is overloaded) then you are 
*still* going to get shafted!

VK> [...]
VK> This brain-numb-if-not-dead function could bomb after withdrawing
VK> the money from the source account but before depositing it the
VK> destination account because malloc() didn't return a NULL
VK> and yet InitBankRecord() caused the program to fail.

This is just poorly designed code. Page allocation is just one of many ways 
that the program could stop unexpectedly at any point for no clear reason. 
If that is a problem you have to design around it.

                                Mike  

Path: bga.com!news.sprintlink.net!howland.reston.ans.net!gatech!
newsfeed.pitt.edu!uunet!news.tele.fi!news.csc.fi!news.helsinki.fi!not-for-mail
From: torva...@cc.Helsinki.FI (Linus Torvalds)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 8 Mar 1995 11:41:23 +0200
Organization: University of Helsinki
Lines: 127
Sender: torva...@cc.helsinki.fi
Message-ID: <3jju43$gc8@klaava.helsinki.fi>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> <5715@artcom0.north.de> 
<bruce-2502950933310001@17.205.4.52> <3itc77$9lj@ninurta.fer.uni-lj.si>
NNTP-Posting-Host: klaava.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

In article <3itc77$...@ninurta.fer.uni-lj.si>,
Damjan Lango <lan...@ana.fer.uni-lj.si> wrote:
>
>Absolutely agree!
>And I can't understand how this malloc bug came so far up to 1.1.x
>It *must* be fixed before 1.2!!!

Too late... 

Anyway, it's not a simple matter of just checking how much free memory
there is: people seem to be completely unaware of how hard a problem
this actually is. 

Please, read any book about resource dead-locks etc, and you'll find
that these dead-locks *can* be resolved, but at the cost of 

 - efficiency (to be sure you can handle any dead-lock, you'll have to
   do a *lot* of work). 
 - usability (to make sure you never get any dead-lock, you have to say
   no to somebody, and you'll have to say "no" a *lot* earlier than most
   people seem to think). 

In the case of the memory handling, actually counting pages isn't that
much of an overhead (we just have one resource, and one page is as good
as any and they don't much depend on each other, so the setup is
reasonably simple), but the usability factor is major. 

As it stands, you can add these lines to your /etc/profile:

	ulimit -d 8192
	ulimit -s 2048

and it will limit your processes to 8MB of data space, and 2MB of stack. 

And no, it doesn't guarantee anything at all, but hey, your malloc()
will return NULL. 
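
For a single program, roughly the same effect can be had from inside the
process; a sketch using the setrlimit() interface, with the caveat that
whether malloc() then returns NULL depends on the allocator actually
hitting the data-segment limit through brk():

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
   struct rlimit rl;

   rl.rlim_cur = rl.rlim_max = 8 * 1024 * 1024;   /* 8MB data space */
   if (setrlimit(RLIMIT_DATA, &rl) == -1)
      perror("setrlimit(RLIMIT_DATA)");

   rl.rlim_cur = rl.rlim_max = 2 * 1024 * 1024;   /* 2MB stack */
   if (setrlimit(RLIMIT_STACK, &rl) == -1)
      perror("setrlimit(RLIMIT_STACK)");

   if (malloc(16 * 1024 * 1024) == NULL)          /* over the limit */
      fprintf(stderr, "malloc returned NULL\n");
   return 0;
}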

Personally, I consider the linux mm handling a feature, not a bug (hmm.. 
I wrote it, so that's not too surprising).  People can whine all they
want, but please at least send out patches to fix it at the same time. 
You'll find that some people definitely do *not* want to use your
patches. 

Handling malloc() together with fork() makes for problems, adding the
problem of the stack space makes it worse, and adding the complexity of
private file mappings doesn't help.  Before complaining, *please* think
about at least the following example scenarios (and you're allowed to
think up more of your own):

1) a database process maps in a database privately into memory.  The
   database is 32MB in size, but you only have 16MB free memory/swap. 
   Do you accept the mmap()?

 - The database program probably doesn't re-write the database in memory:
   it may change a few records in-core, but the number of pages it needs
   might be less than 10 (the pages it doesn't modify don't count as
   usage, as we can always throw them out when we want the memory back). 

 - on the other hand, how does the kernel *know*? It might be a program
   that just mmap()'s something and then starts writing to all the
   pages. 

2) GNU emacs (ugh) wants to start up a shell script.  In the meantime,
   GNU emacs has (as it's wont to do) grown to 17 MB, and you obviously
   don't have much memory left. Do you accept the fork?

 - emacs will happily do an exec afterwards, and will actually use only
   10 pages before that in the child process (stack, mainly).  Sure, let
   it fork(). 

 - How is the kernel supposed to know that it will fork? No way can it
   fork, as we don't have the potential 17MB of memory that now gets
   doubled. 

 - vfork() isn't an option.  Trust me on this one.  vfork is *ugly*. 
   Besides, we might actually want to run the same process concurrently. 

3) you have a nice quiescent little program that uses about 100kB of
   memory, and has been a good little boy for the last 5 minutes.  Now
   it obviously wants to do something, so it forks 10 times.  Do we
   accept it?

 - hell yes, we have 10MB free, and 10 forks of this program only uses
   about a megabyte of that.  Go ahead. 

 - hell, no: what if this nice little program just tried to make us feel
   secure, and after the forks turns into the Program From Hell (tm)? It
   might get into a recursive loop, and start eating up stack space. 
   Wheee..  Our 10MB of memory are gone in 5 seconds flat, and the OS is
   left stranded wondering what the hell hit it. 

4) You have a nice little 4MB machine, no swap, and you don't run X. 
   Most programs use shared libraries, and everybody is happy.  You
   don't use GNU emacs, you use "ed", and you have your own trusted
   small-C compiler that works well.  Does the system accept this?

 - why, of course it does. It's a tight setup, but there's no panic.

 - NO, DEFINITELY NOT.  Each shared library in place actually takes up
   600kB+ of virtual memory, and the system doesn't *know* that nothing
   starts using these pages in all the processes alive.  Now, with just
   10 processes (a small make, and all the daemons), the kernel is
   actually juggling more than 6MB of virtual memory in the shared
   libraries alone, although only a fraction of that is actually in use
   at that time. 

It's easy to make malloc() return NULL under DOS: you just see if you
have any of the 640kB free, and if you have, it's ok. 

It's easy to make malloc() return NULL under Windows: there is no fork()
system call, and nobody expects the machine to stay up anyway, so who
cares? When you say "I wrote a program that crashed Windows", people
just stare at you blankly and say "Hey, I got those with the system,
*for free*". 

It's also easy to make malloc() return NULL under some trusted large
UNIX server: people running those are /expected/ to have an absolute
minimum of 256MB of RAM, and double that of swap, so we really can make
sure that any emacs that wants to fork() must have the memory available
(if you're so short of memory that 17MB is even close to tight, it's ok
to say that emacs can't fork). 

It's *not* easy to say no to malloc() when you have 8-32MB of memory,
and about as much swap-space, and fork/mmap/etc works.  You can do it,
sure, but you'd prefer a system that doesn't. 

		Linus

Path: nntp.gmd.de!news.rwth-aachen.de!newsserver.rrzn.uni-hannover.de!
aix11.hrz.uni-oldenburg.de!nordwest.pop.de!informatik.uni-bremen.de!
marvin.pc-labor.uni-bremen.de!news.uni-stuttgart.de!rz.uni-karlsruhe.de!
xlink.net!howland.reston.ans.net!vixen.cso.uiuc.edu!uwm.edu!news.alpha.net!
news.mathworks.com!usenet.eel.ufl.edu!chaos.dac.neu.edu!narnia.ccs.neu.edu!
narnia.ccs.neu.edu!albert
From: alb...@snowdon.ccs.neu.edu (Albert Cahalan)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 10 Mar 1995 07:04:57 GMT
Organization: Northeastern University, College of Computer Science
Lines: 20
Message-ID: <ALBERT.95Mar10020458@snowdon.ccs.neu.edu>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> <5715@artcom0.north.de>
	<bruce-2502950933310001@17.205.4.52> <3itc77$9lj@ninurta.fer.uni-lj.si>
	<3jju43$gc8@klaava.helsinki.fi>
NNTP-Posting-Host: snowdon.ccs.neu.edu
In-reply-to: torvalds@cc.Helsinki.FI's message of 8 Mar 1995 11:41:23 +0200

>>>>> "LT" == Linus Torvalds <torva...@cc.Helsinki.FI> writes:


LT> 2) GNU emacs (ugh) wants to start up a shell script.  In the meantime, GNU
LT> emacs has (as it's wont to do) grown to 17 MB, and you obviously don't
LT> have much memory left. Do you accept the fork?

LT>  - emacs will happily do an exec afterwards, and will actually use only 10
LT> pages before that in the child process (stack, mainly).  Sure, let it
LT> fork().

LT>  - How is the kernel supposed to know that it will fork? No way can it
LT> fork, as we don't have the potential 17MB of memory that now gets doubled.

Why must fork() and exec() be used?  It would be better to have a spawn()
that would produce the same result by a different method.
--

Albert Cahalan
alb...@ccs.neu.edu

Newsgroups: comp.os.linux.development.apps
Path: nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!
news.uni-stuttgart.de!news.belwue.de!delos.stgt.sub.org!delos.stgt.sub.org!
news.maz.net!pipex!howland.reston.ans.net!news.moneng.mei.com!uwm.edu!
fnnews.fnal.gov!gw1.att.com!nntpa!not-for-mail
From: v...@rhea.cnet.att.com (Vivek Kalra)
Subject: Re: Linux is 'creating' memory ?!
Message-ID: <D58qL3.K83@nntpa.cb.att.com>
Sender: n...@nntpa.cb.att.com (Netnews Administration)
Nntp-Posting-Host: rhea.cnet.att.com
Organization: AT&T
References: <824.2F5E17F1@purplet.demon.co.uk>
Date: Fri, 10 Mar 1995 20:07:51 GMT
Lines: 43

In article <824.2F5E1...@purplet.demon.co.uk>,
Mike Jagdis <ja...@purplet.demon.co.uk> wrote:
>* In message <D51CGt....@nntpa.cb.att.com>, Vivek Kalra said:
>
>VK> The point is that ANSI/ISO *require* malloc() to return NULL if the
>VK> asked for memory cannot be allocated so the programmer can take
>VK> appropriate action.
>
>The confusion is over the word "allocate". Malloc allocates a region of the
>process memory space suitable for an object of the stated size but the OS
>does not necessarily commit memory to that space until you dirty it.
>
I agree that the confusion is over the word allocate: What you are
saying sounds to me not malloc() -- *m*emory *alloc*ation, mind you
-- but a mere *promise* to *try* to allocate memory when actually
used.  If malloc() returning a non-NULL had actually *allocated*
the memory it said it had, it would never fail when said memory was
actually used.

>
>VK> [...]
>VK> This brain-numb-if-not-dead function could bomb after withdrawing
>VK> the money from the source account but before depositing it the
>VK> destination account because malloc() didn't return a NULL
>VK> and yet InitBankRecord() caused the program to fail.
>
>This is just poorly designed code. Page allocation is just one of many ways
>that the program could stop unexpectedly at any point for no clear reason.
>If that is a problem you have to design around it.
>
As I said, not the smartest of codes lying around in a safe deposit
box.  However, the point was that it was perfectly correct as far
as the ANSI/ISO spec is concerned -- and yet it could fail simply
because it trusted the return value of malloc().  Not A Good
Thing (tm), if you ask me.  In such a world, we might as well
forget that the return value of malloc() has any meaning
whatsoever.  And I, for one, am not going to be the one to say so
in comp.lang.c or comp.std.c...  :-)
-- 
Vivek                        email address                      signature
dsclmr: any ideas above, if there, are mine.  All mine.  And an illusion.
    Oh, what a tangled web we weave, when first we practice to weave.
    Quote for the day: '

Path: bga.com!news.sprintlink.net!pipex!sunic!sunic.sunet.se!news.funet.fi!
hydra.Helsinki.FI!news.helsinki.fi!not-for-mail
From: torva...@cc.Helsinki.FI (Linus Torvalds)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 13 Mar 1995 08:20:33 +0200
Organization: University of Helsinki
Lines: 19
Sender: torva...@cc.helsinki.fi
Message-ID: <3k0o7h$b11@klaava.helsinki.fi>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3itc77$9lj@ninurta.fer.uni-lj.si> <3jju43$gc8@klaava.helsinki.fi> 
<ALBERT.95Mar10020458@snowdon.ccs.neu.edu>
NNTP-Posting-Host: klaava.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

In article <ALBERT.95Mar10020...@snowdon.ccs.neu.edu>,
Albert Cahalan <alb...@snowdon.ccs.neu.edu> wrote:
>
>Why must fork() and exec() be used?  It would be better to have a spawn()
>that would produce the same result by a different method.

"spawn()" is the simpler setup, but the fork()/execve() cycle is
actually one thing that makes unix so powerful: it's simply so much more
flexible.  A simple "spawn()" doesn't allow any set-up in the context of
the new process: it just starts the new image.  While that is sometimes
acceptable, it often isn't.. 

Most fork()/exec() setups don't actually do the exec() right after the
fork(), at least not in non-trivial setups.  Instead, the child process
does various cleanups or administrative stuff before actually doing the
exec(), like changing tty process groups, closing unnecessary (for the
child) file descriptors, getting rid of excess privileges etc. 
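
For illustration, a sketch of that kind of between-fork()-and-exec()
setup; run_child() and the particular cleanups are hypothetical, chosen
only to show work that a one-shot spawn() could not easily express:

#include <sys/types.h>
#include <unistd.h>

/* Start path in a child process, with per-child setup done between
   fork() and exec().  Returns the child pid, or -1 on error. */
pid_t run_child(const char *path, char *const argv[], int logfd)
{
   pid_t pid = fork();

   if (pid == 0) {                     /* child */
      setsid();                        /* leave the parent's tty group */
      dup2(logfd, STDOUT_FILENO);      /* send output to a log file */
      close(logfd);                    /* close the now-duplicated fd */
      if (setuid(getuid()) == -1)      /* shed any extra privileges */
         _exit(127);
      execv(path, argv);
      _exit(127);                      /* only reached if exec failed */
   }
   return pid;
}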

		Linus

Path: bga.com!news.sprintlink.net!howland.reston.ans.net!vixen.cso.uiuc.edu!
peltz
From: pe...@cerl.uiuc.edu (Steve Peltz)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 14 Mar 1995 07:54:52 GMT
Organization: University Communications, Inc.
Lines: 83
Message-ID: <3k3i4c$341@vixen.cso.uiuc.edu>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<bruce-2502950933310001@17.205.4.52> <3itc77$9lj@ninurta.fer.uni-lj.si> 
<3jju43$gc8@klaava.helsinki.fi>
NNTP-Posting-Host: medusa.cerl.uiuc.edu

In article <3jju43$...@klaava.helsinki.fi>,
Linus Torvalds <torva...@cc.Helsinki.FI> wrote:
>Handling malloc() together with fork() makes for problems, adding the
>problem of the stack space makes it worse, and adding the complexity of
>private file mappings doesn't help.  Before complaining, *please* think
>about at least the following example scenarios (and you're allowed to
>think up more of your own):
>
>1) a database process maps in a database privately into memory.  The
>   database is 32MB in size, but you only have 16MB free memory/swap. 
>   Do you accept the mmap()?

Doesn't an mmap'ed segment get swapped to the file itself (other than
ANON)? Why would it need to reserve swap space?

>
>2) GNU emacs (ugh) wants to start up a shell script.  In the meantime,
>   GNU emacs has (as it's wont to do) grown to 17 MB, and you obviously
>   don't have much memory left. Do you accept the fork?

So you have 17MB of swap space that you have to have free for a millisecond
in order to fork a process from a huge process. Is that such a problem? It
will be freed up almost immediately.

> - vfork() isn't an option.  Trust me on this one.  vfork is *ugly*. 
>   Besides, we might actually want to run the same process concurrently. 

Actually, making the only difference between vfork and fork be whether
swap space gets committed would be a pretty good solution (and don't
worry about the other *ugly* parts of vfork; since it isn't implemented
anyway, you aren't breaking anything that isn't already broken). However,
I am loath to suggest actually making a use for vfork, as people would
then use it, thus creating more inconsistency in the world.

>3) you have a nice quiescent little program that uses about 100kB of
>   memory, and has been a good little boy for the last 5 minutes.  Now
>   it obviously wants to do something, so it forks 10 times.  Do we
>   accept it?

Yes. In any scenario. I don't understand how this applies to the current
problem. If one of the forked processes is unable to allocate more memory
to do something, then it fails; if it is doing malloc, then it can detect
the failure by the result, rather than getting a segment violation.

>4) You have a nice little 4MB machine, no swap, and you don't run X. 
>   Most programs use shared libraries, and everybody is happy.  You
>   don't use GNU emacs, you use "ed", and you have your own trusted
>   small-C compiler that works well.  Does the system accept this?

Sure, if there's no swap, there's nothing to over-commit.

> - NO, DEFINITELY NOT.  Each shared library in place actually takes up
>   600kB+ of virtual memory, and the system doesn't *know* that nothing
>   starts using these pages in all the processes alive.  Now, with just
>   10 processes (a small make, and all the daemons), the kernel is
>   actually juggling more than 6MB of virtual memory in the shared
>   libraries alone, although only a fraction of that is actually in use
>   at that time. 

Shared libraries should not be writeable. Are you saying they are, and are
C-O-W?

Whenever a writeable non-mmap'ed non-shared segment is allocated to the
address space (whether by sbrk, mmap with ANON, or fork), each such page
needs to have space reserved out of swap. Linus, you talk about deadlock -
deadlock can only occur when you actually try to prevent errors caused by
overcommitment of resources. Causing an error due to such overcommitment
is not what is usually meant by deadlock avoidance. Deadlock is what might
happen if a process were to be suspended when it tries to access memory that
is not actually available after it has been allocated (and, since Unix
doesn't have any sort of resource-utilization declarations for a program
to make, deadlock avoidance cannot be done at the system level).

For those processes that really truly want lazy/sparse memory allocation,
do it using the file system and mmap (Linux does support sparse files,
and allocation of blocks on write to such a file that is mmap'ed, right?).
The semantics of mmap allow giving segment faults if the underlying file
system runs out of space, right? Note: ANON should come directly out of
swap and be reserved - however, if it is shareable, it does not need to
be re-committed on fork, as it will not be re-allocated on write.
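
For illustration, a sketch of that approach; sparse_alloc() is a
hypothetical helper, and it assumes a complete MAP_SHARED implementation,
which (as noted elsewhere in this thread) Linux did not yet have:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Back a region with a sparse file instead of anonymous memory;
   disk blocks are allocated only as pages are actually written. */
void *sparse_alloc(const char *path, size_t len)
{
   int fd = open(path, O_RDWR | O_CREAT, 0600);
   void *p;

   if (fd == -1)
      return NULL;
   if (ftruncate(fd, (off_t)len) == -1) {   /* extend with a hole */
      close(fd);
      return NULL;
   }
   p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
   close(fd);                               /* mapping stays usable */
   return p == MAP_FAILED ? NULL : p;
}
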
-- 
Steve Peltz
tric...@uiuc.edu

Newsgroups: comp.os.linux.development.apps
From: ja...@purplet.demon.co.uk (Mike Jagdis)
Path: bga.com!news.sprintlink.net!uunet!in1.uu.net!pipex!
peernews.demon.co.uk!purplet!jaggy
Subject: Re: Linux is 'creating' memory ?!
Organization: FidoNet node 2:252/305 - The Purple Tentacle, Reading
X-Posting-Host: purplet.demon.co.uk
Date: Tue, 14 Mar 1995 22:11:00 +0000
Message-ID: <828.2F6617A7@purplet.demon.co.uk>
Sender: use...@demon.co.uk
Lines: 45

* In message <D58qL3....@nntpa.cb.att.com>, Vivek Kalra said:

VK> I agree that the confusion is over the word allocate: What you are
VK> saying sounds to me not malloc() -- *m*emory *alloc*ation, mind you
VK> -- but a mere *promise* to *try* to allocate memory when actually
VK> used.  If malloc() returning a non-NULL had actually *allocated*
VK> the memory it said it had, it would never fail when said
VK> memory was actually used.

That's right. Malloc allows a process to define a region of its virtual 
memory space. It doesn't necessarily say anything about the underlying OS 
behaviour. It *can't* without defining *what* is actually underneath.

VK> As I said, not the smartest of codes lying around in a safe deposit
VK> box.  However, the point was that it was perfectly correct as far
VK> as the ANSI/ISO spec is concerned -- and yet it could fail simply
VK> because it trusted the return value of malloc().

If you accept that the code is correct and recognise that it can also still 
fail, you must accept that necessary checks are lacking. *However*, on a 
multiprocessing, virtual memory system you *cannot* rely on the OS making 
those checks - the OS does not know, for instance, the stack requirements of 
any process, and if a process says what it thinks it needs it may be wrong or 
even just plain lying. If you *need* those checks *you* must force them at 
the appropriate time. There is no other (workable) solution.
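
For illustration, one way of forcing the check at allocation time;
malloc_committed() is a hypothetical helper, and the touching loop can of
course still fault, as discussed above, if memory really has run out:

#include <stdlib.h>
#include <unistd.h>

/* malloc() and then dirty one byte per page, so that the OS must
   commit the pages now rather than at some arbitrary later point. */
void *malloc_committed(size_t len)
{
   long pagesize = sysconf(_SC_PAGESIZE);
   char *p = malloc(len);
   size_t i;

   if (pagesize <= 0)
      pagesize = 4096;                 /* conservative fallback */
   if (p != NULL)
      for (i = 0; i < len; i += (size_t)pagesize)
         p[i] = 0;
   return p;
}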

VK> In such a world, we might as well
VK> forget that the return value of malloc() has any meaning
VK> whatsoever.  And I, for one, am not going to be the one to
VK> say so in comp.lang.c or comp.std.c...  :-)

Malloc() has a return value for a damn good reason. Different parts of your 
code may need to malloc() and free() chunks of memory for various reasons. 
They do not necessarily know about each other (they may be library 
functions). They do not know what areas of memory are in use by something 
else at the time. malloc(), realloc(), calloc(), free() etc. coordinate this. 
This is even clearer when you consider an OOP language like C++.

  The return value of malloc() has to do with process (virtual) address 
space. It does not address the checks you require at all. If you need those 
checks you must implement them yourself in an environment dependent manner. 
Bottom line.

                                Mike  

Path: bga.com!news.sprintlink.net!howland.reston.ans.net!gatech!
newsfeed.pitt.edu!uunet!newsfeed.ACO.net!wsrcom.wsr.ac.at!wsrdb!hjp
From: h...@wsrdb.wsr.ac.at (Peter Holzer)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 15 Mar 1995 17:29:56 GMT
Organization: WSR, Vienna, Austria
Lines: 43
Message-ID: <3k786k$fc9@wsrcom.wsr.ac.at>
References: <828.2F6617A7@purplet.demon.co.uk>
NNTP-Posting-Host: wsrdb.wsr.ac.at

ja...@purplet.demon.co.uk (Mike Jagdis) writes:

>* In message <D58qL3....@nntpa.cb.att.com>, Vivek Kalra said:

>VK> I agree that the confusion is over the word allocate: What you are
>VK> saying sounds to me not malloc() -- *m*emory *alloc*ation, mind you
>VK> -- but a mere *promise* to *try* to allocate memory when actually
>VK> used.  If malloc() returning a non-NULL had actually *allocated*
>VK> the memory it said it had, it would never fail when said
>VK> memory was actually used.

>That's right. Malloc allows a process to define a region of it's virtual 
>memory space. It doesn't necessarily say anything about the underlying OS 
>behaviour. It *can't* without defining *what* is actually underneath.

But it does and it can. The standard says that malloc returns
a pointer to a region of memory SUITABLE FOR USE (interestingly
enough everybody quoting that standard in this thread omitted that
phrase). Now, I don't know how you define `use', but for me it means
dereferencing the pointer without any segmentation faults, and
the comp.std.c folks seem to agree with me (this has already been
discussed).

>VK> In such a world, we might as well
>VK> forget that the return value of malloc() has any meaning
>VK> whatsoever.  And I, for one, am not going to be the one to
>VK> say so in comp.lang.c or comp.std.c...  :-)

>Malloc() has a return value for a damn good reason. Different parts of your 
>code may need to malloc() and free() chunks of memory for various reasons. 

That's not what Vivek meant. Of course the return value of malloc is
needed. You couldn't even access the memory malloc allocated without it
(how do you know where it is?). What Vivek meant was that we could forget
that malloc can return NULL and that that return value has a special 
meaning.

	hp
--
   _  | Peter Holzer | h...@vmars.tuwien.ac.at | h...@wsr.ac.at
|_|_) |------------------------------------------------------
| |   |  ...and it's finished!  It only has to be written.
__/   |         -- Karl Lehenbauer

Path: bga.com!news.sprintlink.net!hookup!newshost.marcam.com!uunet!
in1.uu.net!chronos.synopsys.com!news.synopsys.com!jbuck
From: jb...@synopsys.com (Joe Buck)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 16 Mar 1995 02:17:33 GMT
Organization: Synopsys Inc., Mountain View, CA 94043-4033
Lines: 160
Message-ID: <3k873t$pm9@hermes.synopsys.com>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<bruce-2502950933310001@17.205.4.52> <3itc77$9lj@ninurta.fer.uni-lj.si> 
<3jju43$gc8@klaava.helsinki.fi>
NNTP-Posting-Host: deerslayer.synopsys.com

torva...@cc.Helsinki.FI (Linus Torvalds) writes:
>In the case of the memory handling, actually counting pages isn't that
>much of an overhead (we just have one resource, and one page is as good
>as any and they don't much depend on each other, so the setup is
>reasonably simple), but the usability factor is major. 

I agree that there is a penalty to be paid for implementing a reliable
malloc, and some may not wish to pay this penalty.  Those who never check
the return value of malloc in their programs have no right to complain
about the behavior.  But, for example, many digital logic optimization
algorithms have the potential to use very large amounts of space.
Typically these programs are written to grab as much memory as they can
from the O/S, and if the required amount of memory cannot be delivered,
less space-intensive but more time-intensive algorithms can be used
instead.  But you need to have malloc return 0 when it fails, and you
can't tolerate your program suddenly bombing because it doesn't really own
its memory, after 30 hours of crunching.  You want to grab as much memory
as you can and have all that you get.
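
As a self-contained sketch of that pattern -- the two algorithm variants
here are hypothetical stand-ins, and the point only holds if malloc()'s
NULL can be trusted:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void optimize_fast(void *workspace, size_t len)
{
   memset(workspace, 0, len);          /* pretend to use it all */
   puts("used the space-intensive algorithm");
}

static void optimize_small(void)
{
   puts("fell back to the time-intensive algorithm");
}

int main(void)
{
   size_t want = 64 * 1024 * 1024;     /* grab as much as we can */
   void *ws = malloc(want);

   if (ws != NULL) {
      optimize_fast(ws, want);
      free(ws);
   } else {
      optimize_small();
   }
   return 0;
}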

>As it stands, you can add these lines to your /etc/profile:
>
>	ulimit -d 8192
>	ulimit -s 2048
>
>and it will limit your processes to 8MB of data space, and 2MB of stack. 
>
>And no, it doesn't guarantee anything at all, but hey, your malloc()
>will return NULL. 

There's little point to doing this, as there is no guarantee.  The only
reason I'd want malloc to return NULL is so that I could believe it when
it did *not* return NULL.  ulimit might be a usable technique for batch mode
operation where only one large process is going to run.

>Personally, I consider the linux mm handling a feature, not a bug (hmm.. 
>I wrote it, so that's not too surprising).  People can whine all they
>want, but please at least send out patches to fix it at the same time. 
>You'll find that some people definitely do *not* want to use your
>patches. 

I agree: if you suddenly mandate that every page has guaranteed backing
store at the time of its allocation, those with small-memory, small-disk
systems would be unhappy.  But it seems to me that there is an
intermediate solution: it should be possible to have guaranteed backing
store on a per-process basis.  It would be possible to turn it on by
default for all processes (you'd want to do this if you want any hope
of POSIX certification, and those trying to pretend there's any ambiguity
about this are fooling themselves), off for all processes, or somewhere
in between.

>Handling malloc() together with fork() makes for problems, adding the
>problem of the stack space makes it worse, and adding the complexity of
>private file mappings doesn't help.

The usual solution for Unix systems is that malloc() is guaranteed to
return storage that belongs to the process; the process is only granted a
certain amount of guaranteed stack space and excessive stack use may cause
a fault (though once a stack page has been safely accessed it will not be
later stolen away by the system), private file mappings are not an issue,
and text sections of binaries and shared libraries don't count against the
total because they have their own backing store.

>  Before complaining, *please* think
>about at least the following example scenarios (and you're allowed to
>think up more of your own):

>1) a database process maps in a database privately into memory.  The
>   database is 32MB in size, but you only have 16MB free memory/swap. 
>   Do you accept the mmap()?

Yes.  Either it is a read-only mmap(), in which case backing store isn't
an issue, or else dirty pages go back to the file you're mapping in, so
again it is not an issue.  If you map in a file, the file itself provides
the backing store.

>2) GNU emacs (ugh) wants to start up a shell script.  In the meantime,
>   GNU emacs has (as it's wont to do) grown to 17 MB, and you obviously
>   don't have much memory left. Do you accept the fork?

This is a case where people will be pissed off if Linux does the "right
thing" as they'll need more swap space.  Some thoughts:

Let's assume that the kernel keeps a count of available pages (counting
physical memory plus swap) and that at the time where Emacs wants to fork,
there aren't enough pages.  We could say that if the Emacs process has the
"reliable backing store" flag on, and the pages can't be stolen from
processes that don't have this flag set, we don't allow the fork.  There's
another kludge I just thought of: a mode where the "reliable backing
store" bit gets turned off in the child process on a fork, and turned on again
during an exec.  The only risk is that if the child process dirties some
pages between its birth and the exec, there might not be backing store for
these.  But you'd still get an increase in reliability.

>3) you have a nice quiescent little program that uses about 100kB of
>   memory, and has been a good little boy for the last 5 minutes.  Now
>   it obviously wants to do something, so it forks 10 times.  Do we
>   accept it?

On each fork, there is either backing memory or there is not.

> - hell yes, we have 10MB free, and 10 forks of this program only uses
>   about a megabyte of that.  Go ahead. 

Right.

> - hell, no: what if this nice little program just tried to make us feel
>   secure, and after the forks turns into the Program From Hell (tm)? It
>   might get into a recursive loop, and start eating up stack space. 
>   Wheee..  Our 10MB of memory are gone in 5 seconds flat, and the OS is
>   left stranded wondering what the hell hit it. 

The programs would only be guaranteed some default amount of stack space.
No promises if they exceed it.

>4) You have a nice little 4MB machine, no swap, and you don't run X. 
>   Most programs use shared libraries, and everybody is happy.  You
>   don't use GNU emacs, you use "ed", and you have your own trusted
>   small-C compiler that works well.  Does the system accept this?
>
> - why, of course it does. It's a tight setup, but there's no panic.

Exactly.

> - NO, DEFINITELY NOT.  Each shared library in place actually takes up
>   600kB+ of virtual memory, and the system doesn't *know* that nothing
>   starts using these pages in all the processes alive.

But the shared libraries are paged against the libraries themselves, right?
Why do you need backing swap for them?  I'm concerned that you've ignored
other sources of backing store in the case of mmap and shared libraries.

>It's *not* easy to say no to malloc() when you have 8-32MB of memory,
>and about as much swap-space, and fork/mmap/etc works.  You can do it,
>sure, but you'd prefer a system that doesn't. 

I'd prefer a system where I could *choose*.  If I know that a program
never bothers to check malloc and can't do anything useful with the
information other than die, it doesn't much matter.  If I have a program
that's going to crunch for several days, uses lots of memory, and I don't
want it to die because my wife logged in and fired up the X server, I'd
like to be able to get some guarantees for it.

For those who've been bringing up the halting theorem: many real programs
have little or no recursion, and it's easy to determine an upper bound on
the stack depth.

Disk is now less than 50 cents a megabyte (and that's measured in
pitifully weak US currency).  It's no longer nearly as expensive as it
used to be to allocate a reasonable amount of backing store.  I recognize
that some people are stuck with small systems, but there's something to be
said for avoiding random crashes.

-- 
-- Joe Buck 	<jb...@synopsys.com>	(not speaking for Synopsys, Inc)
Phone: +1 415 694 1729

Path: bga.com!news.sprintlink.net!hookup!newshost.marcam.com!uunet!
news.tele.fi!news.csc.fi!news.helsinki.fi!not-for-mail
From: torva...@cc.Helsinki.FI (Linus Torvalds)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 16 Mar 1995 08:08:16 +0200
Organization: University of Helsinki
Lines: 149
Sender: torva...@cc.helsinki.fi
Message-ID: <3k8kkg$1us@klaava.helsinki.fi>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3itc77$9lj@ninurta.fer.uni-lj.si> <3jju43$gc8@klaava.helsinki.fi> 
<3k3i4c$341@vixen.cso.uiuc.edu>
NNTP-Posting-Host: klaava.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

In article <3k3i4c$...@vixen.cso.uiuc.edu>,
Steve Peltz <pe...@cerl.uiuc.edu> wrote:
>
>Doesn't an mmap'ed segment get swapped to the file itself (other than
>ANON)? Why would it need to reserve swap space?

Check out MAP_PRIVATE, which is actually the one that is used a lot more
than MAP_SHARED (and is the only form fully implemented under linux,
just for that reason). 

>>2) GNU emacs (ugh) wants to start up a shell script.  In the meantime,
>>   GNU emacs has (as it's wont to do) grown to 17 MB, and you obviously
>>   don't have much memory left. Do you accept the fork?
>
>So you have 17MB of swap space that you have to have free for a millisecond
>in order to fork a process from a huge process. Is that such a problem? It
>will be freed up almost immediately.

Are all people writing to this thread so arrogant?

"is it so hard to do?" "is 17MB of swapspace for a millisecond a
problem?" "Why can DOS do it and Linux not do it?" "Use vfork() instead
of fork()" etc etc..

It IS damned hard to do.

Using 17MB of swap-space is HORRIBLE on a PC.  I have around 500MB of
disk on my two linux-machines, and 17MB of that is noticeable.  Others
have *much* less. 

The next time somebody tells me "harddisks sell for 50c/MB", I'll
scream.  LINUX WAS NOT MEANT TO BE ANOTHER WINDOWS NT! YOU AREN'T
SUPPOSED TO NEED EXTRA RESOURCES TO RUN IT. 

People, PLEASE wake up!

It's NOT a good thing to require lots of memory and lots of disk. 

It IS a good thing to take full advantage of the available resources. 

Requiring swap backingstore by definition doesn't take full advantage of
your system resources. 

You pay the price, of course: linux uses your machine more efficiently,
but if you're running low on memory it means that you're walking on the
edge.  That's something I accept.  Take the good with the bad: there is
no free lunch. 

And please, don't WHINE. 

>> - vfork() isn't an option.  Trust me on this one.  vfork is *ugly*. 
>>   Besides, we might actually want to run the same process concurrently. 
>
>Actually, making the only difference between vfork and fork be whether
>swap space gets committed would be a pretty good solution (and don't
>worry about the other *ugly* parts of vfork, since it isn't implemented
>anyway you aren't breaking anything that isn't already broken). However,
>I am loathe to suggest actually making a use for vfork, as people would
>then use it, thus creating more inconsistency in the world.

Make up your mind: do you want to be safe, or don't you?

>>3) you have a nice quiescent little program that uses about 100kB of
>>   memory, and has been a good little boy for the last 5 minutes.  Now
>>   it obviously wants to do something, so it forks 10 times.  Do we
>>   accept it?
>
>Yes. In any scenario. I don't understand how this applies to the current
>problem. If one of the forked processes is unable to allocate more memory
>to do something, then it fails; if it is doing malloc, then it can detect
>the failure by the result, rather than getting a segment violation.

It *is* the current problem.

Remember, we aren't talking about 1 process, here.  If we were, the
problem would be as simple as it is under DOS, and I could *easily* make
linux return NULL on any memory allocation request that doesn't fit in
the current VM. 

However, we have a dynamic system running tens of active programs, some
of which have more importance for the user than others, but the kernel
doesn't know that and has no way of knowing.  Oh yes, you could try to
analyze the system, but then you'd have something that is slower than
Windows NT.. 

>>4) You have a nice little 4MB machine, no swap, and you don't run X. 
>>   Most programs use shared libraries, and everybody is happy.  You
>>   don't use GNU emacs, you use "ed", and you have your own trusted
>>   small-C compiler that works well.  Does the system accept this?
>
>Sure, if there's no swap, there's nothing to over-commit.

What? There's physical memory, and you sure as hell are overcommitting
that.  You're sharing pages left and right, which is why the system
still works perfectly well for you.  But those pages are mostly COW, so
you're really living on borrowed memory.  But it *works*, which is the
point. 

>> - NO, DEFINITELY NOT.  Each shared library in place actually takes up
>>   600kB+ of virtual memory, and the system doesn't *know* that nothing
>>   starts using these pages in all the processes alive.  Now, with just
>>   10 processes (a small make, and all the daemons), the kernel is
>>   actually juggling more than 6MB of virtual memory in the shared
>>   libraries alone, although only a fraction of that is actually in use
>>   at that time. 
>
>Shared libraries should not be writeable. Are you saying they are, and are
>C-O-W?

Yup, they're COW.  Dynamic linking etc means that you have to write at
least to the jump tables, and possibly do other fixups as well.  On the
other hand, most programs *won't* do any fixups, because they use the
standard C libraries and don't redefine "malloc()", for example.  So you
want to be able to share the pages, but on the other hand you want to
have the possibility of modifying them on a per-process basis.  COW. 

>Whenever a writeable non-mmap'ed non-shared segment is allocated to the
>address space (whether by sbrk, mmap with ANON, or fork), each such page
>needs to have space reserved out of swap. Linus, you talk about deadlock -
>deadlock can only occur when you actually try to prevent errors caused by
>overcommitment of resources.

Right. And we are overcommitting our resources, and rather heavily at
that. 

Why? Simply because it results in a usable system, which wouldn't be
usable otherwise. 

>			 Causing an error due to such overcommitment
>is not what is usually meant by deadlock avoidance. Deadlock is what might
>happen if a process were to be suspended when it tries to access memory that
>is not actually available after it has been allocated (and, since Unix
>doesn't have any sort of resource utilization declarations to be declared
>by a program, deadlock avoidance can not be done at the system level).

I have some dim idea what deadlock means, and what linux gets into when
running low on memory IS a form of dead-lock, sometimes called
"livelock".  The processes aren't suspended per se, but are in an
eternal loop fighting for resources (memory, in this case).  The kernel
tries to resolve it, and eventually will probably kill one of the
programs, but yes, it's a deadlock situation once we've overextended our
VM. 

It's easy to say "don't overextend", but what I'm trying to make clear
is that it's not even *close* to easy to actually avoid it.  And it's
impossible to avoid it if you want to keep the good features of the
linux memory management. 

			Linus

Path: bga.com!news.sprintlink.net!hookup!news.mathworks.com!panix!not-for-mail
From: stimp...@panix.com (S. Joel Katz)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 16 Mar 1995 02:18:09 -0500
Organization: PANIX Public Access Internet and Unix, NYC
Lines: 35
Message-ID: <3k8onh$aji@panix3.panix.com>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<bruce-2502950933310001@17.205.4.52> <3itc77$9lj@ninurta.fer.uni-lj.si> 
<3jju43$gc8@klaava.helsinki.fi> <3k873t$pm9@hermes.synopsys.com>
NNTP-Posting-Host: panix3.panix.com

In <3k873t$...@hermes.synopsys.com> jb...@synopsys.com (Joe Buck) writes:

>torva...@cc.Helsinki.FI (Linus Torvalds) writes:
>>In the case of the memory handling, actually counting pages isn't that
>>much of an overhead (we just have one resource, and one page is as good
>>as any and they don't much depend on each other, so the setup is
>>reasonably simple), but the usability factor is major. 

>I agree that there is a penalty to be paid for implementing a reliable
>malloc, and some may not wish to pay this penalty.  Those who never check
>the return value of malloc in their programs have no right to complain
>about the behavior.  But, for example, many digital logic optimization
>algorithms have the potential to use very large amounts of space.
>Typically these programs are written to grab as much memory as they can
>from the O/S, and if the required amount of memory cannot be delivered,
>less space-intensive but more time-intensive algorithms can be used
>instead.  But you need to have malloc return 0 when if fails, and you
>can't tolerate your program suddenly bombing because it doesn't really own
>its memory, after 30 hours of crunching.  You want to grab as much memory
>as you can and have all that you get.

	These programs are broken. No process should ever keep grabbing 
memory until it fails under a multi-user OS. They should, at a minimum, 
allow you to specify some maximum memory consumption.

	It is absurd to break efficient OS behavior to fix broken 
applications. I don't buy this argument.

	No, if you want to use this to argue that there should be some 
RLIMIT that causes malloc to fail, ...

-- 

S. Joel Katz           Information on Objectivism, Linux, 8031s, and atheism
Stimp...@Panix.COM     is available at http://www.panix.com/~stimpson/

Path: bga.com!news.sprintlink.net!howland.reston.ans.net!Germany.EU.net!
nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!news.uni-stuttgart.de!
rz.uni-karlsruhe.de!i13a6.ira.uka.de!rogina
From: rog...@ira.uka.de (Ivica Rogina)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 17 Mar 1995 11:42:14 GMT
Organization: OrgFreeware
Lines: 14
Sender: rog...@i13a6.ira.uka.de (Ivica Rogina)
Distribution: world
Message-ID: <3kbsim$3mi@nz12.rz.uni-karlsruhe.de>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<bruce-2502950933310001@17.205.4.52> <3itc77$9lj@ninurta.fer.uni-lj.si> 
<3jju43$gc8@klaava.helsinki.fi> <3k873t$pm9@hermes.synopsys.com> 
<3k8onh$aji@panix3.panix.com>
Reply-To: rog...@ira.uka.de (Ivica Rogina)
NNTP-Posting-Host: i13a6.ira.uka.de
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit


stimp...@panix.com (S. Joel Katz) wrote:

> 	These programs are broken. No process should ever keep grabbing 
> memory until it fails under a multi-user OS. They should, at a minimum, 
> allow you to specify some maximum memory consumption.

No, these programs are not broken. What do you mean by "fail"? The programs
don't fail, the memory allocation does, but that's what they want.
These programs are perfectly ANSI compliant and do nothing wrong.
And what does that have to do with a maximum per-process memory consumption?
That's what (u)limit is for; this is completely orthogonal to sbrk().

Ivica

Newsgroups: comp.os.linux.development.apps
From: ja...@purplet.demon.co.uk (Mike Jagdis)
Path: bga.com!news.sprintlink.net!pipex!peernews.demon.co.uk!purplet!jaggy
Subject: Re: Linux is 'creating' memory ?!
Organization: FidoNet node 2:252/305 - The Purple Tentacle, Reading
X-Posting-Host: purplet.demon.co.uk
Date: Fri, 17 Mar 1995 22:26:00 +0000
Message-ID: <831.2F6A10B3@purplet.demon.co.uk>
Sender: use...@demon.co.uk
Lines: 49

* In message <3k786k$...@wsrcom.wsr.ac.at>, Peter Holzer said:

PH> But it does and and it can. The standard says that malloc returns
PH> a pointer to a region of memory SUITABLE FOR USE
PH> (interestingly
PH> enough everybody quoting that standard in this thread omitted that
PH> phrase). Now, I don't know how you define `use', but for me it means
PH> dereferencing the pointer without any segmentation faults, and
PH> the comp.std.c folks seem to agree with me (this has already
PH> been discussed).

I don't know how you define "region of memory" either :-). Does it mean the 
virtual system the process is running in or does it imply definitions and 
limitations on the underlying system?

  The thing people seem to be complaining about is that their programs may 
receive a signal unexpectedly when the number of free physical pages (RAM + 
swap) is *extremely* tight.

  But these conditions cause such behaviour anyway. Consider: you do a 
(safe) malloc. You read data from, say, a network connection which blocks. 
In the meantime another process touches an automatic variable causing 
another stack page to be allocated. There is only one page left.

  Has the safe malloc claimed it? If so the process extending its stack 
dies. If not the process that did the malloc dies (unless the other process 
is about to free memory, exit or similar...). Either way you have reached 
starvation and something has to give.

  Doing an early commit of pages simply means you reach that starvation 
point sooner and have to deal with it even though transient memory usage may 
have meant that you could have carried on. Linux simply tries longer and 
harder to do what you asked it to.

  If you can come up with a provably correct method of avoiding the problem 
then by all means add it to the kernel - but as an option, please! Someone 
else in this thread said that he had no problem with early commit of pages 
since disk space was cheap. I, however, don't have space on either of my 
ancient ESDI drives and can't afford to upgrade (new motherboard and memory 
first at the very least). Without Linux's lazy allocation I would never have 
been able to do any useful work - never mind develop iBCS!

                                Mike

P.S. Saying that early commit is needed to allow available memory to be 
probed with malloc(BIG_NUMBER) is silly. If you have early commit then 
trying to grab excessive chunks is quite likely to cause other programs to 
fail due to starvation!  

Newsgroups: comp.os.linux.development.apps
Path: bga.com!news.sprintlink.net!pipex!uunet!in1.uu.net!news.erinet.com!
netcom.com!jqb
From: j...@netcom.com (Jim Balter)
Subject: Re: Linux is 'creating' memory ?!
Message-ID: <jqbD5pK8C.FM5@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3k8onh$aji@panix3.panix.com> <3kbsim$3mi@nz12.rz.uni-karlsruh 
<3kdnq1$equ@panix3.panix.com>
Date: Sun, 19 Mar 1995 22:09:48 GMT
Lines: 37
Sender: j...@netcom23.netcom.com

In article <3kdnq1$...@panix3.panix.com>,
S. Joel Katz <stimp...@panix.com> wrote:
>In <3kbsim$...@nz12.rz.uni-karlsruhe.de> rog...@ira.uka.de (Ivica Rogina) writes:
>
>
>>stimp...@panix.com (S. Joel Katz) wrote:
>
>>> 	These programs are broken. No process should ever keep grabbing 
>>> memory until it fails under a multi-user OS. They should, at a minimum, 
>>> allow you to specify some maximum memory consumption.
>
>>No, these programs are not broken. What do you mean "fail". The programs
>>don't fail, the memory allocation does, but that's what they want.
>>These programs are perfectly ANSI compliant and do nothing wrong.
>>And, what does that have to do with a maximum per-process memory consumption.
>>Thats what (u)limit is for, this is completely orthogonal to sbrk().
>
>	Fail means get a NULL return in this context. Under ANY 
>multi-user OS, mallocing until you get a failure is a mistake. ANSI 
>permits it, but ANSI compliant doesn't mean non-broken. You standards 
>bigots make me sick.

So the fact that it *works* on conforming systems still makes it a mistake?
You irrational blowhards make *me* sick.  "standards bigots" are people who
think it makes sense to save vast amounts of time and money in development
costs by guaranteeing some level of consistency across systems.

>	I repeat, any program that keeps malloc'ing until it gets a NULL 
>is broken, though ANSI compliant, for any OS other than a single-user 
>non-VM one.

This is an unsupported, groundless claim.


-- 
<J Q B>

Path: bga.com!news.sprintlink.net!hookup!newshost.marcam.com!news.kei.com!
travelers.mail.cornell.edu!tuba.cit.cornell.edu!crux5!sl14
From: s...@crux5.cit.cornell.edu (S. Lee)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 20 Mar 1995 06:49:41 GMT
Organization: Nekomi Institute of Technology
Lines: 30
Message-ID: <3kj8i5$qld@tuba.cit.cornell.edu>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3kbsim$3mi@nz12.rz.uni-karlsruh <3kdnq1$equ@panix3.panix.com> 
<jqbD5pK8C.FM5@netcom.com>
NNTP-Posting-Host: 128.253.232.67

In article <jqbD5pK8C....@netcom.com>, Jim Balter <j...@netcom.com> wrote:
>In article <3kdnq1$...@panix3.panix.com>,
>S. Joel Katz <stimp...@panix.com> wrote:
>>
>>	Fail means get a NULL return in this context. Under ANY 
>>multi-user OS, mallocing until you get a failure is a mistake. ANSI 
>>permits it, but ANSI compliant doesn't mean non-broken. You standards 
>>bigots make me sick.
>
>So the fact that it *works* on conforming systems still makes it a mistake?
>You irrational blowhards make *me* sick.  "standards bigots" are people who
>think it makes sense to save vast amounts of time and money in development
>costs by guaranteeing some level of consistency across systems.

As much as I disagree with SJK, let's watch ourselves and not make this
thread a flamefest.

I understand that we're paying Linus $0.00 here, so let's not *expect* him
to fix it (but it would be *really* nice if he did :).  One day when
this irritates me enough I'll fix it myself (or pay somebody to).

Or we can try to corrupt Linus "don't tell me HD is $0.5 a meg" by sending
him a 2GB harddisk...

BTW, the price of HD space is actually closer to US$0.33 a meg these days.

Stephen
--
s...@cornell.edu
Witty .sig under construction.

Newsgroups: comp.os.linux.development.apps
Path: bga.com!news.sprintlink.net!howland.reston.ans.net!xlink.net!
freinet.de!kroete2!erik
From: e...@kroete2.freinet.de (Erik Corry)
Subject: Re: Linux is 'creating' memory ?!
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<5715@artcom0.north.de> <bruce-2502950933310001@17.205.4.52> 
<3itc77$9lj@ninurta.fer.uni-lj.si> <3jju43$gc8@klaava.helsinki.fi>
Organization: Home (Freiburg)
Date: Sun, 19 Mar 1995 17:36:25 GMT
X-Newsreader: TIN [version 1.2 PL2]
Message-ID: <D5p7Kp.50@kroete2.freinet.de>
Lines: 51

There does seem to be a solution to this huge discussion. My
proposal is:

a) Linux has 2 modes: safe and cheap. In safe mode no overcommitting
   is done. Cheap mode is the current situation.

b) In safe mode you cannot use mem+swap for your memory, but have
   only swap as the maximum usable memory. This means you can
   only move to safe mode once swapping has started. This may
   not be strictly necessary, but it solves the problem of
   what to do if a low-on-memory system maps a file and then
   accesses big parts of it. It is also very common on other
   Unixes.

c) Even in safe mode a program can be killed if it extends its
   stack. The exception to this is if the program has explicitly
   set a limit to the size of its stack. In that case this amount
   of stack space is guaranteed. Whether this should be done
   using the existing limit-setting functions is a point to be
   decided. This would mean that limiting the stack size can
   fail with an ENOMEM!

d) Even in safe mode a program writing to its text segment can
   run out of memory. This only happens at program start in most
   programs I think (linkage fixup etc.).

e) In safe mode, fork will fail if there is no space for the
   data segments to be there twice. This does not apply to the
   text segment, but see point d). Vfork could succeed even if
   there is not enough space for data twice, but this would not
   be the same as the vfork on BSD Unixes. Just a reuse of the
   name vfork. A program using vfork would have to accept that
   it could crash if it dirties a lot of pages before execing.

f) On exec, space is reserved for shared writable data segments.
   This means that things like the large tables in xterm do
   not come for free anymore, even though they are rarely
   altered. Actually, I think they are never altered, in
   which case they should be declared const.

DEC OSF/1 works something like this, i.e. there are two modes.
I think almost everyone will use the cheap mode, not the safe
mode. The question is whether anyone can be bothered to
implement safe mode under these circumstances. Note that safe
mode would probably be a lot better than what most Unixes offer,
but would require large amounts of never-used swap space.

I think this deals with the examples Linus came up with.

-- 
Erik Corry, Freiburg, Germany, +49 761 406637 e...@kroete2.freinet.de

Newsgroups: comp.os.linux.development.apps
Path: nntp.gmd.de!dearn!blekul11!ccsdec1.ufsia.ac.be!reks.uia.ac.be!
idefix.CS.kuleuven.ac.be!
 Belgium.EU.net!EU.net!howland.reston.ans.net!xlink.net!freinet.de!
kroete2!erik
From: e...@kroete2.freinet.de (Erik Corry)
Subject: Re: Linux is 'creating' memory ?!
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3itc77$9lj@ninurta.fer.uni-lj.si> <3jju43$gc8@klaava.helsinki.fi> 
<D5p7Kp.50@kroete2.freinet.de> <3kmnhh$9pg@usenet.srv.cis.pitt.edu>
Organization: Home (Freiburg)
Date: Wed, 22 Mar 1995 02:45:26 GMT
X-Newsreader: TIN [version 1.2 PL2]
Message-ID: <D5tMBr.192@kroete2.freinet.de>
Lines: 112

Doug DeJulio (d...@pitt.edu) wrote:
: In article <D5p7Kp...@kroete2.freinet.de>,
: Erik Corry <e...@kroete2.freinet.de> wrote:
: >There does seem to be a solution to this huge discussion. My
: >proposal is:
: >
: >a) Linux has 2 modes: safe and cheap. In safe mode no overcommitting
: >   is done. Cheap mode is the current situation.

: (sounds good)

: >b) In safe mode you cannot use mem+swap for your memory, but have
: >   only swap as the maximum usable memory. This means you can
: >   only move to safe mode once swapping has started. This may
: >   not be strictly necessary, but it solves the problem of
: >   what to do if a low-on-memory system maps a file and then
: >   accesses big parts of it. It is also very common on other
: >   Unixes.

: I don't see why this is necessary.  This step could be left out.  One
: could use a "safe" system with no swapfile at all, but it would not
: access some parts of RAM if you did things that way.

If it doesn't access the extra RAM at all, then what effect does
it have? If it only accesses the RAM under specific circumstances
then I would like to know what they are. Maybe we have finally found
a use for swapping to a ramdisk! :-)

: >c) Even in safe mode a program can be killed if it extends its
: >   stack. The exception to this is if the program has explicitly
: >   set a limit to the size of its stack. In that case this amount
: >   of stack space is guaranteed. Whether this should be done
: >   using the existing limit-setting functions is a point to be
: >   decided. This would mean that limiting the stack size could
: >   fail with ENOMEM!

: (sounds good)

: >d) Even in safe mode a program writing to its text segment can
: >   run out of memory. This only happens at program start in most
: >   programs I think (linkage fixup etc.).

: No program should ever write its text segment under any circumstances.
: If stuff needs to be written, put it in the data segment or somewhere
: else writable.  Then that space will be allocated just like any other
: memory, and the failure will occur upon the fork(), not when you dirty
: the page.

The shared library implementation needs this, and you shouldn't
dismiss it out of hand. As I said, this mostly happens at exec time
anyway, so it shouldn't really matter. There are lots of reasons why
a program could fail during exec.

Also, what about dynamically loading shared libraries?

I think my suggestion is a good compromise. You are arguing that a write
to the text area should always cause a segmentation fault - I say only
when you have run out of memory.
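
For the dynamic-loading case, the failure can at least be reported
cleanly through the usual interface. A minimal sketch (the library
name is made up, and this reporting behaviour is what I am proposing,
not necessarily what happens today):

        #include <stdio.h>
        #include <dlfcn.h>

        int main(void)
        {
                /* If space for the library's fixed-up pages cannot be
                 * reserved, dlopen() can return NULL here instead of
                 * the process segfaulting later. */
                void *h = dlopen("libfoo.so", RTLD_NOW); /* made-up name */

                if (h == NULL) {
                        fprintf(stderr, "dlopen: %s\n", dlerror());
                        return 1;
                }
                dlclose(h);
                return 0;
        }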

: >e) In safe mode, fork will fail if there is no space for the
: >   data segments to be there twice....

: (sounds good)

: >f) On exec, space is reserved for shared writable data segments.

: This sounds like a restatement of (e) to me.  Yah, I'd call this
: necessary.

The small difference is that here I am talking about several copies
of the same program running, whereas before I was talking about
the time between fork and exec. OK, I admit it, it was a restatement.
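
To make point e) concrete: in safe mode the standard check on fork()
would actually mean something, because the failure happens at the
call instead of at some later page fault. A minimal sketch:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        int main(void)
        {
                pid_t pid = fork();

                if (pid == -1) {
                        /* safe mode: no room for the data segment twice */
                        perror("fork");
                        exit(1);
                }
                if (pid == 0)
                        _exit(0);       /* child: exec or exit promptly */
                wait(NULL);
                return 0;
        }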

:   -----

: Yes, I agree most home hackers won't use safe mode.  Not all Linux
: systems are used by hackers tinkering at home though.

: If you're running an internet service provider on a group of Linux
: boxes, or using them as compute servers at a University, or running
: simulations on them in a scientific lab, or doing databases for a
: multi-billion dollar bank, you'd probably want safe mode.  I'd just
: like to see the option for these folks to use Linux without running
: big risks, that's all.

I think hardly anybody will use the safe mode. Maybe I should have
called it 'conservative mode'. If you are running one of these
applications then it is a major disaster when you run out of memory.
I don't think things will be much better in conservative mode, since
hardly any program is written to avoid running out of memory when the
stack grows. Using my suggestion you could rewrite the program
(assuming it has a finite maximum stack size) as sketched below, but
let's face it, nobody will.
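
The rewrite would look something like this (a sketch only; the
ENOMEM failure mode is what I am proposing, not what setrlimit
does today):

        #include <stdio.h>
        #include <sys/time.h>
        #include <sys/resource.h>

        int main(void)
        {
                struct rlimit rl;

                rl.rlim_cur = 512 * 1024;  /* guarantee a 512kB stack */
                rl.rlim_max = 512 * 1024;

                /* Under the proposal this call could fail with ENOMEM
                 * if the kernel cannot reserve the space up front. */
                if (setrlimit(RLIMIT_STACK, &rl) != 0) {
                        perror("setrlimit");
                        return 1;
                }
                /* ... up to 512kB of stack is now guaranteed ... */
                return 0;
        }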

Most non-hacker users will use cheap mode too, because they do not want
to waste their precious disk. If you make the swapspace so big that the
system runs properly in safe/conservative mode, then you could just
as easily use cheap mode. Your swap space will be so much larger than
your main memory that you will probably never have any problems anyway.
Do you know how well a machine runs when four or five times physical
memory is in use? Miserably.

Linus or somebody will probably be forced to do something about this
anyway just to shut up the moaners and the people who say 'My program
segfaulted and I think it's caused by a bug in Linux'. Not to mention
those who come running with a copy of the C Standard and insist that
their program crash _this_ way and not _that_ way when VM is exhausted. As we've
seen in this thread, Linux is not the only Unix that does things like
this, but it will be used as a stick to bash Linux and Linus with.

--
Erik Corry, Freiburg, Germany, +49 761 406637 e...@kroete2.freinet.de

Newsgroups: comp.os.linux.development.apps
From: ja...@purplet.demon.co.uk (Mike Jagdis)
Path: nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!
news.uni-stuttgart.de!rz.uni-karlsruhe.de!xlink.net!howland.reston.ans.net!
pipex!peernews.demon.co.uk!purplet!jaggy
Subject: Re: Linux is 'creating' memory ?!
Organization: FidoNet node 2:252/305 - The Purple Tentacle, Reading
X-Posting-Host: purplet.demon.co.uk
Date: Thu, 23 Mar 1995 21:03:00 +0000
Message-ID: <832.2F71F21E@purplet.demon.co.uk>
Sender: use...@demon.co.uk
Lines: 27

* In message <D5tMBr....@kroete2.freinet.de>, Erik Corry said:

EC> Linus or somebody will probably be forced to do something
EC> about this
EC> anyway just to shut up the moaners and the people who say
EC> 'My program
EC> segfaulted and I think it's caused by a bug in Linux'. Not

Why on earth don't the moaners do something about it since they all seem to 
think it is so trivially easy? Maybe those who claim disk space is so cheap 
we can all afford it would like to prove it by supplying us all with some 
too :-).

  Actually it seems the best general-case solution would be to add something 
like:

        printk(KERN_ERR "catastrophic lack of RAM+swap"
                " - killing processes\n");

at the point where the kernel is forced to fault a process after a failed 
get_free_page().

  I suspect that no one who thinks "safe" mallocing is unnecessary is going 
to go to great lengths to do more...

                                Mike  

Path: nntp.gmd.de!news.rwth-aachen.de!news.rhrz.uni-bonn.de!
news.uni-stuttgart.de!rz.uni-karlsruhe.de!xlink.net!howland.reston.ans.net!
pipex!sunic!sunic.sunet.se!news.funet.fi!news.csc.fi!news.helsinki.fi!
not-for-mail
From: torva...@cc.Helsinki.FI (Linus Torvalds)
Newsgroups: comp.os.linux.development.apps
Subject: Re: Linux is 'creating' memory ?!
Date: 23 Mar 1995 22:16:30 +0200
Organization: University of Helsinki
Lines: 16
Sender: torva...@cc.helsinki.fi
Message-ID: <3kskuu$g27@klaava.helsinki.fi>
References: <1995Feb7.172606.5784@tudedv.et.tudelft.nl> 
<3kdnq1$equ@panix3.panix.com> <jqbD5pK8C.FM5@netcom.com> 
<3kj8i5$qld@tuba.cit.cornell.edu>
NNTP-Posting-Host: klaava.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

In article <3kj8i5$...@tuba.cit.cornell.edu>,
S. Lee <s...@crux5.cit.cornell.edu> wrote:
>
>Or we can try to corrupt Linus "don't tell me HD is $0.5 a meg" by sending
>him a 2GB harddisk...

Oh, I'm totally incorrigible^H^H^H^H^H^Hcorruptable.  I actually have
more than 3GB of harddisk in my two machines if you count the alpha DEC
gave me.  It's not really a question of harddisk space for me.. 

>BTW, the price of HD space is actually closer to US$0.33 a meg these days.

It's a lot lower if you can get large companies to send you hardware for
free.. ;-)

		Linus