From: wolff@neuron.et.tudelft.nl (Rogier Wolff)
Subject: Linux source code reductions necessary or not?
Date: 13 Mar 92 10:40:46 GMT
Reply-To: wolff@neuron.et.tudelft.nl (Rogier Wolff)


Hi everyone,

Let's recapitulate a little software engineering:

Source code costs money to maintain. In the case of Linux we are all
investing our time (= money) in it. The costs can be expressed
in a formula like:

                 hardness * size
    costs = C *  ---------------
                    quality

Where
        C is a constant factor,
        hardness is high for system programming and low for trivial applications,
        size is the size of the source code in lines,
and     quality is a measure of the quality of the programming.

In the case of Linux, size is still quite modest (13000 lines), hardness is
high (we are dealing with an OS), and quality is high (good programming, Linus!).

To reduce maintenance costs, you can influence two parameters in this
equation: the size and the quality. Quality is a very hard parameter
to influence, other than by trying to keep it as high as possible.
The size parameter, however, is easy to influence in some cases:

For instance, block_read () and block_write () are almost completely
identical, yet they are separate routines. If they are merged, the
complexity of the code will increase slightly (decreasing quality a
little), but the size of the source code will decrease significantly
(in this section of the code).
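
A purely illustrative trade-off calculation (the numbers are invented;
the constant C and the hardness factor cancel out): suppose such a merge
removes 1000 of the 13000 lines at the price of a 2% drop in quality.
Then

                   (13000 - 1000) / (0.98 * quality)     12000
    cost ratio  =  ---------------------------------  =  -----  ~  0.94
                           13000 / quality               12740

i.e. roughly a 6% reduction in cost, so even a modest size reduction
more than pays for a small loss in quality.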

That the maintenance cost of such "identical" routines really is higher
than that of a single, smaller routine can be demonstrated with an example:
In block_write () some local variables are declared as:

        unsigned int block = filp->f_pos >> BLOCK_SIZE_BITS;
        unsigned int offset = filp->f_pos & (BLOCK_SIZE-1);
        unsigned int chars;
        unsigned int size;

and in block_read () as:

        int block = filp->f_pos >> BLOCK_SIZE_BITS;
        int offset = filp->f_pos & (BLOCK_SIZE-1);
        int chars;
        int size;

I suspect that this is not what was intended: someone corrected the
block_write () case, but not block_read (). Similar pieces of code
can be found for reading/writing character devices.

I propose to merge these very similar routines, and reduce the code
size, being careful not to increase complexity too much.
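
As a very rough sketch of what such a merge could look like (this is a
stand-alone illustration, not a patch against the real kernel routines:
the struct file below, the flat `data' array and the memcpy() calls
stand in for the real buffer-cache and user-space copying code), one
routine could carry the direction as a flag:

/* Sketch only: a single helper that takes the transfer direction as a
 * flag, with thin block_read()/block_write() wrappers on top. */

#include <string.h>

#define BLOCK_SIZE      1024
#define BLOCK_SIZE_BITS 10

#define READ  0
#define WRITE 1

struct file {                  /* simplified stand-in for the real struct file */
        long f_pos;
        char *data;            /* pretend the device is one flat array */
};

static int block_rw(int rw, struct file *filp, char *buf, int count)
{
        int done = 0;

        while (count > 0) {
                /* the block/offset arithmetic is now written down once */
                unsigned int block  = filp->f_pos >> BLOCK_SIZE_BITS;
                unsigned int offset = filp->f_pos & (BLOCK_SIZE-1);
                unsigned int chars  = BLOCK_SIZE - offset;
                char *dev = filp->data + (block << BLOCK_SIZE_BITS) + offset;

                if (chars > (unsigned int) count)
                        chars = count;

                /* the only place where the direction matters */
                if (rw == WRITE)
                        memcpy(dev, buf, chars);
                else
                        memcpy(buf, dev, chars);

                filp->f_pos += chars;
                buf += chars;
                count -= chars;
                done += chars;
        }
        return done;
}

int block_read(struct file *filp, char *buf, int count)
{
        return block_rw(READ, filp, buf, count);
}

int block_write(struct file *filp, char *buf, int count)
{
        return block_rw(WRITE, filp, buf, count);
}

The real routines would of course still differ in how they get, fill and
release buffers, so a merged version needs a few "if (rw == WRITE)"
branches; the question is simply whether those branches cost less to
maintain than two copies of the surrounding loop.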


                                                Roger
-- 
If the opposite of "pro" is "con", what is the opposite of "progress"? 
        (stolen from  kadokev@iitvax ==? technews@iitmax.iit.edu)
EMail:  wolff@duteca.et.tudelft.nl   ** Tel  +31-15-783644 or +31-15-142371

From: db1@ukc.ac.uk (D.Bolla)
Subject: Re: Linux source code reductions necessary or not?
Date: 13 Mar 92 15:17:03 GMT
Reply-To: db1@ukc.ac.uk (Damiano Bolla)

In article <1992Mar13.104046.27085@donau.et.tudelft.nl> 
wolff@neuron.et.tudelft.nl (Rogier Wolff) writes:

>I propose to merge these very similar routines, and reduce the code
>size, being careful not to increase complexity too much.
Yes, it is good in theory :-)
As usual, there are other things that I consider more important:
1) Organize the FTP sites so that there are different subtrees for
   different releases of Linux.
2) Implement the ioctl to change the VGA mode (for X11); a purely
   hypothetical sketch of the user side follows this list.
3) Have a minimal socket library (for X11).
4) Modularize the kernel install so you can select which parts
   you want
   (SCSI driver, Ethernet driver (in the future), and all the other
   rubbish you can think of).
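
As a purely hypothetical illustration of the kind of interface meant in
point 2 (the request name VGA_SETMODE, the mode constants and the
request number are all invented here and are not an existing kernel
interface), the user side could look roughly like this:

/* Purely hypothetical sketch: VGA_SETMODE and the mode constants do not
 * exist in the current kernel; they only illustrate the kind of call an
 * X11 server could make before taking over the display. */

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#define VGA_SETMODE   0x4601    /* invented request number            */
#define VGA_MODE_TEXT 0         /* invented: back to the text console */
#define VGA_MODE_GFX  1         /* invented: graphics mode for X11    */

int main(void)
{
        int fd = open("/dev/console", O_RDWR);

        if (fd < 0) {
                perror("open /dev/console");
                return 1;
        }
        /* Ask the kernel to switch the adaptor into graphics mode; the
         * server would then program the VGA registers itself. */
        if (ioctl(fd, VGA_SETMODE, VGA_MODE_GFX) < 0)
                perror("ioctl VGA_SETMODE");

        close(fd);
        return 0;
}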

Let's create a framework that is good enough to stay the same for at least
one year :-)  Otherwise we will have to put in a LOT more work to keep track
of what is going on!!

BTW: I think that writing to kernel memory is bad in any case.
     Another example of something that is bad in any case is writing to
     a directory directly.  It can be argued that allowing root to write
     a directory directly (using xvi) could be a good thing, but this is
     prevented by the kernel.  Would you like to edit a directory
     "on the fly" too ? :-) :-) <- Smile
     No flames, please. :-)

Damiano

From: tytso@ATHENA.MIT.EDU (Theodore Ts'o)
Subject: Re: Linux source code reductions necessary or not?
Reply-To: tytso@athena.mit.edu
Date: Sat, 14 Mar 1992 00:15:09 GMT

   From: db1@ukc.ac.uk (D.Bolla)
   Date: 13 Mar 92 15:17:03 GMT
   Reply-To: db1@ukc.ac.uk (Damiano Bolla)

   1) Organize the FTP sites so that there are different subtrees for
      different releases of Linux.

The problem with this suggestion is that there are many things which
will work for multiple releases, and it's painful to have to figure out
whether a given binary should be considered Linux 0.12, or 0.13, or
0.11.  For example, the gcc which I have been using up until very
recently was the original GCC compiler that was released for 0.10.

It doesn't make sense to ask each FTP site maintainer to make their own
judgements about what binary works for which version of Linux.  I (at
the very least) do not have that kind of time on my hands.

I do recognize the need for what you are requesting, and I have been
pinning my hopes on the "ABC Release" of Linux, which will hopefully
have everything bundled up for people to use.  There will still be a
place for the more chaotic and uncontrolled method of FTP distribution,
for people who are willing to live on the bleeding edge of technology.
But for most people, the "ABC Release" should make it much easier to put
together a release.  

Of course, I suspect that the "ABC Release" will lag a bit when compared
to Linus's release of the sources, but if you're really impatient, you
can either do it yourself or pay someone enough money that they feel
like doing it for you on your schedule.  Keep in mind, Linux is free
software, and that means that while suggesting that someone might
want to do *X* can be productive, demanding that people do things
just because it makes things more convenient for *you*, at possibly
great time and effort for *them*, doesn't necessarily go over very well.
:-)

                                        - Ted