Path: archiver1.google.com!news1.google.com!sn-xit-03!sn-xit-01!sn-xit-09!
supernews.com!news.maxwell.syr.edu!news.kiev.sovam.com!Svitonline.COM!
carrier.kiev.ua!not-for-mail
From: Andrew Morton <a...@digeo.com>
Newsgroups: lucky.linux.kernel
Subject: IO scheduler benchmarking
Date: Fri, 21 Feb 2003 05:25:24 +0000 (UTC)
Organization: unknown
Lines: 50
Sender: n...@horse.lucky.net
Approved: newsmas...@lucky.net
Message-ID: <20030220212304.4712fee9.akpm@digeo.com.lucky.linux.kernel>
NNTP-Posting-Host: horse.carrier.kiev.ua
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
X-Trace: horse.lucky.net 1045805125 38667 193.193.193.118 (21 Feb 2003 05:25:25 GMT)
X-Complaints-To: usenet@horse.lucky.net
NNTP-Posting-Date: Fri, 21 Feb 2003 05:25:25 +0000 (UTC)
X-Mailer: Sylpheed version 0.8.9 (GTK+ 1.2.10; i586-pc-linux-gnu)
X-OriginalArrivalTime: 21 Feb 2003 05:21:26.0457 (UTC) FILETIME=[14DC4A90:01C2D969]
X-Mailing-List: 	linux-kernel@vger.kernel.org


Following this email are the results of a number of tests of various I/O
schedulers:

- Anticipatory Scheduler (AS) (from 2.5.61-mm1 approx)

- CFQ (as in 2.5.61-mm1)

- 2.5.61+hacks (Basically 2.5.61 plus everything before the anticipatory
  scheduler - tweaks which fix the writes-starve-reads problem via a
  scheduling storm)

- 2.4.21-pre4

All these tests are simple things from the command line.

I stayed away from the standard benchmarks because they do not really touch
on areas where the Linux I/O scheduler has traditionally been bad.  (If they
did, perhaps it wouldn't have been so bad..)

Plus all the I/O schedulers perform similarly with the usual benchmarks,
with the exception of some tiobench phases, where AS does very well.

Executive summary: the anticipatory scheduler is wiping the others off the
map, and 2.4 is a disaster.

I really have not sought to make the AS look good - I mainly concentrated on
things which we have traditionally been bad at.  If anyone wants to suggest
other tests, please let me know.

The known regressions from the anticipatory scheduler are:

1) 15% (ish) slowdown in David Mansfield's database run.  This appeared to
   go away in later versions of the scheduler.

2) 5% dropoff in single-threaded qsbench swapstorms

3) 30% dropoff in write bandwidth when there is a streaming read (this is
   actually good).

The test machine is a fast P4-HT with 256MB of memory.  Testing was against a
single fast IDE disk, using ext2.



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Re: IO scheduler benchmarking
From: David Lang (david.lang@digitalinsight.com)
Date: Fri Feb 21 2003 - 01:51:37 EST 

one other useful test would be the time to copy a large (multi-gig) file.
currently this takes forever and uses very little of the disk bandwidth; I
suspect that the AS would give more preference to reads and therefore would
go faster.

for a real-world example, mozilla downloads files to a temp directory and
then copies them to the permanent location. When I download a video from my
tivo it takes ~20 min to download a 1G video, during which time the system
is perfectly responsive; then after the download completes, when mozilla
copies it to the real destination (on a separate disk, so it is a copy, not
just a move), the system becomes completely unresponsive to anything
requiring disk IO for several minutes.

David Lang 


From: Andrew Morton <a...@digeo.com>
Newsgroups: lucky.linux.kernel
Subject: Re: IO scheduler benchmarking
Date: Fri, 21 Feb 2003 08:16:57 +0000 (UTC)
Message-ID: <20030221001624.278ef232.akpm@digeo.com.lucky.linux.kernel>
References: <20030220212304.4712fee9.akpm@digeo.com>
	<Pine.LNX.4.44.0302202247110.12601-100000@dlang.diginsite.com>
In-Reply-To: <Pine.LNX.4.44.0302202247110.12601-100000@dlang.diginsite.com>
X-Comment-To: David Lang

David Lang <david.l...@digitalinsight.com> wrote:
>
> one other useful test would be the time to copy a large (multi-gig) file.
> currently this takes forever and uses very little of the disk bandwidth; I
> suspect that the AS would give more preference to reads and therefore would
> go faster.

Yes, that's a test.

	time (cp 1-gig-file foo ; sync)

2.5.62-mm2,AS:		1:22.36
2.5.62-mm2,CFQ:		1:25.54
2.5.62-mm2,deadline:	1:11.03
2.4.21-pre4:		1:07.69

Well gee.


> for a real-world example, mozilla downloads files to a temp directory and
> then copies them to the permanent location. When I download a video from my
> tivo it takes ~20 min to download a 1G video, during which time the system
> is perfectly responsive; then after the download completes, when mozilla
> copies it to the real destination (on a separate disk, so it is a copy, not
> just a move), the system becomes completely unresponsive to anything
> requiring disk IO for several minutes.

Well 2.4 is unresponsive, period.  That's due to problems in the VM - processes
which are trying to allocate memory get continually DoS'ed by `cp' in page
reclaim.

For the reads-starved-by-writes problem which you describe, you'll see that
quite a few of the tests did cover that.  contest does as well.
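The reads-starved-by-writes case is also easy to poke at directly from the
command line.  A minimal sketch (hypothetical, not one of the posted tests;
names and sizes are illustrative, and `readme' needs to be larger than RAM
so the timed reads actually hit the disk):

```shell
# Hypothetical probe: time reads while a streaming writer runs.
# The read source should be larger than RAM so it is not in pagecache.
dd if=/dev/zero of=readme bs=1M count=512 2>/dev/null
sync

dd if=/dev/zero of=stream.tmp bs=1M count=2048 &   # background streaming write
WRITER=$!

for i in 1 2 3; do
	# read a different 16MB slice each pass so pagecache cannot serve it
	time dd if=readme of=/dev/null bs=1M count=16 skip=$((i * 100)) 2>/dev/null
done

kill "$WRITER" 2>/dev/null || true
wait "$WRITER" 2>/dev/null || true
rm -f readme stream.tmp
```

The per-read times show how badly the reads queue up behind the writer
under each scheduler.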


From: Andrea Arcangeli <and...@suse.de>
Newsgroups: lucky.linux.kernel
Subject: Re: IO scheduler benchmarking
Date: Fri, 21 Feb 2003 10:34:33 +0000 (UTC)
Message-ID: <20030221103140.GN31480@x30.school.suse.de.lucky.linux.kernel>
In-Reply-To: <20030221001624.278ef232.akpm@digeo.com>
X-Comment-To: Andrew Morton

On Fri, Feb 21, 2003 at 12:16:24AM -0800, Andrew Morton wrote:
> Yes, that's a test.
> 
> 	time (cp 1-gig-file foo ; sync)
> 
> 2.5.62-mm2,AS:		1:22.36
> 2.5.62-mm2,CFQ:		1:25.54
> 2.5.62-mm2,deadline:	1:11.03
> 2.4.21-pre4:		1:07.69
> 
> Well gee.

It's pointless to benchmark CFQ in a workload like that, IMHO.  If you
read and write to the same hard disk you want lots of unfairness in order
to go faster.  Your latency is a mixture of reads and writes, and the
writes are likely run by the kernel, so CFQ will likely generate more
seeks (it also depends on whether you have the magic for the
current->mm == NULL case).

You should run something on these lines to measure the difference:

	dd if=/dev/zero of=readme bs=1M count=2000
	sync
	cp /dev/zero . & time cp readme /dev/null

And the best CFQ benchmark really is to run the tiobench read test with a
single thread during the `cp /dev/zero .`.  That will measure the worst-case
latency that `read` provided during the benchmark, and it should make the
most difference, because that is definitely the only thing one can care
about if you need CFQ or SFQ.  You don't care that much about throughput if
you enable CFQ, so it's not really correct to benchmark in terms of real
time; only the worst-case `read` latency matters.

> > for a real-world example, mozilla downloads files to a temp directory and
> > then copies them to the permanent location. When I download a video from my
> > tivo it takes ~20 min to download a 1G video, during which time the system
> > is perfectly responsive; then after the download completes, when mozilla
> > copies it to the real destination (on a separate disk, so it is a copy, not
> > just a move), the system becomes completely unresponsive to anything
> > requiring disk IO for several minutes.
> 
> Well 2.4 is unreponsive period.  That's due to problems in the VM - processes
> which are trying to allocate memory get continually DoS'ed by `cp' in page
> reclaim.

this depends on the workload; you may not have that many allocations,
and an `echo 1 >/proc/sys/vm/bdflush' will fix it should your workload be
hurt by too much dirty cache.  Furthermore, elevator-lowlatency makes
the blkdev layer much more fair under load.
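For reference, the bdflush knob mentioned here is a 2.4-ism:
/proc/sys/vm/bdflush holds nine integers, and writing a single value sets
only the first field, nfract (the percentage of dirty buffers at which
bdflush starts writeback).  A sketch, assuming a 2.4 kernel and root:

```shell
# 2.4-era interface; this file does not exist on later kernels.
cat /proc/sys/vm/bdflush        # current nine-field tuning vector
echo 1 > /proc/sys/vm/bdflush   # nfract=1%: start writeback almost immediately
```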

Andrea

From: rwh...@earthlink.net
Newsgroups: lucky.linux.kernel
Subject: Re: IO scheduler benchmarking
Date: Tue, 25 Feb 2003 05:32:34 +0000 (UTC)
Message-ID: <20030225053547.GA1571@rushmore.lucky.linux.kernel>
X-Comment-To: a...@digeo.com

Executive question: Why does 2.5.62-mm2 have higher sequential
write latency than 2.5.61-mm1?

tiobench numbers on uniprocessor single disk IDE:
The cfq scheduler (2.5.62-mm2 and 2.5.61-cfq) has a big latency
regression.

2.5.61-mm1		(default scheduler (anticipatory?))
2.5.61-mm1-cfq		elevator=cfq
2.5.62-mm2-as		anticipatory scheduler
2.5.62-mm2-dline	elevator=deadline
2.5.62-mm2		elevator=cfq

                    Thr  MB/sec   CPU%   avg lat (ms)  max latency (s)
2.5.61-mm1            8   15.68   54.42%     5.87          2.7
2.5.61-mm1-cfq        8    9.60   15.07%     7.54      393.0
2.5.62-mm2-as         8   14.76   52.04%     6.14        4.5
2.5.62-mm2-dline      8    9.91   13.90%     9.41         .8
2.5.62-mm2            8    9.83   15.62%     7.38      408.9
2.4.21-pre3           8   10.34   27.66%     8.80        1.0
2.4.21-pre3-ac4       8   10.53   28.41%     8.83         .6
2.4.21-pre3aa1        8   18.55   71.95%     3.25       87.6


For most thread counts (8 - 128), the anticipatory scheduler has roughly 
45% higher ext2 sequential read throughput.  Latency was higher than 
deadline, but a lot lower than cfq.

For tiobench sequential writes, the max latency numbers for 2.4.21-pre3
are notably lower than 2.5.62-mm2 (but not as good as 2.5.61-mm1).  
This is with 16 threads.  

                    Thr  MB/sec   CPU%   avg lat (ms)  max latency (s)
2.5.61-mm1           16   18.30   81.12%     9.159         6.1
2.5.61-mm1-cfq       16   18.03   80.71%     9.086        6.1
2.5.62-mm2-as        16   18.84   84.25%     8.620       47.7
2.5.62-mm2-dline     16   18.53   84.10%     8.967       53.4
2.5.62-mm2           16   18.46   83.28%     8.521       40.8
2.4.21-pre3          16   16.20   65.13%     9.566        8.7
2.4.21-pre3-ac4      16   18.50   83.68%     8.774       11.6
2.4.21-pre3aa1       16   18.49   88.10%     8.455        7.5

Recent uniprocessor benchmarks:
http://home.earthlink.net/~rwhron/kernel/latest.html

More uniprocessor benchmarks:
http://home.earthlink.net/~rwhron/kernel/k6-2-475.html

-- 
Randy Hron
http://home.earthlink.net/~rwhron/kernel/bigbox.html
latest quad xeon benchmarks:
http://home.earthlink.net/~rwhron/kernel/blatest.html


From: Andrew Morton <a...@digeo.com>
Newsgroups: lucky.linux.kernel
Subject: Re: IO scheduler benchmarking
Date: Tue, 25 Feb 2003 06:42:58 +0000 (UTC)
Message-ID: <20030224223858.52c61880.akpm@digeo.com.lucky.linux.kernel>
In-Reply-To: <20030225053547.GA1571@rushmore>
X-Comment-To: rwh...@earthlink.net

rwh...@earthlink.net wrote:
>
> Executive question: Why does 2.5.62-mm2 have higher sequential
> write latency than 2.5.61-mm1?

Well bear in mind that we sometimes need to perform reads to be able to
perform writes.  So the way tiobench measures it, you could be seeing
read-vs-write latencies here.

And there are various odd interactions in, at least, ext3.  You did not
specify which filesystem was used.

>  ...
>                     Thr  MB/sec   CPU%     avg lat      max latency
> 2.5.62-mm2-as         8   14.76   52.04%     6.14        4.5
> 2.5.62-mm2-dline      8    9.91   13.90%     9.41         .8
> 2.5.62-mm2            8    9.83   15.62%     7.38      408.9

Fishiness.  2.5.62-mm2 _is_ 2.5.62-mm2-as.  Why the 100x difference?

That 408 seconds looks suspect.


I don't know what tiobench is doing in there, really.  I find it more useful
to test simple things, which I can understand.  If you want to test write
latency, do this:

	while true
	do
		write-and-fsync -m 200 -O -f foo
	done

Maybe run a few of these.  This command will cause a continuous streaming
file overwrite.


then do:

	time write-and-fsync -m1 -f foo

this will simply write a megabyte file, fsync it and exit.
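write-and-fsync is Andrew's own test utility; if you don't have it, a rough
stand-in for this foreground probe (an assumption on my part, not the exact
tool) is an in-place 1MB overwrite timed together with a flush:

```shell
# Overwrite a 1MB file in place and time the write plus flush.
# conv=notrunc keeps reusing the same blocks; note that `sync` flushes
# *all* dirty data, not just foo, so this is a cruder (pessimistic)
# measure than a true per-file fsync.
time sh -c 'dd if=/dev/zero of=foo bs=1M count=1 conv=notrunc 2>/dev/null; sync'
```

Where GNU dd is available, `conv=notrunc,fsync' is a closer per-file
equivalent.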

You need to be careful with this - get it wrong and most of the runtime is
actually paging the executables back in.  That is why the above background
load is just reusing the same pagecache over and over.

The latency which I see for the one megabyte write and fsync varies a lot. 
From one second to ten.  That's with the deadline scheduler.

There is a place in VFS where one writing task could accidentally hammer a
different one.  I cannot trigger that, but I'll fix it up in next -mm.



From: rwh...@earthlink.net
Newsgroups: lucky.linux.kernel
Subject: Re: IO scheduler benchmarking
Date: Tue, 25 Feb 2003 12:55:36 +0000 (UTC)
Message-ID: <20030225125942.GA1657@rushmore.lucky.linux.kernel>
X-Comment-To: a...@digeo.com

>> Why does 2.5.62-mm2 have higher sequential
>> write latency than 2.5.61-mm1?

> And there are various odd interactions in, at least, ext3.  You did not
> specify which filesystem was used.

ext2

>>                     Thr  MB/sec   CPU%     avg lat      max latency
>> 2.5.62-mm2-as         8   14.76   52.04%     6.14        4.5
>> 2.5.62-mm2-dline      8    9.91   13.90%     9.41         .8
>> 2.5.62-mm2            8    9.83   15.62%     7.38      408.9

> Fishiness.  2.5.62-mm2 _is_ 2.5.62-mm2-as.  Why the 100x difference?

Bad EXTRAVERSION naming on my part.  2.5.62-mm2 _was_ booted with 
elevator=cfq.

How it happened:
2.5.61-mm1 tested
2.5.61-mm1-cfq tested and elevator=cfq added to boot flags
2.5.62-mm1 tested (elevator=cfq still in the lilo boot flags)
Then to test the other two schedulers I changed extraversion and boot
flags.

> That 408 seconds looks suspect.

AFAICT, that's the one request in over 500,000 that took the longest.
The numbers are fairly consistent.  How relevant they are is debatable.  

> If you want to test write latency, do this:

Your approach is more realistic than tiobench.  

> There is a place in VFS where one writing task could accidentally hammer a
> different one.  I cannot trigger that, but I'll fix it up in next -mm.

2.5.62-mm3 or 2.5.63-mm1?  (-mm3 is running now)

-- 
Randy Hron
http://home.earthlink.net/~rwhron/kernel/bigbox.html


From: Andrew Morton <a...@digeo.com>
Newsgroups: lucky.linux.kernel
Subject: Re: IO scheduler benchmarking
Date: Tue, 25 Feb 2003 22:24:09 +0000 (UTC)
Message-ID: <20030225140918.197dea73.akpm@digeo.com.lucky.linux.kernel>
In-Reply-To: <20030225125942.GA1657@rushmore>
X-Comment-To: rwh...@earthlink.net

rwh...@earthlink.net wrote:
>
> >> Why does 2.5.62-mm2 have higher sequential
> >> write latency than 2.5.61-mm1?
> 
> > And there are various odd interactions in, at least, ext3.  You did not
> > specify which filesystem was used.
> 
> ext2
> 
> >>                     Thr  MB/sec   CPU%     avg lat      max latency
> >> 2.5.62-mm2-as         8   14.76   52.04%     6.14        4.5
> >> 2.5.62-mm2-dline      8    9.91   13.90%     9.41         .8
> >> 2.5.62-mm2            8    9.83   15.62%     7.38      408.9
> 
> > Fishiness.  2.5.62-mm2 _is_ 2.5.62-mm2-as.  Why the 100x difference?
> 
> Bad EXTRAVERSION naming on my part.  2.5.62-mm2 _was_ booted with 
> elevator=cfq.
> 
> ...
> > That 408 seconds looks suspect.
> 
> AFAICT, that's the one request in over 500,000 that took the longest.
> The numbers are fairly consistent.  How relevant they are is debatable.  

OK.  When I was testing CFQ I saw some odd behaviour, such as a 100%
cessation of reads for periods of up to ten seconds.

So there is some sort of bug in there, and until that is understood we should
not conclude anything at all about CFQ from this testing.

> 2.5.62-mm3 or 2.5.63-mm1?  (-mm3 is running now)

Well I'm showing about seven more AS patches since 2.5.63-mm1 already, so
this is a bit of a moving target.  Sorry.



From: rwh...@earthlink.net
Newsgroups: fa.linux.kernel
To: linux-ker...@vger.kernel.org
Cc: a...@digeo.com
Subject: Re: IO scheduler benchmarking
Date: Tue, 25 Feb 2003 23:57:35 GMT
Message-ID: <fa.f4nsjj8.1u2ms8g@ifi.uio.no>

> Why does 2.5.62-mm2 have higher sequential
> write latency than 2.5.61-mm1?

Anticipatory scheduler tiobench profile on uniprocessor:

                              2.5.61-mm1   2.5.62-mm2
total                           1993387     1933241
default_idle                    1873179     1826650
system_call                       49838       43036
get_offset_tsc                    21905       20883
do_schedule                       13893       10344
do_gettimeofday                    8478        6044
sys_gettimeofday                   8077        5153
current_kernel_time                4904       12165
syscall_exit                       4047        1243
__wake_up                          1274        1000
io_schedule                        1166        1039
prepare_to_wait                    1093         792
schedule_timeout                    612         366
delay_tsc                           502         443
get_fpu_cwd                         473         376
syscall_call                        389         378
math_state_restore                  354         271
restore_fpu                         329         287
del_timer                           325         200
device_not_available                290         377
finish_wait                         257         181
add_timer                           218         137
io_schedule_timeout                 195          72
cpu_idle                            193         218
run_timer_softirq                   137          33
remove_wait_queue                   121         188
eligible_child                      106         154
sys_wait4                           105         162
work_resched                        104         110
ret_from_intr                        97          74
dup_task_struct                      75          48
add_wait_queue                       67         124
__cond_resched                       59          69
do_page_fault                        55           0
do_softirq                           53          12
pte_alloc_one                        51          67
release_task                         44          55
get_signal_to_deliver                38          43
get_wchan                            16          10
mod_timer                            15           0
old_mmap                             14          19
prepare_to_wait_exclusive            10          32
mm_release                            7           0
release_x86_irqs                      7           8
sys_getppid                           6           5
handle_IRQ_event                      4           0
schedule_tail                         4           0
kill_proc_info                        3           0
device_not_available_emulate          2           0
task_prio                             1           1
__down                                0          33
__down_failed_interruptible           0           3
init_fpu                              0          12
pgd_ctor                              0           3
process_timeout                       0           2
restore_all                           0           2
sys_exit                              0           2
-- 
Randy Hron
http://home.earthlink.net/~rwhron/kernel/bigbox.html
