Newsgroups: mlist.linux.kernel
Date: 	Fri, 4 Jan 2002 03:19:10 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
X-To: <linux-ker...@vger.kernel.org>
X-Cc: Linus Torvalds <torva...@transmeta.com>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
Message-ID: <linux.kernel.Pine.LNX.4.33.0201040050440.1363-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Approved: n...@nntp-server.caltech.edu
Lines: 413


now that new-year's parties are over things are getting boring again. For
those who want to see and perhaps even try something more complex, i'm
announcing this patch that is a pretty radical rewrite of the Linux
scheduler for 2.5.2-pre6:

	http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.2-A0.patch

for 2.4.17:

	http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.4.17-A0.patch

Goal
====

The main goal of the new scheduler is to keep all the good things we know
and love about the current Linux scheduler:

 - good interactive performance even during high load: if the user
   types or clicks then the system must react instantly and must execute
   the user tasks smoothly, even during considerable background load.

 - good scheduling/wakeup performance with 1-2 runnable processes.

 - fairness: no process should stay without any timeslice for any
   unreasonable amount of time. No process should get an unjustly high
   amount of CPU time.

 - priorities: less important tasks can be started with lower priority,
   more important tasks with higher priority.

 - SMP efficiency: no CPU should stay idle if there is work to do.

 - SMP affinity: processes which run on one CPU should stay affine to
   that CPU. Processes should not bounce between CPUs too frequently.

 - plus additional scheduler features: RT scheduling, CPU binding.

and the goal is also to add a few new things:

 - fully O(1) scheduling. Are you tired of the recalculation loop
   blowing the L1 cache away every now and then? Do you think the goodness
   loop is taking a bit too long to finish if there are lots of runnable
   processes? This new scheduler takes no prisoners: wakeup(), schedule(),
   the timer interrupt are all O(1) algorithms. There is no recalculation
   loop. There is no goodness loop either.

 - 'perfect' SMP scalability. With the new scheduler there is no 'big'
   runqueue_lock anymore - it's all per-CPU runqueues and locks - two
   tasks on two separate CPUs can wake up, schedule and context-switch
   completely in parallel, without any interlocking. All
   scheduling-relevant data is structured for maximum scalability. (see
   the benchmark section later on.)

 - better SMP affinity. The old scheduler has a particular weakness that
   causes random bouncing of tasks between CPUs if/when higher-priority or
   interactive tasks are present; this was observed and reported by many
   people. The reason is that the timeslice recalculation loop first needs
   every currently running task to consume its timeslice. But when this
   happens on eg. an 8-way system, then this property starves an
   increasing number of CPUs from executing any process. Once the last
   task that has a timeslice left has finished using up that timeslice,
   the recalculation loop is triggered and other CPUs can start executing
   tasks again - after having idled around for a number of timer ticks.
   The more CPUs, the worse this effect.

   Furthermore, the same mechanism causes the bouncing as well:
   whenever there is such a 'timeslice squeeze' of the global runqueue,
   idle processors start executing tasks which are not affine to that CPU.
   (because the affine tasks have finished off their timeslices already.)

   The new scheduler solves this problem by distributing timeslices on a
   per-CPU basis, without having any global synchronization or
   recalculation.

 - batch scheduling. A significant proportion of computation-intensive tasks
   benefit from batch-scheduling, where timeslices are long and processes
   are round-robin scheduled. The new scheduler does such batch-scheduling
   of the lowest priority tasks - so nice +19 jobs will get
   'batch-scheduled' automatically. With this scheduler, nice +19 jobs are
   in essence SCHED_IDLE, from an interactiveness point of view.

 - handle extreme loads more smoothly, without breakdown and scheduling
   storms.

 - O(1) RT scheduling. For those RT folks who are paranoid about the
   O(nr_running) property of the goodness loop and the recalculation loop.

 - run fork()ed children before the parent. Andrea pointed out the
   advantages of this a few months ago, but patches for this feature
   do not work with the old scheduler as well as they should,
   because idle processes often steal the new child before the fork()ing
   CPU gets to execute it.


Design
======

(those who find the following design issues boring can skip ahead to the
'Benchmarks' section.)

the core of the new scheduler is built around the following mechanisms:

 - *two*, priority-ordered 'priority arrays' per CPU. There is an 'active'
   array and an 'expired' array. The active array contains all tasks that
   are affine to this CPU and have timeslices left. The expired array
   contains all tasks which have used up their timeslices - but this array
   is kept sorted as well. The active and expired arrays are not accessed
   directly; they are accessed through two pointers in the per-CPU runqueue
   structure. Once all active tasks have used up their timeslices we
   'switch' the two pointers, and from then on the ready-to-go (formerly
   expired) array is the active array - and the now-empty former active
   array serves as the new collector for expired tasks.

 - there is a 64-bit bitmap cache for array indices. Finding the highest
   priority task is thus a matter of two x86 BSFL bit-search instructions.

the split-array solution enables us to have an arbitrary number of active
and expired tasks, and the recalculation of timeslices can be done
immediately when the timeslice expires. Because the arrays are always
accessed through the pointers in the runqueue, switching the two arrays can
be done very quickly.

this is a hybrid priority-list approach coupled with round-robin
scheduling and the array-switch method of distributing timeslices.
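
a minimal sketch of the structures described above - field and helper names
are illustrative, not the patch's exact definitions, and the sketch uses the
intuitive 'set bit marks a non-empty list' convention:

    struct prio_array {
        int           nr_active;                          /* tasks in this array */
        unsigned long bitmap[MAX_PRIO/BITS_PER_LONG + 1]; /* one bit per priority list */
        list_t        queue[MAX_PRIO];                    /* one list per priority level */
    };

    struct runqueue {
        spinlock_t        lock;                /* per-CPU runqueue lock */
        struct prio_array *active, *expired;   /* the two switchable arrays */
        struct prio_array arrays[2];
    };

    /* picking the next task is a bit search plus a list lookup; if the
     * active array has run dry, just switch the two pointers: */
    if (!rq->active->nr_active) {
        struct prio_array *tmp = rq->active;
        rq->active  = rq->expired;
        rq->expired = tmp;
    }
    idx  = find_first_set_bit(rq->active->bitmap);  /* hypothetical helper; the
                                                     * patch actually searches
                                                     * for zero bits */
    next = list_entry(rq->active->queue[idx].next, task_t, run_list);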

 - there is a per-task 'load estimator'.

one of the toughest things to get right is good interactive feel during
heavy system load. While playing with various scheduler variants i found
that the best interactive feel is achieved not by 'boosting' interactive
tasks, but by 'punishing' tasks that want to use more CPU time than there
is available. This method is also much easier to do in an O(1) fashion.

to establish the actual 'load' the task contributes to the system, a
complex-looking but pretty accurate method is used: there is a 4-entry
'history' ringbuffer of the task's activities during the last 4 seconds.
This ringbuffer is operated without much overhead. The entries tell the
scheduler a pretty accurate load-history of the task: how much CPU time it
has used up during each of the past N seconds. [the size '4' and the
interval of 4x 1 second were found by lots of experimentation - this part
is flexible and can be changed in both directions.]

the penalty a task gets for generating more load than the CPU can handle
is a priority decrease - there is a maximum to this penalty, relative to
the task's static priority, so even fully CPU-bound tasks will observe
each other's priorities and will share the CPU accordingly.
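
a simplified sketch of this penalty logic (the estimate_sleep_avg() helper
name is illustrative and the shape is simplified, not the patch's exact
code):

    sleep   = estimate_sleep_avg(p);     /* 0..HZ: portion of recent time spent sleeping */
    load    = HZ - sleep;                /* portion spent on the runqueue */
    penalty = (MAX_BOOST * load) / HZ;   /* bounded penalty, relative to static priority */
    p->prio = NICE_TO_PRIO(p->__nice) + penalty;
    if (p->prio > MAX_PRIO-1)
        p->prio = MAX_PRIO-1;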

I've separated the RT scheduler into a different codebase, while still
keeping some of the scheduling codebase common. This does not look pretty
in certain places such as __sched_tail() or activate_task(), but i dont
think it can be avoided. RT scheduling is different, it uses a global
runqueue (and global spinlock) and it needs global decisions. To make RT
scheduling more immediate, i've added a broadcast-reschedule message as
well, to make absolutely sure that RT tasks of the right priority are
scheduled appropriately, even on SMP systems. The RT-scheduling part is
O(1) as well.

the SMP load-balancer can be extended/switched with additional parallel
computing and cache hierarchy concepts: NUMA scheduling, multi-core CPUs
can be supported easily by changing the load-balancer. Right now it's
tuned for my SMP systems.

i skipped the prev->mm == next->mm advantage - no workload i know of shows
any sensitivity to this. It can be added back by sacrificing O(1)
schedule() [the current and one-lower priority list can be searched for a
that->mm == current->mm condition], but it costs a fair number of cycles
during a number of important workloads, so i wanted to avoid this as much
as possible.

- the SMP idle-task startup code was still racy and the new scheduler
triggered this. So i streamlined the idle-setup code a bit. We do not call
into schedule() before all processors have started up fully and all idle
threads are in place.

- the patch also cleans up a number of aspects of sched.c - moves code
into other areas of the kernel where it's appropriate, and simplifies
certain code paths and data constructs. As a result, the new scheduler's
code is smaller than the old one.

(i'm sure there are other details i forgot to explain. I've commented some
of the more important code paths and data constructs. If you think some
aspect of this design is faulty or misses some important issue then please
let me know.)

(the current code is by no means perfect, my main goal right now, besides
fixing bugs is to make the code cleaner. Any suggestions for
simplifications are welcome.)

Benchmarks
==========

i've performed two major groups of benchmarks: first i've verified the
interactive performance (interactive 'feel') of the new scheduler on both
UP and SMP systems. While this is a pretty subjective thing, i found
that the new scheduler is at least as good as the old one in all areas,
and in a number of high load workloads it feels visibly smoother. I've
tried a number of workloads, such as make -j background compilation or
1000 background processes. Interactive performance can also be verified
via tracing both schedulers, and i've done that and found no areas of
missed wakeups or imperfect SMP scheduling latencies in either of the two
schedulers.

the other group of benchmarks was the actual performance of the scheduler.
I picked the following ones (some were intentionally picked to load the
scheduler, others were picked to make the benchmark spectrum more
complete):

 - compilation benchmarks

 - the chat-server workload simulator written by Bill Hartner

 - the usual components from the lmbench suite

 - a heavily sched_yield()-ing testcode to measure yield() performance.

 [ i can test any other workload too that anyone would find interesting. ]

i ran these benchmarks on a 1-CPU box using a UP kernel, and on 2-CPU and
8-CPU boxes using the SMP kernel.

The chat-server simulator creates a number of processes that are connected
to each other via TCP sockets; the processes send messages to each other
randomly, in a way that simulates actual chat-server designs and
workloads.
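
chat_c itself is Bill Hartner's benchmark and its source is not included
here; purely to illustrate the kind of workload described above, a rough
sketch (with hypothetical NPAIRS/NMSGS parameters and no error handling)
could look like this:

    /* NOT the real chat_c - just a sketch: pairs of processes ping-ponging
     * small messages over loopback TCP. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPAIRS 10       /* hypothetical: number of process pairs */
    #define NMSGS  100000   /* hypothetical: messages per pair */

    static void run_pair(unsigned short port)
    {
        struct sockaddr_in sa;
        char c = 'x';
        int lfd, sfd, cfd, i;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = inet_addr("127.0.0.1");
        sa.sin_port = htons(port);

        lfd = socket(AF_INET, SOCK_STREAM, 0);
        bind(lfd, (struct sockaddr *)&sa, sizeof(sa));
        listen(lfd, 1);

        if (fork() == 0) {              /* "client" half of the pair */
            cfd = socket(AF_INET, SOCK_STREAM, 0);
            connect(cfd, (struct sockaddr *)&sa, sizeof(sa));
            for (i = 0; i < NMSGS; i++) {
                write(cfd, &c, 1);
                read(cfd, &c, 1);
            }
            exit(0);
        }
        sfd = accept(lfd, NULL, NULL);  /* "server" half echoes back */
        for (i = 0; i < NMSGS; i++) {
            read(sfd, &c, 1);
            write(sfd, &c, 1);
        }
        wait(NULL);
    }

    int main(void)
    {
        int i;

        for (i = 0; i < NPAIRS; i++)
            if (fork() == 0) {
                run_pair(20000 + i);    /* hypothetical port range */
                exit(0);
            }
        for (i = 0; i < NPAIRS; i++)
            wait(NULL);
        return 0;
    }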

3 successive runs of './chat_c 127.0.0.1 10 1000' produce the following
message throughput:

vanilla-2.5.2-pre6:

 Average throughput : 110619 messages per second
 Average throughput : 107813 messages per second
 Average throughput : 120558 messages per second

O(1)-schedule-2.5.2-pre6:

 Average throughput : 131250 messages per second
 Average throughput : 116333 messages per second
 Average throughput : 179686 messages per second

this is a roughly 20% improvement.

To get all benefits of the new scheduler, i ran it reniced, which in
essence triggers round-robin batch scheduling for the chat server tasks:

3 successive runs of 'nice -n 19 ./chat_c 127.0.0.1 10 1000' produce the
following throughput:

vanilla-2.5.2-pre6:

 Average throughput : 77719 messages per second
 Average throughput : 83460 messages per second
 Average throughput : 90029 messages per second

O(1)-schedule-2.5.2-pre6:

 Average throughput : 609942 messages per second
 Average throughput : 610221 messages per second
 Average throughput : 609570 messages per second

throughput improved by more than 600%. The UP and 2-way SMP tests show a
similar edge for the new scheduler. Furthermore, during these chatserver
tests, the old scheduler doesnt handle interactive tasks very well, and
the system is very jerky. (which is a side-effect of the overscheduling
situation the scheduler gets into.)

the 1-CPU UP numbers are interesting as well:

vanilla-2.5.2-pre6:

 ./chat_c 127.0.0.1 10 100
 Average throughput : 102885 messages per second
 Average throughput : 95319 messages per second
 Average throughput : 99076 messages per second

 nice -n 19 ./chat_c 127.0.0.1 10 1000
 Average throughput : 161205 messages per second
 Average throughput : 151785 messages per second
 Average throughput : 152951 messages per second

O(1)-schedule-2.5.2-pre6:

 ./chat_c 127.0.0.1 10 100 # NEW
 Average throughput : 128865 messages per second
 Average throughput : 115240 messages per second
 Average throughput : 99034 messages per second

 nice -n 19 ./chat_c 127.0.0.1 10 1000 # NEW
 Average throughput : 163112 messages per second
 Average throughput : 163012 messages per second
 Average throughput : 163652 messages per second

this shows that while on UP we dont have the scalability improvements, the
O(1) scheduler is still slightly ahead.


another benchmark measures sched_yield() performance. (which the pthreads
code relies on pretty heavily.)
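
the loop_yield source is not included here; a minimal equivalent that just
hammers sched_yield() would be:

    /* loop_yield.c (sketch) - each instance spins on sched_yield() forever;
     * run several instances and watch the 'cs' column of vmstat. */
    #include <sched.h>

    int main(void)
    {
        for (;;)
            sched_yield();
        return 0;
    }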

on a 2-way system, starting 4 instances of ./loop_yield gives the
following context-switch throughput:

vanilla-2.5.2-pre6

  # vmstat 5 | cut -c57-
     system         cpu
   in    cs  us  sy  id
  102 241247   6  94   0
  101 240977   5  95   0
  101 241051   6  94   0
  101 241176   7  93   0

O(1)-schedule-2.5.2-pre6

  # vmstat 5 | cut -c57-
     system         cpu
   in     cs  us  sy  id
  101 977530  31  69   0
  101 977478  28  72   0
  101 977538  27  73   0

the O(1) scheduler is 300% faster, and we do nearly 1 million context
switches per second!

this test is even more interesting on the 8-way system, running 16
instances of loop_yield:

vanilla-2.5.2-pre6:

   vmstat 5 | cut -c57-
      system         cpu
    in     cs  us  sy  id
   106 108498   2  98   0
   101 108333   1  99   0
   102 108437   1  99   0

100K/sec context switches - the overhead of the global runqueue makes the
scheduler slower than the 2-way box!

O(1)-schedule-2.5.2-pre6:

    vmstat 5 | cut -c57-
     system         cpu
    in      cs  us  sy  id
   102 6120358  34  66   0
   101 6117063  33  67   0
   101 6117124  34  66   0

this is more than 6 million context switches per second! (i think this is
a first - no Linux box in existence has done this many context switches per
second before.) This is one workload where the per-CPU runqueues and scalability
advantages show up big time.

here are the lat_proc and lat_ctx comparisons (the results quoted here are
the best numbers from a series of tests):

vanilla-2.5.2-pre6:

  ./lat_proc fork
  Process fork+exit: 440.0000 microseconds
  ./lat_proc exec
  Process fork+execve: 491.6364 microseconds
  ./lat_proc shell
  Process fork+/bin/sh -c: 3545.0000 microseconds

O(1)-schedule-2.5.2-pre6:

  ./lat_proc fork
  Process fork+exit: 168.6667 microseconds
  ./lat_proc exec
  Process fork+execve: 279.6500 microseconds
  ./lat_proc shell
  Process fork+/bin/sh -c: 2874.0000 microseconds

the difference is pretty dramatic - it's mostly due to avoiding much of
the COW overhead that comes from fork()+execve(). The fork()+exit()
improvement is mostly due to better CPU affinity - parent and child are
running on the same CPU, while the old scheduler pushes the child to
another, idle CPU, which creates heavy interlocking traffic between the MM
structures.

the compilation benchmarks i ran gave very similar results for both
schedulers. The O(1) scheduler has a small 2% advantage in make -j
benchmarks (not accounting for statistical noise - it's hard to produce
reliable compilation benchmarks) - probably due to better SMP affinity
again.

Status
======

i've tested the new scheduler under the aforementioned range of systems
and workloads, but it's still experimental code nevertheless. I've
developed it on SMP systems using the 2.5.2-pre kernels, so it has the
most testing there, but i did a fair number of UP and 2.4.17 tests as
well. NOTE! For the 2.5.2-pre6 kernel to be usable you should apply
Andries' latest 2.5.2pre6-kdev_t patch available at:

	http://www.kernel.org/pub/linux/kernel/people/aeb/

i also tested the RT scheduler for various situations such as
sched_yield()-ing of RT tasks, strace-ing RT tasks and other details, and
it's all working as expected. There might be some rough edges though.

Comments, bug reports, suggestions are welcome,

	Ingo


Newsgroups: mlist.linux.kernel
Date: 	Fri, 4 Jan 2002 21:30:18 +1100
From: Anton Blanchard <an...@samba.org>
X-To: Ingo Molnar <mi...@elte.hu>
X-Cc: linux-ker...@vger.kernel.org, Linus Torvalds <torva...@transmeta.com>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
Message-ID: <linux.kernel.20020104103018.GA22745@krispykreme.SOMEWHERE>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Approved: n...@nntp-server.caltech.edu
Lines: 63


Great stuff Ingo!

> now that new-year's parties are over things are getting boring again. For
> those who want to see and perhaps even try something more complex, i'm
> announcing this patch that is a pretty radical rewrite of the Linux
> scheduler for 2.5.2-pre6:

Good timing :) We were just looking at an application that hit sched_yield
heavily on a large SMP machine (dont have the source so fixing the
symptoms not the cause). Performance was pretty bad with the standard
scheduler.

We managed to get a 4 way ppc64 machine to boot, but unfortunately the
18 way hung somewhere after smp initialisation and before execing
init. More testing to come :)

Is the idle loop optimisation (atomic exchange -1 into need_resched
to avoid IPI) gone forever?

Is my understanding of this right?:

#define BITMAP_SIZE (MAX_RT_PRIO/8+1)

...

char bitmap[BITMAP_SIZE+1];
list_t queue[MAX_RT_PRIO];

You have a bitmap of size MAX_RT_PRIO+1 (actually I think you have one too
many +1 here :) and you zero the last bit to terminate it. You use
find_first_zero_bit to search the entire priority list and
sched_find_first_zero_bit to search only the first MAX_PRIO (64) bits.

> the SMP load-balancer can be extended/switched with additional parallel
> computing and cache hierarchy concepts: NUMA scheduling, multi-core CPUs
> can be supported easily by changing the load-balancer. Right now it's
> tuned for my SMP systems.

It will be interesting to test this on our HMT hardware.

> - the SMP idle-task startup code was still racy and the new scheduler
> triggered this. So i streamlined the idle-setup code a bit. We do not call
> into schedule() before all processors have started up fully and all idle
> threads are in place.

I like this cleanup, it pushes more stuff out of the arch-specific code
into init_idle().

> another benchmark measures sched_yield() performance. (which the pthreads
> code relies on pretty heavily.)

Can you send me this benchmark and I'll get some more results :)

I dont think pthreads uses sched_yield on spinlocks any more, but there
seems to be enough userspace code that does. 

Anton

Newsgroups: mlist.linux.kernel
Date: 	Fri, 4 Jan 2002 13:53:14 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
X-To: Anton Blanchard <an...@samba.org>
X-Cc: linux-kernel <linux-ker...@vger.kernel.org>,
        Linus Torvalds <torva...@transmeta.com>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
Message-ID: <linux.kernel.Pine.LNX.4.33.0201041338590.2717-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Approved: n...@nntp-server.caltech.edu
Lines: 81


On Fri, 4 Jan 2002, Anton Blanchard wrote:

> Good timing :) We were just looking at an application that hit
> sched_yield heavily on a large SMP machine (dont have the source so
> fixing the symptoms not the cause). Performance was pretty bad with
> the standard scheduler.

hey, great. I mean, what a pity :-| But in any case, sched_yield() is just
a tiny portion of the scheduler spectrum, and it's certainly not the most
important one.

> We managed to get a 4 way ppc64 machine to boot, but unfortunately the
> 18 way hung somewhere after smp initialisation and before execing
> init. More testing to come :)

could this be the child-runs-first problem perhaps? You can disable it by
exchanging wake_up_forked_process() for wake_up_process() in fork.c, and
removing the current->need_resched = 1 line.
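
an illustrative sketch of that change (the exact context in fork.c depends
on the patch):

    #if 0   /* child-runs-first wakeup (the patch's default) */
        wake_up_forked_process(p);
        current->need_resched = 1;
    #else   /* plain wakeup, for testing whether child-runs-first causes the hang */
        wake_up_process(p);
    #endif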

> Is the idle loop optimisation (atomic exchange -1 into need_resched to
> avoid IPI) gone forever?

it's just broken temporarily, i will fix it. The two places that need to
set need_resched are the timer interrupt (which triggers the periodic
load_balance()) and the wakeup code; i'll fix this in the next patch.

> Is my understanding of this right?:
>
> #define BITMAP_SIZE (MAX_RT_PRIO/8+1)
>
> ...
>
> char bitmap[BITMAP_SIZE+1];
> list_t queue[MAX_RT_PRIO];
>
> You have a bitmap of size MAX_RT_PRIO+1 (actually I think you have
> one too many +1 here :) [...]

[ yes :-) paranoia. will fix it. ]

>           [...] and you zero the last bit to terminate it. You
> use find_first_zero_bit to search the entire priority list and
> sched_find_first_zero_bit to search only the first MAX_PRIO (64) bits.

correct.

the reason for this logic inversion is temporary as well: we'll have to
add find_next_set_bit before doing the more intuitive thing of setting the
bit when the runlist is active. Right now sched_find_first_zero_bit() has
to invert the value (on x86) again to get it right for BSFL.
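
an illustrative sketch of the inversion (not the patch's actual code):

    /* with the temporary 'zero bit == non-empty list' convention the word
     * has to be inverted before the bit scan; the caller must guarantee
     * that at least one zero bit exists. */
    static inline unsigned int first_zero_bit_in_word(unsigned long word)
    {
        return __builtin_ctzl(~word);   /* gcc emits BSF/BSFL on x86 */
    }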

> > - the SMP idle-task startup code was still racy and the new scheduler
> > triggered this. So i streamlined the idle-setup code a bit. We do not call
> > into schedule() before all processors have started up fully and all idle
> > threads are in place.
>
> I like this cleanup, it pushes more stuff out of the arch-specific code
> into init_idle().

the new rule is this: schedule() must not be called before all bits in
wait_init_idle are clear. I'd suggest adding this to the top of
schedule():

	if (wait_init_idle)
		BUG();

to debug the SMP startup code.

the other new property is that the init thread wakes itself up and then
later on becomes an idle thread just like the other idle threads. This
makes the idle startup code more symmetric, and the scheduler more
debuggable.

	Ingo


Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Fri, 4 Jan 2002 20:33:27 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Ingo Molnar <mi...@elte.hu>
cc: lkml <linux-ker...@vger.kernel.org>,
        Linus Torvalds <torva...@transmeta.com>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201040050440.1363-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.40.0201041857490.937-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sat, 5 Jan 2002 04:31:16 GMT
Message-ID: <fa.ln6qgcv.l2i1ia@ifi.uio.no>
References: <fa.o38vg8v.vlk7p4@ifi.uio.no>
Lines: 232


Ingo, i finally had the time to take a look at the code and i have some
comments. I'll start by saying what i like, and that is the sched.c cleanup
from the messy condition it has suffered from for a long time. Then a
question: why the hell are you trying to achieve O(1) inside the single-CPU
schedulers ?
Once a smart guy told me that only fools never change their opinion, so i'm
not going to judge you for this, but do you remember what you said 4 years
ago when i posted the priority queue patch ? You said, just in case you do
not remember, that the load average over a single CPU, even for highly
loaded servers, is typically lower than 5.
So why the hell create 13456 queues to achieve O(1) lookup code when 70% of
the switch cost is switch-mm ?
Yes, 70% of the cost of a context switch is switch-mm, and this was
measured with a cycle counter. Take a look at this if you do not believe it :

http://www.xmailserver.org/linux-patches/ltcpu.png

More, you removed the MM affinity test which, i agree, is not always
successful, but even if it's able to correctly predict 50% of the
reschedules, it'll save you more than an O(1) pickup does on a runqueue of
length 1..3. Once you've split the queue and removed the common lock you
have reduced the scheduler cost _a_lot_ for the SMP case, and inside the
local CPU you don't need O(1) pickups.
Why ?
Look at how many people have suffered from the current scheduler's
performance in the UP case. Noone. People posted patches to get O(1)
pickups ( yep, even me ), guys tried them believing they would solve their
problems, and noone requested a merge, because the current scheduler
architecture is just OK for the uniprocessor case.
Let's get to the code. Have you ever tried to measure the context switch
times for standard tasks when there's an RT task running ?
Every time an RT task runs, every reschedule on the other CPUs will try to
pick up a task from the RT queue :

    if (unlikely(rt_array.nr_active))
        if (rt_schedule())
            goto out_return;

and more, inside the rt_schedule() you're going to get the full lock set :

    lock_rt();
    ...

static void lock_rt(void)
{
    int i;

    __cli();
    for (i = 0; i < smp_num_cpus; i++)
        spin_lock(&cpu_rq(i)->lock);
    spin_lock(&rt_lock);
}

with interrupts disabled. Moreover, the wakeup code for RT tasks is, let's
say, at least sub-optimal :

        if (unlikely(rt_task(p))) {
            spin_lock(&rt_lock);
            enqueue_task(p, &rt_array);

>>          smp_send_reschedule_all();

            current->need_resched = 1;
            spin_unlock(&rt_lock);
        }

You basically broadcast IPIs with all other CPUs falling down to get the
whole lock set. I'll let you imagine what happens. The load estimator is
complex, expensive ( mults, divs, mods ) and it does not work :

static inline void update_sleep_avg_deactivate(task_t *p)
{
    unsigned int idx;
    unsigned long j = jiffies, last_sample = p->run_timestamp / HZ,
        curr_sample = j / HZ, delta = curr_sample - last_sample;

    if (delta) {
        if (delta < SLEEP_HIST_SIZE) {
            for (idx = 0; idx < delta; idx++) {
                p->sleep_idx++;
                p->sleep_idx %= SLEEP_HIST_SIZE;
                p->sleep_hist[p->sleep_idx] = 0;
            }
        } else {
            for (idx = 0; idx < SLEEP_HIST_SIZE; idx++)
                p->sleep_hist[idx] = 0;
            p->sleep_idx = 0;
        }
    }
    p->sleep_timestamp = j;
}

If you scale down to seconds with /HZ, delta will be zero in 99.99% of
cases. How long do you think a task runs - 1-3 seconds ?!?
You basically shortened the schedule() path by adding more code on the
wakeup() path. This :

static inline unsigned int update_sleep_avg_activate(task_t *p)
{
    unsigned int idx;
    unsigned long j = jiffies, last_sample = p->sleep_timestamp / HZ,
        curr_sample = j / HZ, delta = curr_sample - last_sample,
        delta_ticks, sum = 0;

    if (delta) {
        if (delta < SLEEP_HIST_SIZE) {
            p->sleep_hist[p->sleep_idx] += HZ - (p->sleep_timestamp % HZ);
            p->sleep_idx++;
            p->sleep_idx %= SLEEP_HIST_SIZE;

            for (idx = 1; idx < delta; idx++) {
                p->sleep_idx++;
                p->sleep_idx %= SLEEP_HIST_SIZE;
                p->sleep_hist[p->sleep_idx] = HZ;
            }
        } else {
            for (idx = 0; idx < SLEEP_HIST_SIZE; idx++)
                p->sleep_hist[idx] = HZ;
            p->sleep_idx = 0;
        }
        p->sleep_hist[p->sleep_idx] = 0;
        delta_ticks = j % HZ;
    } else
        delta_ticks = j - p->sleep_timestamp;
    p->sleep_hist[p->sleep_idx] += delta_ticks;
    p->run_timestamp = j;

    for (idx = 0; idx < SLEEP_HIST_SIZE; idx++)
        sum += p->sleep_hist[idx];

    return sum * HZ / ((SLEEP_HIST_SIZE-1)*HZ + (j % HZ));
}

plus this :

#define MAX_BOOST (MAX_PRIO/3)
    sleep = update_sleep_avg_activate(p);
    load = HZ - sleep;
    penalty = (MAX_BOOST * load)/HZ;
    if (penalty) {
        p->prio = NICE_TO_PRIO(p->__nice) + penalty;
        if (p->prio < 0)
            p->prio = 0;
        if (p->prio > MAX_PRIO-1)
            p->prio = MAX_PRIO-1;
    }

cannot be called a short path inside wakeup().
Even expire_task(), which is called at HZ frequency, is not that short :

void expire_task(task_t *p)
{
    runqueue_t *rq = this_rq();
    unsigned long flags;

    if (rt_task(p)) {
        if ((p->policy == SCHED_RR) && p->time_slice) {
            spin_lock_irqsave(&rq->lock, flags);
            spin_lock(&rt_lock);
            if (!--p->time_slice) {
                /*
                 * RR tasks get put to the end of the
                 * runqueue when their timeslice expires.
                 */
                dequeue_task(p, &rt_array);
                enqueue_task(p, &rt_array);
                p->time_slice = RT_PRIO_TO_TIMESLICE(p->prio);
                p->need_resched = 1;
            }
            spin_unlock(&rt_lock);
            spin_unlock_irqrestore(&rq->lock, flags);
        }
        return;
    }
    if (p->array != rq->active) {
        p->need_resched = 1;
        return;
    }
    /*
     * The task cannot change CPUs because it's the current task.
     */
    spin_lock_irqsave(&rq->lock, flags);
    if (!--p->time_slice) {
        p->need_resched = 1;
        p->time_slice = PRIO_TO_TIMESLICE(p->prio);

        /*
         * Timeslice used up - discard any possible
         * priority penalty:
         */
        dequeue_task(p, rq->active);
        enqueue_task(p, rq->expired);
    } else {
        /*
         * Deactivate + activate the task so that the
         * load estimator gets updated properly:
         */
        deactivate_task(p, rq);
        activate_task(p, rq);
    }
    load_balance(rq);
    spin_unlock_irqrestore(&rq->lock, flags);
}

with a runqueue balance done at every timer tick.
Another thing that i'd like you to note is that with the current
2.5.2-preX scheduler the recalculation loop is no longer a problem, because
it's done only for running tasks and the lock switch between task_list and
runqueue has been removed. This is just fine because the worst case for the
recalc loop frequency is one CPU-bound task running. In this case the
frequency is about 1/TS ~= 20 and it's done only for one task. If you have
two CPU-bound tasks running, the frequency becomes about 1/(2 * TS) ~= 10
( recalc of two tasks ). The modified 2.5.2-preX has cut the recalculation
loop cost _a_lot_ and, while before on my machine i was able to clearly
distinguish recalc-loop LatSched samples ( ~= 40000 cycles with about 120
processes ), now the context switch cost is uniform over time ( no random
peaks to 40000 cycles ).
Anyway, it's the balancing code that will make the difference in
_real_world_ tests on a multi-queue scheduler, not O(1) pickups.




- Davide




Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 21:24:08 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Davide Libenzi <davi...@xmailserver.org>
Cc: lkml <linux-ker...@vger.kernel.org>,
        Linus Torvalds <torva...@transmeta.com>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.40.0201041857490.937-100000@blue1.dev.mcafeelabs.com>
Original-Message-ID: <Pine.LNX.4.33.0201051232020.2542-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sat, 5 Jan 2002 18:29:12 GMT
Message-ID: <fa.o293f8v.slg6p6@ifi.uio.no>
References: <fa.ln6qgcv.l2i1ia@ifi.uio.no>
Lines: 163


On Fri, 4 Jan 2002, Davide Libenzi wrote:

> [...] but do you remember what you said 4 years ago when i posted the
> priority queue patch ? You said, just in case you do not remember, that
> the load average over a single CPU, even for highly loaded servers, is
> typically lower than 5.

yep, this might have been the case 4 years ago :) But i'd not be ashamed
to change my opinion even if i had said it just a few months ago.

Today we have things like the slashdot effect that will break down
otherwise healthy and well-sized systems. People have started working
around things by designing servers that run only a limited number of
processes, but the fact is, why should we restrict application maker's
choice of design, especially if they are often interested in robustness
more than in the last bit of performance. There are a fair number of
real-world application servers that start to suffer under Linux if put
under realistic load.

> Yes, 70% of the cost of a context switch is switch-mm, and this was
> measured with a cycle counter. [...]

two-process context switch times under the O(1) scheduler did not get
slower.

> Take a look at this if you do not believe :

In fact it's the cr3 switch (movl %0, %%cr3) that accounts for about 30%
of the context switch cost. On x86. On other architectures it's often
much, much cheaper.

> More, you removed the MM affinity test which, i agree, is not always
> successful, but even if it's able to correctly predict 50% of the
> reschedules, [...]

you must be living on a different planet if you say that the p->mm test in
goodness() matters in 50% of the reschedules. In my traces it was more
like 1% of the cases or less, so i removed it as insignificant factor
causing significant complexity. Please quote the specific workload/test i
should try, where you think the mm test matters.

> Let's get to the code. Have you ever tried to measure the context
> switch times for standard tasks when there's an RT task running ?

the current RT scheduling scheme was a conscious choice. *If* a RT task is
running (which just doesnt happen in 99.999% of the time) then scalability
becomes much less important, and scheduling latency is the only and
commanding factor.

> You basically broadcast IPIs with all other CPUs falling down to get
> the whole lock set. [...]

oh yes. *If* a RT task becomes runnable then i *want* all other CPUs drop
all the work they have as soon as possible and go try to run that
over-important RT task. It doesnt matter whether it's SMP-affine, and it
doesnt matter whether it's scalable. RT is about latency of action,
nothing else. RT is about a robot arm having to stop in 100 msecs or the
product gets physically damaged.

this method of 'global RT tasks' has the following practical advantage: it
reduces the statistical scheduling latency of RT tasks better than any
other solution, because the scheduler *cannot* know in advance which CPU
will be able to get the RT task first. Think about it, what if the CPU,
that appears to be the least busy for the scheduler, happens to be
spinning within some critical section? The best method is to let every CPU
go into the scheduler as fast as possible, and let the fastest one win.

George Anzinger @ Montavista has suggested the following extension to this
concept: besides having such global RT tasks, for some RT usages it makes
sense to have per-CPU 'affine' RT tasks. I think that makes alot of sense
as well, because if you care more about scalability than latencies, then
you can still flag your process to be 'CPU affine RT task', which wont be
included in the global queue, and thus you wont see the global locking
overhead and 'CPUs racing to run RT tasks'. I have reserved some priority
bitspace for such purposes.

> static inline void update_sleep_avg_deactivate(task_t *p)
> {
>     unsigned int idx;
>     unsigned long j = jiffies, last_sample = p->run_timestamp / HZ,
>         curr_sample = j / HZ, delta = curr_sample - last_sample;
>
>     if (delta) {
[...]

> If you scale down to seconds with /HZ, delta will be zero in 99.99% of
> cases. How long do you think a task runs - 1-3 seconds ?!?

i have to say that you apparently do not understand the code. Please check
it out again, it does not do what you think it does. The code measures the
portion of time the task spends in the runqueue. Eg. a fully CPU-bound
task spends 100% of its time on the runqueue. A task that is sleeping
spends 0% of its time on the runqueue. A task that does some real work and
goes on and off the runqueue will have a 'runqueue percentage value'
somewhere between the two. The load estimator in the O(1) scheduler
measures this value based on a 4-entry, 4 seconds 'runqueue history'. The
scheduler uses this percentage value to decide whether a task is
'interactive' or a 'CPU hog'. But you do not have to take my word for it,
check out the code and prove me wrong.

> You basically shortened the schedule() path by adding more code on the
> wakeup() path. This :

would you mind proving this claim with hard numbers? I *have* put up hard
numbers, but i dont see measurements in your post. I was very careful
about not making wakeup() more expensive at the expense of schedule(). The
load estimator was written and tuned carefully, and eg. on UP, switching
just 2 tasks (lat_ctx -s 0 2), the O(1) scheduler is as fast as the
2.5.2-pre6/pre7 scheduler. It also brings additional benefits of improved
interactiveness detection, which i'd be willing to pay the price for even
if it made scheduling slightly more expensive. (which it doesnt.)

> with a runqueue balance done at every timer tick.

please prove that this has any measurable performance impact. Right now
the load-balancer is a bit over-eager trying to balance work - in the -A2
patch i've relaxed this a bit.

if you check out the code then you'll see that the load-balancer has been
designed to be scalable and SMP-friendly as well: ie. it does not lock
other runqueues while it checks the statistics, so it does not create
interlocking traffic. It only goes and locks other runqueues *if* it finds
an imbalance. Ie., under this design, it's perfectly OK to run the load
balancer 100 times a second, because there wont be much overhead unless
there is heavy runqueue activity.

(in my current codebase i've changed the load-balancer to be called with a
maximum frequency of 100 - eg. if HZ is set to 1000 then we still call the
load balancer only 100 times a second.)
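
an illustrative sketch of that rate-limiting (not the actual patch code):

    /* call the balancer on every (HZ/100)-th timer tick, i.e. at most
     * ~100 times a second regardless of HZ: */
    #define BALANCE_TICKS ((HZ / 100) ? (HZ / 100) : 1)

    if (!(jiffies % BALANCE_TICKS))
        load_balance(this_rq());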

> Another thing that i'd like you to note is that with the current
> 2.5.2-preX scheduler the recalculation loop is no longer a problem,
> because it's done only for running tasks and the lock switch between
> task_list and runqueue has been removed. [...]

all the comparisons i did and descriptions i made were based on the
current 2.5.2-pre6/pre7 scheduler. The benchmarks i did were against
2.5.2-pre6 that has the latest code. But to make the comparison more fair,
i'd like to ask you to compare your multiqueue scheduler patch against the
O(1) scheduler, and post your results here.

and yes, the goodness() loop does matter very much. I have fixed this, and
unless you can point out practical disadvantages i dont understand your
point.

fact is, scheduler design does matter when things are turning for the
worse, when your website gets a sudden spike of hits and you still want to
use the system's resources to handle (and thus decrease) the load and not
waste it on things like goodness loops which will only make the situation
worse. There is no system that cannot be overloaded, but there is a big
difference between handling overloads gracefully and trying to cope with
load spikes or falling flat on our face spending 90% of our time in the
scheduler. This is why i think it's a good idea to have an O(1)
scheduler.

	Ingo


Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 15:04:27 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Ingo Molnar <mi...@elte.hu>
cc: lkml <linux-ker...@vger.kernel.org>,
        Linus Torvalds <torva...@transmeta.com>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201051232020.2542-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.40.0201051242080.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sat, 5 Jan 2002 23:02:05 GMT
Message-ID: <fa.ltqonqv.u78bov@ifi.uio.no>
References: <fa.o293f8v.slg6p6@ifi.uio.no>
Lines: 350

On Sat, 5 Jan 2002, Ingo Molnar wrote:

>
> On Fri, 4 Jan 2002, Davide Libenzi wrote:
>
> > [...] but do you remember what you said 4 years ago when i posted the
> > priority queue patch ? You said, just in case you do not remember, that
> > the load average over a single CPU, even for highly loaded servers, is
> > typically lower than 5.
>
> yep, this might have been the case 4 years ago :) But i'd not be ashamed
> to change my opinion even if i had said it just a few months ago.
>
> Today we have things like the slashdot effect that will break down
> otherwise healthy and well-sized systems. People have started working
> around things by designing servers that run only a limited number of
> processes, but the fact is, why should we restrict application maker's
> choice of design, especially if they are often interested in robustness
> more than in the last bit of performance. There are a fair number of
> real-world application servers that start to suffer under Linux if put
> under realistic load.

No Ingo, the fact that you coded the patch this time does not really change
the workloads, once you've got a per-CPU run queue and lock. The thing that
makes big servers suffer is the common queue plus the cache-coherency
traffic due to the common lock. Look here for an 8-way system :

http://www.xmailserver.org/linux-patches/lats8-latsch.png

Even in _highly_realistic_ workloads on these servers the scheduler impact
is never more than 5-15%, and by simply splitting the queue/lock you get a
30- to 60-fold improvement ( see picture ), which makes the scheduler impact
go down to 0.2-0.3%. Why do you need to overoptimize for 0.2% ?


> > Yes, 70% of the cost of a context switch is switch-mm, and this was
> > measured with a cycle counter. [...]
>
> two-process context switch times under the O(1) scheduler did not get
> slower.

No, it makes the scheduler more complex and gives it a bigger memory
footprint. It's a classic example of overoptimization that you agreed some
time ago was unnecessary. Having coded the patch this time seems to have
changed your mind :

/*
 *  linux/kernel/sched.c
 *
 *  Kernel scheduler and related syscalls
 *
 *  Copyright (C) 1991, 1992, 2002  Linus Torvalds, Ingo Molnar
 *



> > Take a look at this if you do not believe :
>
> In fact it's the cr3 switch (movl %0, %%cr3) that accounts for about 30%
> of the context switch cost. On x86. On other architectures it's often
> much, much cheaper.

TLB flushes are expensive everywhere, and you know this very well; if you
used a cycle counter to analyze the scheduler instead of silly benchmarks
you would probably know more about what matters inside a context switch.
Just to give you an example, the cost of a context switch is :

CS = CMC + MMC + N * GLC

with :

CMC = common path cost
MMC = switch MM cost
N   = number of tasks on the run queue
GLC = cost of a goodness() loop

Do you know what, on my machine, is the value of  N  that makes :

CMC + MMC = N * GLC

Well, it's 10-12, which means a runqueue with 10-12 running processes.
What does this mean ? It means that it is almost useless to optimize for
O(1) because (CMC + MMC) is about one order of magnitude above GLC.
And a run queue of 10 is HUGE for a single CPU, just as a run queue of 80
is pretty HUGE for an 8-way SMP system.
And you've got the history of Linux over the last 10 years that taught you
that such run queues are not realistic.



> > More, you removed the MM affinity test which, i agree, is not always
> > successful, but even if it's able to correctly predict 50% of the
> > reschedules, [...]
>
> you must be living on a different planet if you say that the p->mm test in
> goodness() matters in 50% of the reschedules. In my traces it was more
> like 1% of the cases or less, so i removed it as insignificant factor
> causing significant complexity. Please quote the specific workload/test i
> should try, where you think the mm test matters.

Oh sure, you won't see this in benchmarks, but try to run a cycle
sampler with a yield()-test load and move your mouse. Look at the context
switch time when yield() tasks are switching and when the keventd task
gets kicked. I'm not saying that the test will always be successful, but
getting even a percent of these switches right is surely better than
trying to optimize a cost that is dominated by way bigger components.




> > Let's get to the code. Have you ever tried to measure the context
> > switch times for standard tasks when there's an RT task running ?
>
> the current RT scheduling scheme was a conscious choice. *If* a RT task is
> running (which just doesnt happen in 99.999% of the time) then scalability
> becomes much less important, and scheduling latency is the only and
> commanding factor.
> > You basically broadcast IPIs with all other CPUs falling down to get
> > the whole lock set. [...]
>
> oh yes. *If* a RT task becomes runnable then i *want* all other CPUs drop
> all the work they have as soon as possible and go try to run that
> over-important RT task. It doesnt matter whether it's SMP-affine, and it
> doesnt matter whether it's scalable. RT is about latency of action,
> nothing else. RT is about a robot arm having to stop in 100 msecs or the
> product gets physically damaged.
>
> this method of 'global RT tasks' has the following practical advantage: it
> reduces the statistical scheduling latency of RT tasks better than any
> other solution, because the scheduler *cannot* know in advance which CPU
> will be able to get the RT task first. Think about it, what if the CPU,
> that appears to be the least busy for the scheduler, happens to be
> spinning within some critical section? The best method is to let every CPU
> go into the scheduler as fast as possible, and let the fastest one win.

Ingo, your RT code sucks, and if you try for a single second to forget
that you coded the patch, you will be able to agree. The full lock
acquisition sucks, the broadcast IPI sucks ( just not to exaggerate ), and
so does the fact that while an RT task is running, every schedule that
happens in the system will try to pick up a task from the RT queue
( trying to acquire the whole lock set, each time ). Come on Ingo, this
looks very much like the old wakeup code before wake-one was implemented.
There is absolutely no reason that, in a 4/8-way SMP system running an RT
task on one CPU, the other 3/7 should suffer for this. Try to open your
eyes Ingo, you would never have accepted this code if it had come from
someone else.



> George Anzinger @ Montavista has suggested the following extension to this
> concept: besides having such global RT tasks, for some RT usages it makes
> sense to have per-CPU 'affine' RT tasks. I think that makes alot of sense
> as well, because if you care more about scalability than latencies, then
> you can still flag your process to be 'CPU affine RT task', which wont be
> included in the global queue, and thus you wont see the global locking
> overhead and 'CPUs racing to run RT tasks'. I have reserved some priority
> bitspace for such purposes.

I don't like to infringe patents, but this has been used inside the
Balanced Multi Queue Scheduler from the very beginning. A flag
SCHED_RTLOCAL in setscheduler() makes the difference.
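
an illustrative sketch of the idea only (BMQS's actual implementation is
not shown here, and the flag check is hypothetical):

    if (rt_task(p)) {
        if (p->policy & SCHED_RTLOCAL)
            enqueue_task(p, this_rq()->active); /* stays CPU-affine */
        else
            enqueue_task(p, &rt_array);         /* global RT queue */
    }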



> > static inline void update_sleep_avg_deactivate(task_t *p)
> > {
> >     unsigned int idx;
> >     unsigned long j = jiffies, last_sample = p->run_timestamp / HZ,
> >         curr_sample = j / HZ, delta = curr_sample - last_sample;
> >
> >     if (delta) {
> [...]
>
> > If you scale down to seconds with /HZ, delta will be zero in 99.99% of
> > cases. How long do you think a task runs - 1-3 seconds ?!?
>
> i have to say that you apparently do not understand the code. Please check
> it out again, it does not do what you think it does. The code measures the
> portion of time the task spends in the runqueue. Eg. a fully CPU-bound
> task spends 100% of its time on the runqueue. A task that is sleeping
> spends 0% of its time on the runqueue. A task that does some real work and
> goes on and off the runqueue will have a 'runqueue percentage value'
> somewhere between the two. The load estimator in the O(1) scheduler
> measures this value based on a 4-entry, 4 seconds 'runqueue history'. The
> scheduler uses this percentage value to decide whether a task is
> 'interactive' or a 'CPU hog'. But you do not have to take my word for it,
> check out the code and prove me wrong.

I'm sorry, but you set  p->run_timestamp  to jiffies when the task starts
running and you compute  curr_sample = j / HZ  when it stops running.
If you divide by HZ you'll obtain seconds. A task typically runs 10ms to
60ms, which means that, for example :

p->run_timestamp = N       ( jiffies )
curr_sample      = N+5 max ( jiffies )

with HZ == 100 ( different HZ does not change the picture ).
Now, if you divide by HZ, what do you obtain ?

   unsigned long j = jiffies, last_sample = p->run_timestamp / HZ,
        curr_sample = j / HZ, delta = curr_sample - last_sample;

delta will be zero in ~97% of cases, and one in the remaining 3%
( because of HZ scale crossing ).
Now, this is not my code, but do you understand your code ?
Do you understand why it's broken ?



> > You basically shortened the schedule() path by adding more code on the
> > wakeup() path. This :
>
> would you mind proving this claim with hard numbers? I *have* put up hard
> numbers, but i dont see measurements in your post. I was very careful
> about not making wakeup() more expensive at the expense of schedule(). The
> load estimator was written and tuned carefully, and eg. on UP, switching
> just 2 tasks (lat_ctx -s 0 2), the O(1) scheduler is as fast as the
> 2.5.2-pre6/pre7 scheduler. It also brings additional benefits of improved
> interactiveness detection, which i'd be willing to pay the price for even
> if it made scheduling slightly more expensive. (which it doesnt.)

Oh, you've very likely seen my posts and they're filled with numbers :

http://www.xmailserver.org/linux-patches/mss.html
http://www.xmailserver.org/linux-patches/mss-2.html

And the study, being based on cycle counter measures, is a little bit more
accurate than vmstat. And if you had used a cycle counter you'd probably
have seen where the cost of a context switch really is, and you'd probably
have worked on optimizing something else instead of the goodness() loop.



> > with a runqueue balance done at every timer tick.
>
> please prove that this has any measurable performance impact. Right now
> the load-balancer is a bit over-eager trying to balance work - in the -A2
> patch i've relaxed this a bit.
>
> if you check out the code then you'll see that the load-balancer has been
> designed to be scalable and SMP-friendly as well: ie. it does not lock
> other runqueues while it checks the statistics, so it does not create
> interlocking traffic. It only goes and locks other runqueues *if* it finds
> an imbalance. Ie., under this design, it's perfectly OK to run the load
> balancer 100 times a second, because there wont be much overhead unless
> there is heavy runqueue activity.
>
> (in my current codebase i've changed the load-balancer to be called with a
> maximum frequency of 100 - eg. if HZ is set to 1000 then we still call the
> load balancer only 100 times a second.)

Wow, only 100 times, that's pretty good. You're walking through the other
CPUs' data only 100 times per second, probably trashing the running
process's cache image, but overall it's pretty good code.
And please do not tell me to measure something like this. This is the kind
of overhead that you add to a system that will make you wake up one
morning with a bloated system, without even being able to understand where
it comes from. You would never have accepted a patch like this one Ingo,
don't try to sell it as good just because you coded it.



> > Another thing that i'd like you to note is that with the current
> > 2.5.2-preX scheduler the recalculation loop is no longer a problem,
> > because it's done only for running tasks and the lock switch between
> > task_list and runqueue has been removed. [...]
>
> all the comparisons i did and descriptions i made were based on the
> current 2.5.2-pre6/pre7 scheduler. The benchmarks i did were against
> 2.5.2-pre6 that has the latest code. But to make the comparison more fair,
> i'd like to ask you to compare your multiqueue scheduler patch against the
> O(1) scheduler, and post your results here.
>
> and yes, the goodness() loop does matter very much. I have fixed this, and
> unless you can point out practical disadvantages i dont understand your
> point.

What ? Making the scheduler switch 60000000 times per second with 1245
tasks on the run queue ?! Please, Ingo.
I did not seek this. I sought this 4 years ago, when you ( who did
not code the patch ) were on the other side saying that it is useless to
optimize for O(1) because realistic run queues are < 5 / CPU. Do you
remember, Ingo ?
Adding a multi-level priority is no more than 10 minutes of code inside
BMQS, but i thought it was useless and i did not code it. Even a dual queue
+ FIFO pickups :

http://www.xmailserver.org/linux-patches/mss-2.html#dualqueue

can get O(1) w/out code complexity, but i did not code it. Two days ago
George Anzinger proposed that i implement ( inside BMQS ) __exactly__ the
same thing that you implemented/copied ( and that he implemented before
you in his scheduler ), but i told George my opinion about that, and George
can confirm this.



> fact is, scheduler design does matter when things are turning for the
> worse, when your website gets a sudden spike of hits and you still want to
> use the system's resources to handle (and thus decrease) the load and not
> waste it on things like goodness loops which will only make the situation
> worse. There is no system that cannot be overloaded, but there is a big
> difference between handling overloads gracefully and trying to cope with
> load spikes or falling flat on our face spending 90% of our time in the
> scheduler. This is why i think it's a good idea to have an O(1)
> scheduler.

Ingo please, this is too much even for a guy who changed his opinion. 90%
of the time spent inside the scheduler ? You know what makes me angry ?
It's not that you can sell this stuff to guys who know how things work
in a kernel, i'm getting angry because i think about the guys actually
reading your words who really believe that using an O(1) scheduler can
change their life/system problems. And they're going to spend their time
trying this stuff.
Again, the history of our UP scheduler taught us that no one has been able
to make it suffer under realistic, high, not-stupid-benchmark loads.
Look, unlike you ( who seem to want to sell your code ), i simply want a
better Linux scheduler w/out overoptimizations useful only to grab
attention in advertising inserts.
I give you credit for having cleaned up sched.c, but the guidelines for me
are :

1) multi queue/lock
2) current dyn_prio/time_slice implementation ( the one in pre8 )
3) current lookup, that becomes simpler inside the CPU local run queue :

    list_for_each(tmp, head) {
        p = list_entry(tmp, struct task_struct, run_list);
        if ((weight = goodness(p, prev->active_mm)) > c)
            c = weight, next = p;
    }

4) current recalc loop logic
5) _definitely_ better RT code

If we agree here we can continue working on the same code base, otherwise
i'll go ahead with my work ( well, it's more fun than work since my salary
is completely independent of this ).




- Davide



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
To: davi...@xmailserver.org (Davide Libenzi)
Original-Date: 	Sat, 5 Jan 2002 23:41:38 +0000 (GMT)
Cc: mi...@elte.hu (Ingo Molnar), linux-ker...@vger.kernel.org (lkml),
        torva...@transmeta.com (Linus Torvalds),
        a...@lxorguk.ukuu.org.uk (Alan Cox)
In-Reply-To: 
<Pine.LNX.4.40.0201051242080.1607-100000@blue1.dev.mcafeelabs.com> 
from "Davide Libenzi" at Jan 05, 2002 03:04:27 PM
X-Mailer: ELM [version 2.5 PL6]
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Original-Message-Id: <E16N0RW-0001We-00@the-village.bc.nu>
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sat, 5 Jan 2002 23:32:04 GMT
Message-ID: <fa.fvj10vv.nncmh0@ifi.uio.no>
References: <fa.ltqonqv.u78bov@ifi.uio.no>
Lines: 22

> > In fact it's the cr3 switch (movl %0, %%cr3) that accounts for about 30%
> > of the context switch cost. On x86. On other architectures it's often
> > much, much cheaper.
> 
> TLB flushes are expensive everywhere, and you know exactly this and if you

Not every processor is dumb enough to have TLB flush on a context switch.
If you have tags on your tlb/caches it's not a problem.

> Again, the history of our UP scheduler taught us that no one has been able
> to make it suffer under realistic, high, not-stupid-benchmark loads.

Apache under load, DB2, Postgresql, Lotus Domino all show bad behaviour.
(Whether apache, db2, and postgresql want fixing differently is a separate
 argument)

Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!sn-xit-02!supernews.com!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 15:46:14 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
cc: Ingo Molnar <mi...@elte.hu>, lkml <linux-ker...@vger.kernel.org>,
        Linus Torvalds <torva...@transmeta.com>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <E16N0RW-0001We-00@the-village.bc.nu>
Original-Message-ID: <Pine.LNX.4.40.0201051540010.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sat, 5 Jan 2002 23:42:58 GMT
Message-ID: <fa.ltqgnqv.u7kbot@ifi.uio.no>
References: <fa.fvj10vv.nncmh0@ifi.uio.no>
Lines: 40

On Sat, 5 Jan 2002, Alan Cox wrote:

> > > In fact it's the cr3 switch (movl %0, %%cr3) that accounts for about 30%
> > > of the context switch cost. On x86. On other architectures it's often
> > > much, much cheaper.
> >
> > TLB flushes are expensive everywhere, and you know exactly this and if you
>
> Not every processor is dumb enough to have TLB flush on a context switch.
> If you have tags on your tlb/caches it's not a problem.

Yep, true: 20% of CPUs are dumb, but that 20% of CPUs runs 95% of the Linux world.



> > Again, the history of our UP scheduler taught us that no one has been able
> > to make it suffer under realistic, high, not-stupid-benchmark loads.
>
> Apache under load, DB2, Postgresql, Lotus Domino all show bad behaviour.
> (Whether apache, db2, and postgresql want fixing differently is a separate
>  argument)

Alan, next to my box i've a dual-CPU appliance that is running an MTA that
uses pthreads and that is routing ( under test ) about 600000 messages / hour.
Its rq load is about 9-10 and the cs rate is about 8000-9000.
You know what the scheduler weight is ? Barely 8-10% with the 2.4.17
scheduler. It drops to nothing using BMQS w/out any O(1) mirage.




- Davide



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
X-Authentication-Warning: penguin.transmeta.com: torvalds owned process doing -bs
Original-Date: 	Sat, 5 Jan 2002 16:47:58 -0800 (PST)
From: Linus Torvalds <torva...@transmeta.com>
To: Davide Libenzi <davi...@xmailserver.org>
cc: Ingo Molnar <mi...@elte.hu>, lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.40.0201051242080.1607-100000@blue1.dev.mcafeelabs.com>
Original-Message-ID: <Pine.LNX.4.33.0201051643050.24370-100000@penguin.transmeta.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 00:52:19 GMT
Message-ID: <fa.n6lf6dv.engggb@ifi.uio.no>
References: <fa.ltqonqv.u78bov@ifi.uio.no>
Lines: 31


On Sat, 5 Jan 2002, Davide Libenzi wrote:
>
> No Ingo, the fact that you coded the patch this time does not really change
> the workloads, once you've a per-cpu run queue and lock. The thing that
> makes big servers suffer is the common queue plus the cache coherency
> traffic due to the common lock.

What do the per-CPU queue patches look like? I agree with Davide that it
seems much more sensible from a scalability standpoint to allow O(n)  (or
whatever) but with local queues. That should also very naturally give CPU
affinity ;)

The problem with local queues is how to sanely break the CPU affinity
when it needs breaking. Which is not necessarily all that often, but
clearly does need to happen.

It would be nice to have the notion of a cluster scheduler with a clear
"transfer to another CPU" operation, and actively trying to minimize the
number of transfers.

What are those algorithms like? This area must have tons of scientific
papers.

		Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!
newsfeed.media.kyoto-u.ac.jp!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 03:49:26 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Davide Libenzi <davi...@xmailserver.org>
Cc: lkml <linux-ker...@vger.kernel.org>,
        Linus Torvalds <torva...@transmeta.com>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.40.0201051242080.1607-100000@blue1.dev.mcafeelabs.com>
Original-Message-ID: <Pine.LNX.4.33.0201060227500.2889-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 00:53:46 GMT
Message-ID: <fa.o495g0v.ule5hb@ifi.uio.no>
References: <fa.ltqonqv.u78bov@ifi.uio.no>
Lines: 164


On Sat, 5 Jan 2002, Davide Libenzi wrote:

> In even _high_realistic_ workloads on these servers the scheduler
> impact is never more than 5-15%, and by simply splitting the queue/lock
> you get a 30- to 60-fold improvement ( see picture ), which makes the
> scheduler impact go down to 0.2-0.3%. Why do you need to
> overoptimize for 0.2% ?

the chat simulator simulates the Lotus workload pretty well. There are lots
of other workloads that create similar situations. Davide, i honestly think
that you are sticking your head in the sand if you say that many
processes on the runqueue do not happen or do not matter. I used to say
exactly what you say now, and i've seen enough bad scheduling behavior to
have started this O(1) project.

> > In fact it's the cr3 switch (movl %0, %%cr3) that accounts for about 30%
> > of the context switch cost. On x86. On other architectures it's often
> > much, much cheaper.
>
> TLB flushes are expensive everywhere, [...]

you dont know what you are talking about. Most modern CPUs dont need a TLB
flush on context switch, they have tagged caches. And yes, some day we
might even see x86-compatible CPUs that have sane internals.

> and you know exactly this, and if you had used a cycle counter to analyze
> the scheduler instead of silly benchmarks you would probably know more
> about what matters inside a context switch.

How do you explain this then:

2.5.2-pre9 vanilla:

 [mingo@a l]$ while true; do ./lat_ctx -s 0 2 2>&1 | grep ^2; done
 2 1.64
 2 1.62

2.5.2-pre9 -B1 O(1) scheduler patch:

 [mingo@a l]$ while true; do ./lat_ctx -s 0 2 2>&1 | grep ^2; done
 2 1.57
 2 1.55

ie. UP, 2-task context switch times became slightly faster. By your
argument it should be significantly slower or worse.

with 10 CPU-intensive tasks running in the background, 2.5.2-pre9 + B1 O(1)
patch:

 [root@a l]# nice -n -100 ./lat_ctx -s 0 2
 2 1.57

(ie. the very same context-switch performance - not unexpected from an
O(1) scheduler). While the vanilla scheduler:

 [root@a l]# nice -n -100 ./lat_ctx -s 0 2
 2 2.20

ie. 33% slower already. With 50 background tasks, it's:

 [root@a l]# nice -n -100 ./lat_ctx -s 0 2
 2 17.49

ie. it's more than 10 times slower already. And the O(1) scheduler still
goes a steady 1.57 microseconds.

this is called an exponential breakdown in system characteristics.

If you say that i do not care about and do not count every cycle spent in
the scheduler then you do not know me.

> Ingo your RT code sucks [.. 10 more lines of patch-bashing snipped ..]

please answer the technical issues i raised. I've taken a quick peek at
your patch, and you do not appear to solve the IPI asynchrony problem i
raised. Ie. your scheduler might end up running the wrong RT tasks on the
wrong CPUs.

> I'm sorry but you set p->run_timestamp to jiffies when the task is run
> and you make curr_sample = j / HZ when it stops running. If you divide
> by HZ you'll obtain seconds. [...]
[...]
> Now, if you divide by HZ, what do you obtain ?
[...]
> delta will be zero ~97% of the time and one the remaining 3% ( because of
> crossing the HZ scale ). Now this is not my code, but do you understand
> your code ? Do you understand why it's broken ?

i obtain a 'history slot index' into the history array by dividing by HZ.
The real statistics are jiffies-accurate, of course. The 'load history' of
the process is stored in a 4-entry (can be redefined) history array, which
is cycled around so that we have a history. The code in essence does an
integral of load over time, and for that one has to use some sort of
memory buffer.
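
[ A minimal, standalone sketch of the history-array idea described above,
  not the actual patch code; the names (HISTORY_SLOTS, struct task_load,
  ...) and the single-slot recycling are assumptions for illustration
  only. ]

    #include <stdio.h>

    #define HZ            100
    #define HISTORY_SLOTS 4     /* "4-entry (can be redefined)" history */

    struct task_load {
            unsigned long last_slot;
            unsigned long hist[HISTORY_SLOTS];  /* runnable jiffies per slot */
    };

    /* account 'runnable' jiffies of runqueue time at time 'jiffies' */
    static void task_load_update(struct task_load *l, unsigned long jiffies,
                                 unsigned long runnable)
    {
            unsigned long slot = (jiffies / HZ) % HISTORY_SLOTS;

            if (slot != l->last_slot) {     /* entered a new one-second slot */
                    l->hist[slot] = 0;      /* recycle the oldest entry */
                    l->last_slot = slot;
            }
            l->hist[slot] += runnable;
    }

    /* crude integral of load over the whole history window */
    static unsigned long task_load_sum(const struct task_load *l)
    {
            unsigned long sum = 0;
            int i;

            for (i = 0; i < HISTORY_SLOTS; i++)
                    sum += l->hist[i];
            return sum;
    }

    int main(void)
    {
            struct task_load l = { 0, { 0 } };

            task_load_update(&l, 250, 5);   /* runnable 5 jiffies around t=250 */
            task_load_update(&l, 380, 12);  /* runnable 12 jiffies around t=380 */
            printf("load over window: %lu jiffies\n", task_load_sum(&l));
            return 0;
    }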

I'd again like to ask you to read and understand the code, not bash it
mindlessly.

> > (in my current codebase i've changed the load-balancer to be called with a
> > maximum frequency of 100 - eg. if HZ is set to 1000 then we still call the
> > load balancer only 100 times a second.)
>
> Wow, only 100 times, it's pretty good. You're walking the other CPUs'
> data only 100 times per second, probably trashing the running
> process's cache image, but overall it's pretty good code.

Davide, please. If you take a look at the code then you'll notice that the
prime targets for load-balancing are expired processes, ie. processes which
wont run on that CPU for some time. And if you had bothered to try my
patch then you'd have seen that CPU-bound tasks do not get misbalanced.
The other reason to call the load balancer rather frequently is to
distribute CPU-bound tasks between CPUs fairly.

but this shouldnt matter in your world too much, since long runqueues do
not exist or matter, right? ;-)

> And please do not tell me to measure something like this. [...]

i for one will keep measuring things; the proof of the pudding is in the eating.

> Adding a multi-level priority is no more than 10 minutes of code
> inside BMQS, but i thought it was useless and i did not code it. Even a
> dual queue + FIFO pickups :
>
> http://www.xmailserver.org/linux-patches/mss-2.html#dualqueue

what i do is something pretty different. The dual array i use is to gather
tasks that have expired timeslices. The goal is to distribute timeslices
fairly, ie. a task that has an expired timeslice cannot run until all
tasks expire their timeslices in the active array. All tasks of all
priorities.
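
[ A rough sketch of this active/expired dual-array scheme, not the patch
  itself; the structures, the SLICE value and the singly-linked
  per-priority lists below are simplifications for illustration. ]

    #include <assert.h>

    #define MAX_PRIO 64
    #define SLICE    10     /* illustrative timeslice, in ticks */

    struct task { struct task *next; int prio; int time_slice; };

    struct prio_array {
            int nr_active;
            struct task *head[MAX_PRIO];    /* one simple list per priority */
    };

    struct runqueue {
            struct prio_array arrays[2];
            struct prio_array *active, *expired;
    };

    static void enqueue(struct prio_array *a, struct task *t)
    {
            t->next = a->head[t->prio];
            a->head[t->prio] = t;
            a->nr_active++;
    }

    /* pick and remove the task at the head of priority list 'prio' */
    static struct task *dequeue(struct prio_array *a, int prio)
    {
            struct task *t = a->head[prio];

            a->head[prio] = t->next;
            a->nr_active--;
            return t;
    }

    /* 'p' was taken off the active array to run and has now used up its
     * timeslice: recharge it, park it on the expired array, and swap the
     * arrays once nobody is left in the active one */
    static void timeslice_expired(struct runqueue *rq, struct task *p)
    {
            p->time_slice = SLICE;
            enqueue(rq->expired, p);

            if (rq->active->nr_active == 0) {
                    struct prio_array *tmp = rq->active;

                    rq->active = rq->expired;
                    rq->expired = tmp;
            }
    }

    int main(void)
    {
            static struct runqueue rq;
            struct task a = { 0, 3, SLICE }, b = { 0, 5, SLICE };

            rq.active = &rq.arrays[0];
            rq.expired = &rq.arrays[1];
            enqueue(rq.active, &a);
            enqueue(rq.active, &b);

            /* both tasks burn their slices; only then do the arrays swap,
             * so neither can run twice before the other has had its turn */
            timeslice_expired(&rq, dequeue(rq.active, a.prio));
            timeslice_expired(&rq, dequeue(rq.active, b.prio));
            assert(rq.active->nr_active == 2);
            return 0;
    }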

just for the record, i had this idea years ago (in '96 or '97) but never
implemented it past some very embryonic state (which i sent to DaveM and
Linus [perhaps Alan too] as a matter of fact - back then i called it the
'matrix scheduler'), until i faced the chat simulator workload recently.
That workload the current scheduler (or simple multiqueue schedulers)
mishandles badly. Please give me the URL that points to a patch that does
the same thing to solve the fair timeslice distribution problem.

> [...] You know what makes me angry ? It's not that you can sell this
> stuff to guys who know how things work in a kernel, i'm getting angry
> because i think about the guys actually reading your words who really
> believe that using an O(1) scheduler can change their life/system
> problems. And they're going to spend their time trying this stuff. [...]

Will the O(1) patch matter for systems that have a 0-length runqueue most
of the time? Not in the least. [the SMP affinity improvements might help
though, but similar things are done in other patches as well.] But this
scheduler was the first one i ever saw producing sane results for the
chat simulator, and while my patch might not be merged into the mainline,
it will always be an option for those 'hopeless' workloads, or university
servers where hundreds of CPU-bound tasks are executing on the same poor
overloaded box.

	Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!sn-xit-02!supernews.com!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 03:57:41 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Linus Torvalds <torva...@transmeta.com>
Cc: Davide Libenzi <davi...@xmailserver.org>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201051643050.24370-100000@penguin.transmeta.com>
Original-Message-ID: <Pine.LNX.4.33.0201060349380.3424-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 01:01:48 GMT
Message-ID: <fa.o3pff0v.t546h8@ifi.uio.no>
References: <fa.n6lf6dv.engggb@ifi.uio.no>
Lines: 55


On Sat, 5 Jan 2002, Linus Torvalds wrote:

> What do the per-CPU queue patches look like? I agree with Davide that
> it seems much more sensible from a scalability standpoint to allow
> O(n)  (or whatever) but with local queues. That should also very
> naturally give CPU affinity ;)

(if Davide's post gave you the impression that my patch doesnt do per-CPU
queues then i'd like to point out that it does so. Per-CPU runqueues are a
must for good performance on 8-way boxes and bigger. It's just the actual
implementation of the 'per CPU queue' that is O(1).)

> The problem with local queues is how to sanely break the CPU affinity
> when it needs breaking. Which is not necessarily all that often, but
> clearly does need to happen.

yes. The rule i used is this: 'if the runqueue of another CPU is more than
10% longer than our own runqueue then we pull over 5% of its tasks'.

we also need to break affinity to get basic fairness - otherwise we might
end up running 10 CPU-bound tasks on CPU1 each using 10% of CPU time, and
a single CPU-bound task on CPU2, using up 100% of CPU time.
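
[ The arithmetic of that rule as a tiny sketch, under the assumption that
  "more than 10% longer" and "pull over 5% of tasks" translate directly
  into the integer checks below; picking the busiest CPU and migrating the
  tasks themselves is left out. ]

    #include <stdio.h>

    struct rq_stats { unsigned int nr_running; };

    /* how many tasks should this CPU pull from 'busiest', if any? */
    static unsigned int tasks_to_pull(const struct rq_stats *self,
                                      const struct rq_stats *busiest)
    {
            /* not more than 10% longer than our own queue: leave it alone */
            if (busiest->nr_running * 10 <= self->nr_running * 11)
                    return 0;

            /* pull roughly 5% of the busiest queue, at least one task */
            return busiest->nr_running / 20 ? busiest->nr_running / 20 : 1;
    }

    int main(void)
    {
            struct rq_stats self = { 40 }, busiest = { 60 };

            printf("pull %u tasks\n", tasks_to_pull(&self, &busiest)); /* 3 */
            return 0;
    }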

> It would be nice to have the notion of a cluster scheduler with a
> clear "transfer to another CPU" operation, and actively trying to
> minimize the number of transfers.

i'm doing this in essence in the set_cpus_allowed() function; it transfers
a task from one CPU's queue to another one.

there are two task migration types:

- a CPU 'pulls' tasks. My patch does this whenever a CPU goes idle, and
  does it periodically from the timer tick. This is the load_balance()
  code. It looks at runqueue lengths and tries to pick a task from another
  CPU that wont have a chance to run there in the near future (and would
  thus lose cache footprint anyway). The load balancer is what should be
  made aware of the caching hierarchy, NUMA, hyperthreading and the rest.

- a CPU might 'push' tasks. Right now my patch does this only when a
  process becomes 'illegal' on a CPU, ie. the cpus_allowed bitmask
  excludes the currently running CPU.

the migration itself needs slightly more complex locking, but it's
relatively straightforward. Otherwise the per-CPU runqueue locks stay on
those CPUs nicely and scalability is very good.

	Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
X-Authentication-Warning: penguin.transmeta.com: torvalds owned process doing -bs
Original-Date: 	Sat, 5 Jan 2002 17:27:33 -0800 (PST)
From: Linus Torvalds <torva...@transmeta.com>
To: Ingo Molnar <mi...@elte.hu>
cc: Davide Libenzi <davi...@xmailserver.org>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201060349380.3424-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.33.0201051722540.24370-100000@penguin.transmeta.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 01:30:01 GMT
Message-ID: <fa.n95f5tv.c7gh0a@ifi.uio.no>
References: <fa.o3pff0v.t546h8@ifi.uio.no>
Lines: 33


On Sun, 6 Jan 2002, Ingo Molnar wrote:
>
> (if Davide's post gave you the impression that my patch doesnt do per-CPU
> queues then i'd like to point out that it does so. Per-CPU runqueues are a
> must for good performance on 8-way boxes and bigger. It's just the actual
> implementation of the 'per CPU queue' that is O(1).)

Ahh, ok. No worries then.

At that point I don't think O(1) matters all that much, but it certainly
won't hurt. UNLESS it causes bad choices to be made. Which we can only
guess at right now.

Just out of interest, where have the bugs crept up? I think we could just
try out the thing and see what's up, but I know that at least some
versions of bash are buggy and _will_ show problems due to the "run child
first" behaviour. Remember:  we actually tried that for a while in 2.4.x.

[ In 2.5.x it's fine to break broken programs, though, so this isn't that
  much of an issue any more. From the reports I've seen the thing has
  shown more bugs than that, though. I'd happily test a buggy scheduler,
  but I don't want to mix bio problems _and_ scheduler problems, so I'm
  not ready to switch over until either the scheduler patch looks stable,
  or the bio stuff has finalized more. ]

		Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!sn-xit-02!supernews.com!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 17:45:28 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Linus Torvalds <torva...@transmeta.com>
cc: Ingo Molnar <mi...@elte.hu>, lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201051722540.24370-100000@penguin.transmeta.com>
Original-Message-ID: <Pine.LNX.4.40.0201051740220.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 01:42:53 GMT
Message-ID: <fa.luqmnqv.v7mbot@ifi.uio.no>
References: <fa.n95f5tv.c7gh0a@ifi.uio.no>
Lines: 38

On Sat, 5 Jan 2002, Linus Torvalds wrote:

>
> On Sun, 6 Jan 2002, Ingo Molnar wrote:
> >
> > (if Davide's post gave you the impression that my patch doesnt do per-CPU
> > queues then i'd like to point out that it does so. Per-CPU runqueues are a
> > must for good performance on 8-way boxes and bigger. It's just the actual
> > implementation of the 'per CPU queue' that is O(1).)
>
> Ahh, ok. No worries then.
>
> At that point I don't think O(1) matters all that much, but it certainly
> won't hurt. UNLESS it causes bad choices to be made. Which we can only
> guess at right now.

You're the man :)
Ok, this stops all the talk, Ingo.
Can you send me a link, there're different things to be fixed IMHO.
The load estimator can easily use the current dyn_prio/time_slice by
simplifying things a _lot_
I would suggest a lower number of queues, 32 is way more than necessary.
The rt code _must_ be better, it can be easily done by a smarter wakeup.
There's no need to acquire the whole lock set, at least w/out a checkpoint
solution ( look at BMQS ) that prevents multiple failing lookups inside
the RT queue.




- Davide


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 04:41:05 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Linus Torvalds <torva...@transmeta.com>
Cc: Davide Libenzi <davi...@xmailserver.org>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201051722540.24370-100000@penguin.transmeta.com>
Original-Message-ID: <Pine.LNX.4.33.0201060427590.4730-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 01:45:22 GMT
Message-ID: <fa.o59peov.tle792@ifi.uio.no>
References: <fa.n95f5tv.c7gh0a@ifi.uio.no>
Lines: 68


On Sat, 5 Jan 2002, Linus Torvalds wrote:

> At that point I don't think O(1) matters all that much, but it
> certainly won't hurt. UNLESS it causes bad choices to be made. Which
> we can only guess at right now.

i have an escape path for the MM-goodness issue: we can make it
O(nr_running_threads) if need be, by registering MM users into a per-MM
'MM runqueue', and scanning this runqueue for potentially better
candidates if it's non-empty. In the process case this falls back to O(1),
in the threaded case it would be scanning the whole (local) runqueue in
essence.
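
[ A very rough guess at what such a per-MM 'MM runqueue' could look like;
  none of these names or structures come from the patch, and removal on
  dequeue is omitted. Each mm keeps a list of its runnable users, so
  looking for a candidate that avoids an mm switch is O(1) for plain
  processes and O(threads of that mm) at worst. ]

    #include <stdio.h>

    struct mm;

    struct task {
            struct task *mm_next;   /* linkage in the per-mm runnable list */
            struct mm   *mm;
            const char  *name;
    };

    struct mm {
            struct task *runnable;  /* runnable tasks sharing this mm */
    };

    /* called when a task becomes runnable */
    static void mm_rq_add(struct task *t)
    {
            t->mm_next = t->mm->runnable;
            t->mm->runnable = t;
    }

    /* another runnable task sharing prev's mm would avoid the cr3 switch /
     * TLB flush that started this whole subthread */
    static struct task *mm_rq_candidate(struct task *prev)
    {
            struct task *t;

            for (t = prev->mm->runnable; t; t = t->mm_next)
                    if (t != prev)
                            return t;
            return 0;       /* empty for plain processes: plain O(1) pick */
    }

    int main(void)
    {
            static struct mm shared;
            static struct task t1 = { 0, &shared, "thread-1" };
            static struct task t2 = { 0, &shared, "thread-2" };

            mm_rq_add(&t1);
            mm_rq_add(&t2);
            printf("after %s, prefer %s\n", t1.name, mm_rq_candidate(&t1)->name);
            return 0;
    }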

And George Anzinger has a nice idea to help those platforms which have
slow bitsearch functions: we can keep a floating pointer to the
highest-priority queue, which is set to NULL when the last task of a
priority level is used up and raised when a higher-priority task is added.
This pointer will be correct most of the time, and we can fall back to
the bitsearch if it's NULL.
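
[ A small sketch of that hint as described here, not George Anzinger's
  code; an index is used instead of a pointer, and the bitmap layout is
  made up. The hint is dropped whenever its level empties, so the slow
  bitsearch only runs in that case. ]

    #include <stdio.h>

    #define MAX_PRIO 64
    #define NO_HINT  (-1)

    /* lower index == higher priority; -1 means "do the real bitsearch" */
    static int top_prio_hint = NO_HINT;

    static unsigned long prio_bitmap[MAX_PRIO / (8 * sizeof(unsigned long)) + 1];

    /* the slow path the hint is meant to avoid most of the time */
    static int bitsearch(void)
    {
            int i;

            for (i = 0; i < MAX_PRIO; i++)
                    if (prio_bitmap[i / (8 * sizeof(unsigned long))] &
                        (1UL << (i % (8 * sizeof(unsigned long)))))
                            return i;
            return NO_HINT;
    }

    static void level_became_nonempty(int prio)
    {
            prio_bitmap[prio / (8 * sizeof(unsigned long))] |=
                    1UL << (prio % (8 * sizeof(unsigned long)));
            if (top_prio_hint == NO_HINT || prio < top_prio_hint)
                    top_prio_hint = prio;   /* a higher priority appeared */
    }

    static void level_became_empty(int prio)
    {
            prio_bitmap[prio / (8 * sizeof(unsigned long))] &=
                    ~(1UL << (prio % (8 * sizeof(unsigned long))));
            if (prio == top_prio_hint)
                    top_prio_hint = NO_HINT;  /* don't guess, fall back */
    }

    static int pick_top_prio(void)
    {
            if (top_prio_hint != NO_HINT)
                    return top_prio_hint;   /* correct "most of the time" */
            return top_prio_hint = bitsearch();
    }

    int main(void)
    {
            level_became_nonempty(20);
            level_became_nonempty(5);
            printf("top prio %d\n", pick_top_prio()); /* 5, from the hint */
            level_became_empty(5);
            printf("top prio %d\n", pick_top_prio()); /* 20, via bitsearch */
            return 0;
    }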

so while i think that the O(1) design indeed stretches things a bit and
reduces our moving space, it's worth a try, i think. Switching an existing
per-CPU queue design for a full-search design shouldnt be too hard. The
load-balancing parts wont be lost whichever path we choose later on, and
that is the most important bit i think.

> Just out of interest, where have the bugs crept up? I think we could
> just try out the thing and see what's up, but I know that at least
> some versions of bash are buggy and _will_ show problems due to the
> "run child first" behaviour. Remember:  we actually tried that for a
> while in 2.4.x.

i've disabled the 'run child first' behavior in the latest patch at:

        http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.2-B1.patch

i've been hunting the crash bug all day, but i cannot reproduce it, which
appears to show it's either something really subtle or something really
stupid. But the patch certainly does not have any of the scarier
strace/ptrace related bug scenarios or 'two CPUs run the same task' bugs;
i've put lots of testing into that part. The only remaining crash in -B1
is a clear NULL pointer dereference in wakeup(), which is a direct bug,
not some side-effect; i hope to be able to find it soon.

right now there are a number of very helpful people who are experiencing
those crashes and are ready to run bzImages i compile for them (so that
the compiler and build environment are out of the picture) - Pawel Kot for
example. (thanks Pawel!)

> [ In 2.5.x it's fine to break broken programs, though, so this isn't that
>   much of an issue any more. From the reports I've seen the thing has
>   shown more bugs than that, though. I'd happily test a buggy scheduler,
>   but I don't want to mix bio problems _and_ scheduler problems, so I'm
>   not ready to switch over until either the scheduler patch looks stable,
>   or the bio stuff has finalized more. ]

i've disabled it to reduce the number of variables - we can still try and
enable it later on once things have proven to be stable.

	Ingo


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 18:02:05 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Ingo Molnar <mi...@elte.hu>
cc: Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201060427590.4730-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.40.0201051753090.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 01:58:46 GMT
Message-ID: <fa.ltr4o2v.u70bgu@ifi.uio.no>
References: <fa.o59peov.tle792@ifi.uio.no>
Lines: 27

On Sun, 6 Jan 2002, Ingo Molnar wrote:

> And George Anzinger has a nice idea to help those platforms which have
> slow bitsearch functions: we can keep a floating pointer to the
> highest-priority queue, which is set to NULL when the last task of a
> priority level is used up and raised when a higher-priority task is added.
> This pointer will be correct most of the time, and we can fall back to
> the bitsearch if it's NULL.

Ingo, you don't need that many queues, 32 are more than sufficient.
If you look at the distribution you'll see that only the very first ( top )
queues matter ( for interactive feel ), while lower ones
can very easily tolerate a FIFO pickup w/out bad feelings.




- Davide




-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!sn-xit-02!supernews.com!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 04:55:18 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Davide Libenzi <davi...@xmailserver.org>
Cc: Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.40.0201051740220.1607-100000@blue1.dev.mcafeelabs.com>
Original-Message-ID: <Pine.LNX.4.33.0201060441540.4730-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:00:22 GMT
Message-ID: <fa.o59ff8v.tlk6p4@ifi.uio.no>
References: <fa.luqmnqv.v7mbot@ifi.uio.no>
Lines: 76


On Sat, 5 Jan 2002, Davide Libenzi wrote:

> Can you send me a link, there're different things to be fixed IMHO.

my latest stuff is at:

   http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.2-B1.patch

> The load estimator can easily use the current dyn_prio/time_slice by
> simplifying things a _lot_

i have experimented with a very high number of variants. I estimated sleep
times, i estimated run times, i estimated runqueue times. Note that the
current estimator measures time spent on the *runqueue*, not time spent on
the CPU. This means that in an overload spike we have an automatically
increasing penalization of tasks that want to run. While i'm not too
emotional about eg. the RT bits, this part of the scheduler is pretty
critical to handle high load smoothly.

the integration effect of the estimator was written to be fast, and it's
fast. Also note that most of the time we do not even call the
estimator:

        if (p->run_timestamp == jiffies)
                goto enqueue;

ie. in high frequency wakeup situations we'll call into the estimator only
once every jiffy.

> I would suggest a lower number of queues, 32 is way more than necessary.

the reason i used more queues is the 'penalizing' effect of the per-task
load-average estimator. We want to have some priority room these CPU-bound
tasks can escape into, without hurting some of the interactive jobs that
might get a few penalties here and there but still dont reach the maximum
where all the CPU hogs live. (this is p->prio == 63 right now.)

also, i wanted to map all the 39 nice values straight into the priority
space, just to be sure. Some people *might* rely on finegrained priorities
still.

there is one additional thing i wanted to do to reduce the effect of the
64 queues: instead of using a straight doubly-linked list a'la list_t, we
can do a head-pointer that cuts the queue size into half, and reduces
cache footprint of the scheduler data structures as well. But i did not
want to add this until all bugs are fixed, this is an invariant
cache-footprint optimization.

> The rt code _must_ be better, it can be easily done by a smarter
> wakeup. There's no need to acquire the whole lock set, at least w/out
> a checkpoint solution ( look at BMQS ) that prevents multiple failing
> lookups inside the RT queue.

regarding SCHED_OTHER, i have intentionally avoided smart wakeups, pushed
the balancing logic more into the load balancer.

load spikes and big statistical fluctuations of runqueue lengths we should
not care much about - they are spikes we cannot flatten anyway, they can
be gone before the task has finished flushing over its data set to the
other CPU.

regarding RT tasks, i did not want to add something that i know is broken,
even if rt_lock() is arguably heavyweight. I've seen the 'IPI in flight
misses the real picture' situation a number of times and if we want to do
RT scheduling seriously and accurately on SMP then we should give a
perfect solution to it. Would you like me to explain the 'IPI in flight'
problem in detail, or do you agree that it's a problem?

	Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
logbridge.uoregon.edu!news.net.uni-c.dk!uninett.no!uio.no!
nntp.uio.no!ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
To: davi...@xmailserver.org (Davide Libenzi)
Original-Date: 	Sun, 6 Jan 2002 02:13:32 +0000 (GMT)
Cc: mi...@elte.hu (Ingo Molnar), torva...@transmeta.com (Linus Torvalds),
        linux-ker...@vger.kernel.org (lkml),
        a...@lxorguk.ukuu.org.uk (Alan Cox)
In-Reply-To: 
<Pine.LNX.4.40.0201051753090.1607-100000@blue1.dev.mcafeelabs.com> 
from "Davide Libenzi" at Jan 05, 2002 06:02:05 PM
X-Mailer: ELM [version 2.5 PL6]
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Original-Message-Id: <E16N2oW-00021c-00@the-village.bc.nu>
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:03:59 GMT
Message-ID: <fa.fdkn0vv.15kqmh3@ifi.uio.no>
References: <fa.ltr4o2v.u70bgu@ifi.uio.no>
Lines: 14

> Ingo, you don't need that many queues, 32 are more than sufficient.
> If you look at the distribution you'll see that only the very first ( top )
> queues matter ( for interactive feel ), while lower ones
> can very easily tolerate a FIFO pickup w/out bad feelings.

64 queues costs a tiny amount more than 32 queues. If you can get it down
to eight or nine queues with no actual cost (especially for non realtime queues)
then it represents a huge win since an 8bit ffz can be done by lookup table
and that is fast on all processors
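
[ A sketch of the byte-wide lookup-table idea: precompute, for every byte
  value, the index of its lowest set bit, so picking the highest-priority
  non-empty queue out of eight is a single table lookup. The table below
  is illustrative, not taken from any kernel patch. ]

    #include <stdio.h>

    static unsigned char first_bit[256];

    static void init_first_bit(void)
    {
            int v, b;

            first_bit[0] = 8;       /* sentinel: no queue is non-empty */
            for (v = 1; v < 256; v++) {
                    for (b = 0; !(v & (1 << b)); b++)
                            ;
                    first_bit[v] = (unsigned char)b;
            }
    }

    int main(void)
    {
            unsigned char qmask = (1 << 2) | (1 << 5);  /* queues 2 and 5 busy */

            init_first_bit();
            printf("pick queue %u\n", first_bit[qmask]);   /* 2 */
            return 0;
    }
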
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 05:01:12 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Davide Libenzi <davi...@xmailserver.org>
Cc: Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.40.0201051753090.1607-100000@blue1.dev.mcafeelabs.com>
Original-Message-ID: <Pine.LNX.4.33.0201060455560.4730-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:05:29 GMT
Message-ID: <fa.o59jfgv.tlg6h0@ifi.uio.no>
References: <fa.ltr4o2v.u70bgu@ifi.uio.no>
Lines: 23


On Sat, 5 Jan 2002, Davide Libenzi wrote:

> Ingo, you don't need that many queues, 32 are more than sufficient. If
> you look at the distribution you'll see that only the very first ( top )
> queues matter ( for interactive feel ), while lower
> ones can very easily tolerate a FIFO pickup w/out bad feelings.

I have no problem with using 32 queues as long as we keep the code
flexible enough to increase the queue length if needed. I think we should
make it flexible and not restrict ourselves to something like word size.
(with this i'm not suggesting that you meant this, i'm just trying to make
sure.) I saw really good (behavioral, latency, not performance) effects of
the longer queue under high load, but this must be weighed against the
cache footprint of the queues.

	Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
logbridge.uoregon.edu!news.net.uni-c.dk!uninett.no!uio.no!
nntp.uio.no!ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 18:12:59 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
cc: Ingo Molnar <mi...@elte.hu>, Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <E16N2oW-00021c-00@the-village.bc.nu>
Original-Message-ID: <Pine.LNX.4.40.0201051812080.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:09:31 GMT
Message-ID: <fa.ltr4n2v.u7sagv@ifi.uio.no>
References: <fa.fdkn0vv.15kqmh3@ifi.uio.no>
Lines: 27

On Sun, 6 Jan 2002, Alan Cox wrote:

> > Ingo, you don't need that many queues, 32 are more than sufficient.
> > If you look at the distribution you'll see that only the very first ( top )
> > queues matter ( for interactive feel ), while lower ones
> > can very easily tolerate a FIFO pickup w/out bad feelings.
>
> 64 queues costs a tiny amount more than 32 queues. If you can get it down
> to eight or nine queues with no actual cost (especially for non realtime queues)
> then it represents a huge win since an 8bit ffz can be done by lookup table
> and that is fast on all processors

It's here that i want to go, but i'd like to do it gradually :)

unsigned char first_bit[255];




- Davide


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
To: davi...@xmailserver.org (Davide Libenzi)
Original-Date: 	Sun, 6 Jan 2002 02:20:47 +0000 (GMT)
Cc: a...@lxorguk.ukuu.org.uk (Alan Cox), mi...@elte.hu (Ingo Molnar),
        torva...@transmeta.com (Linus Torvalds),
        linux-ker...@vger.kernel.org (lkml)
In-Reply-To: 
<Pine.LNX.4.40.0201051812080.1607-100000@blue1.dev.mcafeelabs.com> 
from "Davide Libenzi" at Jan 05, 2002 06:12:59 PM
X-Mailer: ELM [version 2.5 PL6]
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Original-Message-Id: <E16N2vX-00023B-00@the-village.bc.nu>
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:11:42 GMT
Message-ID: <fa.fej317v.14nal93@ifi.uio.no>
References: <fa.ltr4n2v.u7sagv@ifi.uio.no>
Lines: 13

> > then it represents a huge win since an 8bit ffz can be done by lookup table
> > and that is fast on all processors
> 
> It's here that i want to go, but i'd like to do it gradually :)
> unsigned char first_bit[255];

Make it [256] and you can do 9 queues since the idle task will always
be queued...
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 05:07:47 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
Cc: Davide Libenzi <davi...@xmailserver.org>,
        Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <E16N2oW-00021c-00@the-village.bc.nu>
Original-Message-ID: <Pine.LNX.4.33.0201060501560.5193-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:13:00 GMT
Message-ID: <fa.o5p9fov.t5u597@ifi.uio.no>
References: <fa.fdkn0vv.15kqmh3@ifi.uio.no>
Lines: 40


On Sun, 6 Jan 2002, Alan Cox wrote:

> 64 queues costs a tiny amount more than 32 queues. If you can get it
> down to eight or nine queues with no actual cost (especially for non
> realtime queues) then it represents a huge win since an 8bit ffz can
> be done by lookup table and that is fast on all processors

i'm afraid that while 32 might work, 8 will definitely not be enough. In
the interactivity-detection scheme i added it's important for interactive
tasks to have some room (in terms of priority levels) to go up without
hitting the levels of the true CPU abusers.

we can do 32-bit ffz by doing 4x 8-bit ffz's though:

	if (likely(byte[0]))
		return ffz8[byte[0]];
	else if (byte[1])
		return ffz8[byte[1]];
	else if (byte[2])
		return ffz8[byte[2]];
	else if (byte[3])
		return ffz8[byte[3]];
	else
		return -1;

and while this is still 4 branches, it's better than a loop of 32. But i
also think that George Anzinger's idea works well to reduce the cost
of bitsearching. Or those platforms that decide to do so could search the
array directly as well - if it's 32 queues then it's a cache footprint of
4 cachelines, which can be searched directly without any problem.

	Ingo


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 05:08:36 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Linus Torvalds <torva...@transmeta.com>
Cc: Davide Libenzi <davi...@xmailserver.org>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201060427590.4730-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.33.0201060508110.5193-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:14:24 GMT
Message-ID: <fa.o3ovfov.v5g59e@ifi.uio.no>
References: <fa.o59peov.tle792@ifi.uio.no>
Lines: 12


doh, the -B1 patch was short by 10k, -B2 is OK:

   http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.2-B2.patch

	Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 18:16:21 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Ingo Molnar <mi...@elte.hu>
cc: Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201060441540.4730-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.40.0201051804580.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:16:03 GMT
Message-ID: <fa.m0b4mqv.snsaop@ifi.uio.no>
References: <fa.o59ff8v.tlk6p4@ifi.uio.no>
Lines: 120

On Sun, 6 Jan 2002, Ingo Molnar wrote:

>
> On Sat, 5 Jan 2002, Davide Libenzi wrote:
>
> > Can you send me a link, there're different things to be fixed IMHO.
>
> my latest stuff is at:
>
>    http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.2-B1.patch
>
> > The load estimator can easily use the current dyn_prio/time_slice by
> > simplifying things a _lot_
>
> i have experimented with a very high number of variants. I estimated sleep
> times, i estimated run times, i estimated runqueue times. Note that the
> current estimator measures time spent on the *runqueue*, not time spent on
> the CPU. This means that in an overload spike we have an automatically
> increasing penalization of tasks that want to run. While i'm not too
> emotional about eg. the RT bits, this part of the scheduler is pretty
> critical to handle high load smoothly.
>
> the integration effect of the estimator was written to be fast, and it's
> fast. Also note that most of the time we do not even call the
> estimator:
>
>         if (p->run_timestamp == jiffies)
>                 goto enqueue;
>
> ie. in high frequency wakeup situations we'll call into the estimator only
> once every jiffy.

Like the current one ( pre8 ).
You have a per-cpu swap-array counter ( the old recalc loop ) that you
increment each time you swap arrays.
Each task struct has its rcl_last, which is updated when you inject the task
into the run queue :

        p->dyn_prio += rcl_curr(task_qid) - p->rcl_last;
        p->rcl_last = rcl_curr(task_qid);
        if (p->dyn_prio > MAX_DYNPRIO) p->dyn_prio = MAX_DYNPRIO;

Something like this, and you'll push the task on the given queue depending
on 'prio' ( _unsigned_ with the code above ).
Each time a task exhausts its time slice you decrease its prio.
It works great and it's way simpler.
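
[ A standalone rendering of the snippet above, with the surrounding
  declarations filled in as assumptions (a single CPU shown, names
  approximated from the snippet): ]

    #include <stdio.h>

    #define MAX_DYNPRIO 15

    /* per-CPU counter bumped at every array swap ( the old recalc-loop
     * moment ); a single CPU is shown here for brevity */
    static unsigned long rcl_curr;

    struct task {
            unsigned long rcl_last; /* rcl_curr value when last enqueued */
            int dyn_prio;
    };

    /* called when 'p' goes back onto the run queue: the more swaps it
     * slept through, the bigger its dynamic-priority boost */
    static void boost_on_enqueue(struct task *p)
    {
            p->dyn_prio += (int)(rcl_curr - p->rcl_last);
            p->rcl_last = rcl_curr;
            if (p->dyn_prio > MAX_DYNPRIO)
                    p->dyn_prio = MAX_DYNPRIO;
    }

    int main(void)
    {
            struct task p = { 0, 0 };

            rcl_curr += 3;          /* three swaps happened while p slept */
            boost_on_enqueue(&p);
            printf("dyn_prio %d\n", p.dyn_prio);    /* 3 */
            return 0;
    }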



>
> > I would suggest a lower number of queues, 32 is way more than necessary.
>
> the reason i used more queues is the 'penalizing' effect of the per-task
> load-average estimator. We want to have some priority room these CPU-bound
> tasks can escape into, without hurting some of the interactive jobs that
> might get a few penalties here and there but still dont reach the maximum
> where all the CPU hogs live. (this is p->prio == 63 right now.)
>
> also, i wanted to map all the 39 nice values straight into the priority
> space, just to be sure. Some people *might* rely on finegrained priorities
> still.
>
> there is one additional thing i wanted to do to reduce the effect of the
> 64 queues: instead of using a straight doubly-linked list a'la list_t, we
> can do a head-pointer that cuts the queue size into half, and reduces
> cache footprint of the scheduler data structures as well. But i did not
> want to add this until all bugs are fixed, this is an invariant
> cache-footprint optimization.

Ingo, i did a lot of testing by studying the dyn_prio distribution.
You have a lot of tasks ( i/o bound ) moving between the very first ( top )
queues.



> > The rt code _must_ be better, it can be easily done by a smarter
> > wakeup. There's no need to acquire the whole lock set, at least w/out
> > a checkpoint solution ( look at BMQS ) that prevents multiple failing
> > lookups inside the RT queue.
>
> regarding SCHED_OTHER, i have intentionally avoided smart wakeups, pushed
> the balancing logic more into the load balancer.
>
> load spikes and big statistical fluctuations of runqueue lengths we should
> not care much about - they are spikes we cannot flatten anyway, they can
> be gone before the task has finished flushing over its data set to the
> other CPU.
>
> regarding RT tasks, i did not want to add something that i know is broken,
> even if rt_lock() is arguably heavyweight. I've seen the 'IPI in flight
> misses the real picture' situation a number of times and if we want to do
> RT scheduling seriously and accurately on SMP then we should give a
> perfect solution to it. Would you like me to explain the 'IPI in flight'
> problem in detail, or do you agree that it's a problem?

What's the 'IPI in flight', a new counter-terrorism measure ? :)
I use a checkpoint inside BMQS.
There's a global variable that is incremented each time an RT task is woken
up.
There's a per-task checkpoint in the task struct that is aligned to the global
one when a lookup inside the RT queue comes up empty :

    if (grt_chkp != rtt_chkp(cpu_number_map(this_cpu)) &&
        !list_empty(&runqueue_head(RT_QID)))
        goto rt_queue_select;

We have to work on the RT code and the balancing code/hooks.




- Davide



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!sn-xit-02!supernews.com!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 18:17:06 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
cc: Ingo Molnar <mi...@elte.hu>, Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <E16N2vX-00023B-00@the-village.bc.nu>
Original-Message-ID: <Pine.LNX.4.40.0201051816470.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:16:50 GMT
Message-ID: <fa.lvr2n2v.s72agr@ifi.uio.no>
References: <fa.fej317v.14nal93@ifi.uio.no>
Lines: 23

On Sun, 6 Jan 2002, Alan Cox wrote:

> > > then it represents a huge win since an 8bit ffz can be done by lookup table
> > > and that is fast on all processors
> >
> > That's where i want to go, but i'd like to do it gradually :)
> > unsigned char first_bit[255];
>
> Make it [256] and you can do 9 queues since the idle task will always
> be queued...

Mistyping error :)



- Davide


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
logbridge.uoregon.edu!news.net.uni-c.dk!uninett.no!uio.no!
news-feed.ifi.uio.no!ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sat, 5 Jan 2002 18:23:30 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Ingo Molnar <mi...@elte.hu>
cc: Alan Cox <a...@lxorguk.ukuu.org.uk>,
        Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201060501560.5193-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.40.0201051821310.1607-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:20:33 GMT
Message-ID: <fa.lvamnav.vnea8s@ifi.uio.no>
References: <fa.o5p9fov.t5u597@ifi.uio.no>
Lines: 54

On Sun, 6 Jan 2002, Ingo Molnar wrote:

>
> On Sun, 6 Jan 2002, Alan Cox wrote:
>
> > 64 queues costs a tiny amount more than 32 queues. If you can get it
> > down to eight or nine queues with no actual cost (especially for non
> > realtime queues) then it represents a huge win since an 8bit ffz can
> > be done by lookup table and that is fast on all processors
>
> i'm afraid that while 32 might work, 8 will definitely not be enough. In
> the interactivity-detection scheme i added it's important for interactive
> tasks to have some room (in terms of priority levels) to go up without
> hitting the levels of the true CPU abusers.
>
> we can do 32-bit ffz by doing 4x 8-bit ffz's though:
>
> 	if (likely(byte[0]))
> 		return ffz8[byte[0]];
> 	else if (byte[1])
> 		return ffz8[byte[1]];
> 	else if (byte[2])
> 		return ffz8[byte[2]];
> 	else if (byte[3])
> 		return ffz8[byte[3]];
> 	else
> 		return -1;
>
> and while this is still 4 branches, it's better than a loop of 32. But i
> also think that George Anzinger's idea works well to reduce the cost of
> bit-searching. Or platforms that decide to do so could search the array
> directly as well - if it's 32 queues then it's a cache footprint of
> 4 cachelines, which can be searched directly without any problem.
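
For reference, a self-contained version of the 4x 8-bit lookup sketched
above, with the per-byte offsets made explicit so the result is the bit
index within the full 32-bit word; it assumes the table maps a non-zero
byte to its lowest set bit, and it is an illustration, not the patch code:

    /*
     * Self-contained illustration of the 4x 8-bit lookup: first_bit8[]
     * maps a non-zero byte to the index of its lowest set bit, and the
     * +8/+16/+24 offsets place that bit within the whole 32-bit word.
     */
    #include <stdio.h>

    static unsigned char first_bit8[256];

    static void init_first_bit8(void)
    {
        int i, b;

        for (i = 1; i < 256; i++)
            for (b = 0; b < 8; b++)
                if (i & (1 << b)) {
                    first_bit8[i] = b;
                    break;
                }
    }

    static int find_first_bit32(unsigned int word)
    {
        if (word & 0xff)
            return first_bit8[word & 0xff];
        if ((word >> 8) & 0xff)
            return 8 + first_bit8[(word >> 8) & 0xff];
        if ((word >> 16) & 0xff)
            return 16 + first_bit8[(word >> 16) & 0xff];
        if (word >> 24)
            return 24 + first_bit8[word >> 24];
        return -1;                              /* no bit set */
    }

    int main(void)
    {
        init_first_bit8();
        printf("%d\n", find_first_bit32(1u << 13));     /* prints 13 */
        return 0;
    }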

dyn_prio -> [0..15]

each time a task exhausts its timeslice you decrease dyn_prio.

queue = dyn_prio >> 1

You get 16 consecutive CPU-hog steps before falling into the hell of
CPU-bound tasks.
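
Read purely as an illustration of that mapping (the names are made up, not
BMQS code):

    /* Sketch of the 16-level dyn_prio folded onto 8 queues, as described
     * above; struct and function names are illustrative only.
     */
    struct task_sketch {
        int dyn_prio;                   /* in [0..15], higher == more interactive */
    };

    /* called when the task exhausts its timeslice (one CPU-hog step) */
    static inline void timeslice_expired(struct task_sketch *t)
    {
        if (t->dyn_prio > 0)
            t->dyn_prio--;              /* 16 such steps reach dyn_prio == 0 */
    }

    /* two adjacent dyn_prio levels share one run queue */
    static inline int task_queue(const struct task_sketch *t)
    {
        return t->dyn_prio >> 1;        /* 8 queues: 0 == CPU-bound, 7 == top */
    }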




- Davide


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
logbridge.uoregon.edu!news.net.uni-c.dk!uninett.no!uio.no!
news-feed.ifi.uio.no!ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 05:16:23 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Linus Torvalds <torva...@transmeta.com>
Cc: Davide Libenzi <davi...@xmailserver.org>,
        lkml <linux-ker...@vger.kernel.org>,
        Alan Cox <a...@lxorguk.ukuu.org.uk>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201060508110.5193-100000@localhost.localdomain>
Original-Message-ID: <Pine.LNX.4.33.0201060516090.6357-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:21:10 GMT
Message-ID: <fa.o3pjf0v.u54614@ifi.uio.no>
References: <fa.o3ovfov.v5g59e@ifi.uio.no>
Lines: 12


against -pre9:

    http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.2-B4.patch

	Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
logbridge.uoregon.edu!news.net.uni-c.dk!uninett.no!uio.no!
news-feed.ifi.uio.no!ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
To: mi...@elte.hu
Original-Date: 	Sun, 6 Jan 2002 02:30:54 +0000 (GMT)
Cc: a...@lxorguk.ukuu.org.uk (Alan Cox),
        davi...@xmailserver.org (Davide Libenzi),
        torva...@transmeta.com (Linus Torvalds),
        linux-ker...@vger.kernel.org (lkml)
In-Reply-To: 
<Pine.LNX.4.33.0201060501560.5193-100000@localhost.localdomain> 
from "Ingo Molnar" at Jan 06, 2002 05:07:47 AM
X-Mailer: ELM [version 2.5 PL6]
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Original-Message-Id: <E16N35L-00024p-00@the-village.bc.nu>
From: Alan Cox <a...@lxorguk.ukuu.org.uk>
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:24:50 GMT
Message-ID: <fa.ffhsu7v.17g8g93@ifi.uio.no>
References: <fa.o5p9fov.t5u597@ifi.uio.no>
Lines: 14

> we can do 32-bit ffz by doing 4x 8-bit ffz's though:

There are better algorithms than the branching one already. You can do a
32bit one with a multiply, a shift and a 6-bit lookup if your multiply is
OK, or with shifts and adds for non-superscalar processors.

64bit is 32bit ffz(x.low|x.high) and a single bit test

I can dig out the 32bit one if need be (it's from a NetBSD mailing list)
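
One well-known shape of such a multiply-and-lookup routine is the de Bruijn
trick; the sketch below (with a 32-entry table for 32-bit words) only
illustrates the technique and is not necessarily the NetBSD routine meant
above:

    /* Branch-free lowest-set-bit via a de Bruijn multiply and a small
     * table; ffz is obtained by complementing the word first.
     */
    static const unsigned char debruijn_pos[32] = {
         0,  1, 28,  2, 29, 14, 24,  3, 30, 22, 20, 15, 25, 17,  4,  8,
        31, 27, 13, 23, 21, 19, 16,  7, 26, 12, 18,  6, 11,  5, 10,  9
    };

    /* index of the lowest set bit; caller must ensure x != 0 */
    static inline int ffs32_debruijn(unsigned int x)
    {
        return debruijn_pos[((x & -x) * 0x077CB531U) >> 27];
    }

    /* first zero bit == lowest set bit of the complement; x != ~0U */
    static inline int ffz32_debruijn(unsigned int x)
    {
        return ffs32_debruijn(~x);
    }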
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!logbridge.uoregon.edu!
news.net.uni-c.dk!uninett.no!uio.no!news-feed.ifi.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 05:19:11 +0100 (CET)
From: Ingo Molnar <mi...@elte.hu>
Reply-To: <mi...@elte.hu>
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
Cc: Davide Libenzi <davi...@xmailserver.org>,
        Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <E16N35L-00024p-00@the-village.bc.nu>
Original-Message-ID: <Pine.LNX.4.33.0201060517480.6501-100000@localhost.localdomain>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 02:23:10 GMT
Message-ID: <fa.o5pldov.s5a793@ifi.uio.no>
References: <fa.ffhsu7v.17g8g93@ifi.uio.no>
Lines: 19


On Sun, 6 Jan 2002, Alan Cox wrote:

> There are better algorithms than the branching one already. You can do a
> 32bit one with a multiply, a shift and a 6-bit lookup if your multiply is
> OK, or with shifts and adds for non-superscalar processors.
>
> 64bit is 32bit ffz(x.low|x.high) and a single bit test

ok - i wasn't thinking straight. as few branches as possible should be the
way to go; no BTB will help such functions, so branches must be reduced.

	Ingo

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!sn-xit-02!supernews.com!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Message-ID: <3C37CA9F.265406B3@easynet.be>
Original-Date: 	Sun, 06 Jan 2002 04:55:11 +0100
From: Luc Van Oostenryck <luc.vanoostenr...@easynet.be>
X-Mailer: Mozilla 4.79 [en] (X11; U; Linux 2.5.2-pre9-O1 i686)
X-Accept-Language: en
MIME-Version: 1.0
To: linux-ker...@vger.kernel.org
CC: Ingo Molnar <mi...@elte.hu>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
Original-References: <Pine.LNX.4.33.0201060516090.6357-100...@localhost.localdomain>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Sun, 6 Jan 2002 03:57:42 GMT
Message-ID: <fa.dm3nbsv.clgohl@ifi.uio.no>
References: <fa.o3pjf0v.u54614@ifi.uio.no>
Lines: 20

Ingo Molnar wrote:
> 
> against -pre9:
> 
>     http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.2-B4.patch
> 
>         Ingo

Ingo,

I have been running 2.5.2-pre9 with your -B4 patch for roughly an hour now.
I have done a little stress testing and it seems OK: no crashes, no freezes.

-- 
Luc Van Oostenryck
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 19:08:01 -0800
From: Richard Henderson <r...@twiddle.net>
To: Alan Cox <a...@lxorguk.ukuu.org.uk>
Cc: Davide Libenzi <davi...@xmailserver.org>, Ingo Molnar <mi...@elte.hu>,
        Linus Torvalds <torva...@transmeta.com>,
        lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
Original-Message-ID: <20020106190801.A27356@twiddle.net>
Mail-Followup-To: Alan Cox <a...@lxorguk.ukuu.org.uk>,
	Davide Libenzi <davi...@xmailserver.org>,
	Ingo Molnar <mi...@elte.hu>,
	Linus Torvalds <torva...@transmeta.com>,
	lkml <linux-ker...@vger.kernel.org>
Original-References: 
<Pine.LNX.4.40.0201051753090.1607-100...@blue1.dev.mcafeelabs.com> 
<E16N2oW-00021c...@the-village.bc.nu>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5i
In-Reply-To: <E16N2oW-00021c-00@the-village.bc.nu>; 
from alan@lxorguk.ukuu.org.uk on Sun, Jan 06, 2002 at 02:13:32AM +0000
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Mon, 7 Jan 2002 03:10:26 GMT
Message-ID: <fa.fi4t9fv.mn671n@ifi.uio.no>
References: <fa.fdkn0vv.15kqmh3@ifi.uio.no>
Lines: 14

On Sun, Jan 06, 2002 at 02:13:32AM +0000, Alan Cox wrote:
> ... since an 8bit ffz can be done by lookup table
> and that is fast on all processors

Please still provide the arch hook -- single cycle ffs type
instructions are still faster than any memory access.
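
For illustration, the kind of hook being asked for might look like the
sketch below; __ARCH_HAS_FAST_FFZ is a hypothetical symbol and first_bit8[]
is the 256-entry table from the earlier sketch, while the bsfl form mirrors
the usual i386 idiom of taking the lowest set bit of the complement:

    /*
     * Sketch only: a generic table-based fallback that an architecture
     * can override with its native single-cycle instruction.
     */
    #ifdef __ARCH_HAS_FAST_FFZ              /* e.g. i386: a single bsfl */
    static inline unsigned long sched_ffz(unsigned long word)
    {
        __asm__("bsfl %1,%0"
                : "=r" (word)
                : "r" (~word));             /* first zero bit of word */
        return word;
    }
    #else                                   /* generic 8-bit table lookups */
    extern unsigned char first_bit8[256];

    /* caller guarantees word has at least one zero bit */
    static inline unsigned long sched_ffz(unsigned long word)
    {
        unsigned long x = ~word, off = 0;

        while (!(x & 0xff)) {               /* skip all-ones bytes of word */
            x >>= 8;
            off += 8;
        }
        return off + first_bit8[x & 0xff];
    }
    #endif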


r~
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
X-Authentication-Warning: penguin.transmeta.com: torvalds owned process doing -bs
Original-Date: 	Sun, 6 Jan 2002 19:16:31 -0800 (PST)
From: Linus Torvalds <torva...@transmeta.com>
To: Richard Henderson <r...@twiddle.net>
cc: Alan Cox <a...@lxorguk.ukuu.org.uk>,
        Davide Libenzi <davi...@xmailserver.org>, Ingo Molnar <mi...@elte.hu>,
        lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <20020106190801.A27356@twiddle.net>
Original-Message-ID: <Pine.LNX.4.33.0201061908330.5819-100000@penguin.transmeta.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Mon, 7 Jan 2002 03:20:02 GMT
Message-ID: <fa.oca9c7v.ike5r8@ifi.uio.no>
References: <fa.fi4t9fv.mn671n@ifi.uio.no>
Lines: 24


On Sun, 6 Jan 2002, Richard Henderson wrote:
> On Sun, Jan 06, 2002 at 02:13:32AM +0000, Alan Cox wrote:
> > ... since an 8bit ffz can be done by lookup table
> > and that is fast on all processors
>
> Please still provide the arch hook -- single cycle ffs type
> instructions are still faster than any memory access.

This is probably true even on x86, except in benchmarks (the x86 ffs
instruction definitely doesn't historically count as "fast", and a table
lookup will probably win in a benchmark where the table is hot in the
cache, but you don't have to miss very often to be ok with a few CPU
cycles..)

(bsfl used to be very slow. That's not as true any more)

		Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 19:31:16 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Linus Torvalds <torva...@transmeta.com>
cc: Richard Henderson <r...@twiddle.net>, Alan Cox <a...@lxorguk.ukuu.org.uk>,
        Ingo Molnar <mi...@elte.hu>, lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201061908330.5819-100000@penguin.transmeta.com>
Original-Message-ID: <Pine.LNX.4.40.0201061927490.1000-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Mon, 7 Jan 2002 03:27:45 GMT
Message-ID: <fa.lvqsnav.s7ga8u@ifi.uio.no>
References: <fa.oca9c7v.ike5r8@ifi.uio.no>
Lines: 33

On Sun, 6 Jan 2002, Linus Torvalds wrote:

>
> On Sun, 6 Jan 2002, Richard Henderson wrote:
> > On Sun, Jan 06, 2002 at 02:13:32AM +0000, Alan Cox wrote:
> > > ... since an 8bit ffz can be done by lookup table
> > > and that is fast on all processors
> >
> > Please still provide the arch hook -- single cycle ffs type
> > instructions are still faster than any memory access.
>
> This is probably true even on x86, except in benchmarks (the x86 ffs
> instruction definitely doesn't historically count as "fast", and a table
> lookup will probably win in a benchmark where the table is hot in the
> cache, but you don't have to miss very often to be ok with a few CPU
> cycles..)
>
> (bsfl used to be very slow. That's not as true any more)

A 32-bit word lookup can easily be done in a few clock cycles on most CPUs
by using tuned assembly code.




- Davide


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news.tele.dk!small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!
ifi.uio.no!internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
X-Authentication-Warning: penguin.transmeta.com: torvalds owned process doing -bs
Original-Date: 	Sun, 6 Jan 2002 19:34:36 -0800 (PST)
From: Linus Torvalds <torva...@transmeta.com>
To: Davide Libenzi <davi...@xmailserver.org>
cc: Richard Henderson <r...@twiddle.net>, Alan Cox <a...@lxorguk.ukuu.org.uk>,
        Ingo Molnar <mi...@elte.hu>, lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.40.0201061927490.1000-100000@blue1.dev.mcafeelabs.com>
Original-Message-ID: <Pine.LNX.4.33.0201061930250.5900-100000@penguin.transmeta.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Mon, 7 Jan 2002 03:36:48 GMT
Message-ID: <fa.obqfcmv.i405b9@ifi.uio.no>
References: <fa.lvqsnav.s7ga8u@ifi.uio.no>
Lines: 20


On Sun, 6 Jan 2002, Davide Libenzi wrote:
>
> 32 bit words lookup can be easily done in few clock cycles in most cpus
> by using tuned assembly code.

I tried to time "bsfl"; it showed up as one cycle more than "nothing" on
my PII.

It used to be something like 7+n cycles on an i386, if I remember
correctly. It's just not an issue any more - trying to use clever code to
avoid it is just silly.

		Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Path: archiver1.google.com!news1.google.com!newsfeed.stanford.edu!
news-spur1.maxwell.syr.edu!news.maxwell.syr.edu!news.tele.dk!
small.news.tele.dk!129.240.148.23!uio.no!nntp.uio.no!ifi.uio.no!
internet-mailinglist
Newsgroups: fa.linux.kernel
Return-Path: <linux-kernel-ow...@vger.kernel.org>
Original-Date: 	Sun, 6 Jan 2002 19:49:09 -0800 (PST)
From: Davide Libenzi <davi...@xmailserver.org>
X-X-Sender: dav...@blue1.dev.mcafeelabs.com
To: Linus Torvalds <torva...@transmeta.com>
cc: Richard Henderson <r...@twiddle.net>, Alan Cox <a...@lxorguk.ukuu.org.uk>,
        Ingo Molnar <mi...@elte.hu>, lkml <linux-ker...@vger.kernel.org>
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To: <Pine.LNX.4.33.0201061930250.5900-100000@penguin.transmeta.com>
Original-Message-ID: <Pine.LNX.4.40.0201061948390.1000-100000@blue1.dev.mcafeelabs.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-ow...@vger.kernel.org
Precedence: bulk
X-Mailing-List: 	linux-kernel@vger.kernel.org
Organization: Internet mailing list
Date: Mon, 7 Jan 2002 03:47:05 GMT
Message-ID: <fa.lvasnqv.vngboh@ifi.uio.no>
References: <fa.obqfcmv.i405b9@ifi.uio.no>
Lines: 27

On Sun, 6 Jan 2002, Linus Torvalds wrote:

>
> On Sun, 6 Jan 2002, Davide Libenzi wrote:
> >
> > 32 bit words lookup can be easily done in few clock cycles in most cpus
> > by using tuned assembly code.
>
> I tried to time "bsfl", it showed up as one cycle more than "nothing" on
> my PII.
>
> It used to be something like 7+n cycles on an i386, if I remember
> correctly. It's just not an issue any more - trying to use clever code to
> avoid it is just silly.

I think the issue was about architectures that do not have bsfl-like ops.



- Davide


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/