* Step 1 of cleaning up io_apic.c removes local cpumask_t variables
from the stack.
- Method 1: remove unnecessary "extra" cpumask variables.
  - Method 2: use for_each_online_cpu_mask_nr() to logically AND
    the passed-in mask with cpu_online_map, eliminating the need for a
    temporary cpumask variable (see the before/after sketch below this list).
  - Method 3: use get_cpumask_var variables where possible (a rough sketch
    of this idea follows further below).  The current assignment of the
    temporary variables is:
* Temporary cpumask variables
* (XXX - would be _MUCH_ better as a "stack" of temp cpumasks.)
* level 4:
* level 3:
* level 2:
* level 1:
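As a concrete illustration of Method 2, the before/after pattern looks
roughly like this.  The surrounding helper and do_something() are invented
for illustration; only the iterator name comes from the description above:

/* Old pattern: a cpumask_t on the stack just to hold mask & cpu_online_map */
static void example_send_to_mask_old(cpumask_t mask)
{
	cpumask_t tmp;		/* NR_CPUS bits on the stack */
	int cpu;

	cpus_and(tmp, mask, cpu_online_map);
	for_each_cpu_mask_nr(cpu, tmp)
		do_something(cpu);
}

/* New pattern: the iterator folds in the AND with cpu_online_map,
 * so no temporary cpumask is needed. */
static void example_send_to_mask_new(cpumask_t mask)
{
	int cpu;

	for_each_online_cpu_mask_nr(cpu, mask)
		do_something(cpu);
}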
  * Addition of temporary cpumask variables for the "target" of TARGET_CPUS
    is in preparation for changing TARGET_CPUS for x86_64.  I've kept those
    changes here to document which routines get which temp variable.
* Total stack size savings are in the last step.
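To sketch the idea behind Method 3 and the "level" comments above (the
names, sizes and exact interface here are illustrative only, not
necessarily what the patch implements): a small pool of global
temporaries handed out by nesting level, so callers at different depths
do not clobber each other.

/* Illustrative only -- a fixed pool of global temporaries indexed by
 * nesting level, instead of cpumask_t variables on the stack. */
#define MAX_CPUMASK_LEVELS	4

static cpumask_t cpumask_temps[MAX_CPUMASK_LEVELS];

static inline cpumask_t *get_cpumask_var(int level)
{
	BUG_ON(level >= MAX_CPUMASK_LEVELS);
	return &cpumask_temps[level];
}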
Applies to linux-2.6.tip/master.
Signed-off-by: Mike Travis <firstname.lastname@example.org>
arch/x86/kernel/io_apic.c | 268 ++++++++++++++++++++++++++++++----------------
1 file changed, 175 insertions(+), 93 deletions(-)
Sorry, that patch seems incredibly messy: global variables and a tricky
ordering, and while it's at least commented, it's still a mess and
maintenance-unfriendly.
Also I think set_affinity is the only case where a truly arbitrary cpu
mask can be passed in anyway. And it's passed in from elsewhere.
The other cases generally just want to handle a subset of CPUs which
are nearby. How about you define a new cpumask-like type that
consists of a start/stop CPU and a mask for that range only
and is not larger than a few words?
I think with that the nearby assignments could be handled
reasonably cleanly with arguments and local variables.
And I suspect with some restructuring set_affinity could
be also made to support such a model.
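Roughly something like this, just to sketch the start/stop-plus-small-mask
idea (the names and the range size are made up):

/* Covers only CPUs in [start, start + CPUS_PER_RANGE); fits in a few words. */
#define CPUS_PER_RANGE	64

struct cpu_range_mask {
	unsigned int start;			/* first CPU covered */
	DECLARE_BITMAP(bits, CPUS_PER_RANGE);	/* bits relative to start */
};

static inline void cpu_range_set(unsigned int cpu, struct cpu_range_mask *m)
{
	if (cpu >= m->start && cpu < m->start + CPUS_PER_RANGE)
		__set_bit(cpu - m->start, m->bits);
}

static inline int cpu_range_test(unsigned int cpu, const struct cpu_range_mask *m)
{
	return cpu >= m->start && cpu < m->start + CPUS_PER_RANGE &&
	       test_bit(cpu - m->start, m->bits);
}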
Thanks for the comments. I did mull over something like this early on
in researching this "cpumask" problem, but supporting a separate set of
cpumask operators didn't seem worthwhile. But perhaps for a very limited
use (with very few ops) it would be.
But how big to make these? Variable sized? A config option? Should I
introduce some kind of MAX_CPUS_PER_NODE constant? (I don't think
NR_CPUS/MAX_NUMNODES is the right answer.)