This simple patch should solve the above problem. Judging from the config, it
seems to be a 32-bit box.
It converts the tlbstate_lock from a spin_lock to a raw_spin_lock.
Preemption has already been disabled at this point, so this change shouldn't
affect latency numbers.
Signed-off-by: Chirag <email@example.com>
diff --git a/arch/x86/kernel/tlb_32.c b/arch/x86/kernel/tlb_32.c
index 9bb2363..228849c 100644
@@ -23,7 +23,7 @@ DEFINE_PER_CPU(struct tlb_state, cpu_tlbstate)
static cpumask_t flush_cpumask;
static struct mm_struct *flush_mm;
static unsigned long flush_va;
* We cannot call mmdrop() because we are in interrupt context,
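The hunk above is truncated; the change the patch describes would look roughly like this (a sketch assuming the usual raw_spinlock API, not the exact upstream hunk):

```diff
-static DEFINE_SPINLOCK(tlbstate_lock);
+static DEFINE_RAW_SPINLOCK(tlbstate_lock);
@@
-	spin_lock(&tlbstate_lock);
+	raw_spin_lock(&tlbstate_lock);
@@
-	spin_unlock(&tlbstate_lock);
+	raw_spin_unlock(&tlbstate_lock);
```

A raw_spin_lock remains a true spinning lock on PREEMPT_RT, so it is safe to take with preemption disabled and in interrupt context, unlike an ordinary spinlock, which the RT patch turns into a sleeping rtmutex.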
Kindly understand what Xavier is trying to do. He is trying to disable SMP.
The RT Linux code depends on the mutex lock to perform some
soft/hard real-time processing. This is a problem with the mutex lock.
Kindly check with someone before signing off on patches.
[ Refrain from using "unlisted-recipients" please! ]
The patch is legit.
The problem is simply that functions like flush_tlb_current_task and
flush_tlb_mm disable preemption and then call flush_tlb_others.
That function takes the tlbstate_lock, which is currently an rtmutex, and
an rtmutex must not be taken with preemption disabled. The tlbstate_lock is
static, small, and confined. I'm not sure we can restructure this code
(flushing the TLB) in a way that allows preemption. This may just be a
latency we have to take.
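To make the broken sequence concrete, here is a heavily simplified sketch of the call path (function bodies are illustrative, not the exact upstream code):

```c
/* Sketch of the problematic path on an RT kernel. */
void flush_tlb_current_task(void)
{
	preempt_disable();         /* preemption is now off */
	flush_tlb_others(/* ... */);
	preempt_enable();
}

void flush_tlb_others(/* ... */)
{
	spin_lock(&tlbstate_lock); /* on RT this spinlock is an rtmutex:
	                            * acquiring it can sleep, which is
	                            * illegal with preemption disabled */
	/* ... send the flush IPIs ... */
	spin_unlock(&tlbstate_lock);
}
```

With tlbstate_lock converted to a raw_spin_lock, it stays a real spinlock even on RT, so taking it from the preempt-disabled caller is legal; the cost is that the short critical section is non-preemptible, which is the latency hit mentioned above.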