read atomic variable
Parameters
Description
Atomically reads the value of v.
set atomic variable
Parameters
Description
Atomically sets the value of v to i.
add integer to atomic variable
Parameters
Description
Atomically adds i to v.
subtract integer from atomic variable
Parameters
Description
Atomically subtracts i from v.
subtract value from variable and test result
Parameters
Description
Atomically subtracts i from v and returns true if the result is zero, or false for all other cases.
increment atomic variable
Parameters
Description
Atomically increments v by 1.
decrement atomic variable
Parameters
Description
Atomically decrements v by 1.
decrement and test
Parameters
Description
Atomically decrements v by 1 and returns true if the result is 0, or false for all other cases.
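As an illustration (not part of the original text), the classic reference-count release pattern built on atomic_dec_and_test() might look like the sketch below; struct foo, foo_put() and the refcnt field are hypothetical names.

    #include <linux/atomic.h>
    #include <linux/slab.h>

    struct foo {
            atomic_t refcnt;        /* hypothetical reference count */
            /* ... payload ... */
    };

    static void foo_put(struct foo *f)
    {
            /* Free the object only when the last reference is dropped. */
            if (atomic_dec_and_test(&f->refcnt))
                    kfree(f);
    }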
increment and test
Parameters
Description
Atomically increments v by 1 and returns true if the result is zero, or false for all other cases.
add and test if negative
Parameters
Description
Atomically adds i to v and returns true if the result is negative, or false when the result is greater than or equal to zero.
add integer and return
Parameters
Description
Atomically adds i to v and returns i + v.
subtract integer and return
Parameters
Description
Atomically subtracts i from v and returns v - i.
add unless the number is already a given value
Parameters
Description
Atomically adds a to v, so long as v was not already u. Returns the old value of v.
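A hedged sketch of a common use of this primitive: take a new reference only if the count has not already dropped to zero. The foo_get_unless_zero() helper is hypothetical.

    #include <linux/types.h>
    #include <linux/atomic.h>

    static bool foo_get_unless_zero(atomic_t *refcnt)
    {
            /*
             * Add 1 unless the count is already 0; with u == 0 a non-zero
             * return means the increment happened and we hold a reference.
             */
            return atomic_add_unless(refcnt, 1, 0) != 0;
    }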
increment of a short integer
Parameters
Description
Atomically adds 1 to v. Returns the new value of v.
snapshot of system and user cputime
Definition
struct prev_cputime {
#ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
u64 utime;
u64 stime;
raw_spinlock_t lock;
#endif
};
Members
Description
Stores previous user/system time values such that we can guarantee monotonicity.
collected CPU time counts
Definition
struct task_cputime {
u64 utime;
u64 stime;
unsigned long long sum_exec_runtime;
};
Members
Description
This structure groups together three kinds of CPU time that are tracked for threads and thread groups. Most things considering CPU time want to group these counts together and treat all three of them in parallel.
check that a task structure is not stale
Parameters
Description
Test if a process is not yet dead (at most zombie state). If pid_alive fails, then pointers within the task structure can be stale and must not be dereferenced.
Return
1 if the process is alive. 0 otherwise.
check if a task structure is init. Since init is free to have sub-threads we need to check tgid.
Parameters
Description
Check if a task structure is the first user space task the kernel created.
Return
1 if the task structure is init. 0 otherwise.
return the nice value of a given task.
Parameters
Return
The nice value [ -20 ... 0 ... 19 ].
is the specified task an idle task?
Parameters
Return
1 if p is an idle task. 0 otherwise.
Wake up a specific process
Parameters
Description
Attempt to wake up the nominated process and move it to the set of runnable processes.
Return
1 if the process was woken up, 0 if it was already running.
It may be assumed that this function implies a write memory barrier before changing the task state if and only if any tasks are woken up.
tell me when current is being preempted & rescheduled
Parameters
no longer interested in preemption notifications
Parameters
Description
This is not safe to call from within a preemption notifier.
preempt_schedule called by tracing
Parameters
Description
The tracing infrastructure uses preempt_enable_notrace to prevent recursion and tracing preempt enabling caused by the tracing infrastructure itself. But as tracing can happen in areas coming from userspace or just about to enter userspace, a preempt enable can occur before user_exit() is called. This will cause the scheduler to be called when the system is still in usermode.
To prevent this, the preempt_enable_notrace will use this function instead of preempt_schedule() to exit user context if needed before calling the scheduler.
change the scheduling policy and/or RT priority of a thread.
Parameters
Return
0 on success. An error code otherwise.
NOTE that the task may be already dead.
change the scheduling policy and/or RT priority of a thread from kernelspace.
Parameters
Description
Just like sched_setscheduler, only don’t bother checking if the current context has permission. For example, this is needed in stop_machine(): we create temporary high priority worker threads, but our caller might not have that capability.
Return
0 on success. An error code otherwise.
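As a sketch only (the helper name and priority choice are hypothetical), a kernel-internal caller could raise a freshly created worker thread to a real-time policy like this:

    #include <linux/sched.h>

    static int make_rt_worker(struct task_struct *tsk)
    {
            struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

            /* Kernel-internal variant: no permission check on the current context. */
            return sched_setscheduler_nocheck(tsk, SCHED_FIFO, &param);
    }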
yield the current processor to other threads.
Parameters
Description
Do not ever use this function, there’s a 99% chance you’re doing it wrong.
The scheduler is at all times free to pick the calling task as the most eligible task to run; if removing the yield() call from your code breaks it, it's already broken.
Typical broken usage is:

while (!event)
        yield();
where one assumes that yield() will let ‘the other’ process run that will make event true. If the current task is a SCHED_FIFO task that will never happen. Never use yield() as a progress guarantee!!
If you want to use yield() to wait for something, use wait_event(). If you want to use yield() to be ‘nice’ for others, use cond_resched(). If you still want to use yield(), do not!
yield the current processor to another thread in your thread group, or accelerate that thread toward the processor it’s on.
Parameters
Description
It’s the caller’s job to ensure that the target task struct can’t go away on us before we can do any checks.
Return
true (>0) if we indeed boosted the target task. false (0) if we failed to boost the target. -ESRCH if there’s no task to yield to.
find the best (lowest-pri) CPU in the system
Parameters
Note
This function returns the recommended CPUs as calculated during the current invocation. By the time the call returns, the CPUs may have in fact changed priorities any number of times. While not ideal, it is not an issue of correctness since the normal rebalancer logic will correct any discrepancies created by racing against the uncertainty of the current priority configuration.
Return
(int)bool - CPUs were found
update the cpu priority setting
Parameters
Note
Assumes cpu_rq(cpu)->lock is locked
Return
(void)
initialize the cpupri structure
Parameters
Return
-ENOMEM on memory allocation failure.
clean up the cpupri structure
Parameters
update the tg’s load avg
Parameters
Description
This function ‘ensures’: tg->load_avg := Sum tg->cfs_rq[]->avg.load. However, because tg->load_avg is a global value there are performance considerations.
In order to avoid having to look at the other cfs_rq’s, we use a differential update where we store the last value we propagated. This in turn allows skipping updates if the differential is ‘small’.
Updating tg’s load_avg is necessary before update_cfs_share() (which is done) and effective_load() (which is not done because it is too costly).
update the cfs_rq’s load/util averages
Parameters
Description
The cfs_rq avg is the direct sum of all its entities (blocked and runnable) avg. The immediate corollary is that all (fair) tasks must be attached, see post_init_entity_util_avg().
cfs_rq->avg is used for task_h_load() and update_cfs_share() for example.
Returns true if the load decayed or we removed load.
Since both these conditions indicate a changed cfs_rq->avg.load we should call update_tg_load_avg() when this function returns true.
attach this entity to its cfs_rq load avg
Parameters
Description
Must call update_cfs_rq_load_avg() before this, since we rely on cfs_rq->avg.last_update_time being current.
detach this entity from its cfs_rq load avg
Parameters
Description
Must call update_cfs_rq_load_avg() before this, since we rely on cfs_rq->avg.last_update_time being current.
update the rq->cpu_load[] statistics
Parameters
Description
Update rq->cpu_load[] statistics. This function is usually called every scheduler tick (TICK_NSEC).
This function computes a decaying average:
load[i]’ = (1 - 1/2^i) * load[i] + (1/2^i) * load
Because of NOHZ it might not get called on every tick which gives need for the pending_updates argument.
load[i]_n = (1 - 1/2^i) * load[i]_n-1 + (1/2^i) * load_n-1
          = A * load[i]_n-1 + B                ; A := (1 - 1/2^i), B := (1/2^i) * load
          = A * (A * load[i]_n-2 + B) + B
          = A * (A * (A * load[i]_n-3 + B) + B) + B
          = A^3 * load[i]_n-3 + (A^2 + A + 1) * B
          = A^n * load[i]_0 + (A^(n-1) + A^(n-2) + ... + 1) * B
          = A^n * load[i]_0 + ((1 - A^n) / (1 - A)) * B
          = (1 - 1/2^i)^n * (load[i]_0 - load) + load
In the above we’ve assumed load_n := load, which is true for NOHZ_FULL as any change in load would have resulted in the tick being turned back on.
For regular NOHZ, this reduces to:
load[i]_n = (1 - 1/2^i)^n * load[i]_0
see decay_load_missed(). For NOHZ_FULL we get to subtract and add the extra term.
Obtain the load index for a given sched domain.
Parameters
Return
The load index.
Update sched_group’s statistics for load balancing.
Parameters
return 1 on busiest group
Parameters
Description
Determine if sg is a busier group than the previously selected busiest group.
Return
true if sg is a busier group than the previously selected busiest group. false otherwise.
Update sched_domain’s statistics for load balancing.
Parameters
Check to see if the group is packed into the sched domain.
Parameters
Description
This is primarily intended to be used at the sibling level. Some cores like POWER7 prefer to use lower numbered SMT threads. In the case of POWER7, it can move to lower SMT modes only when higher threads are idle. When in lower SMT modes, the threads will perform better since they share less core resources. Hence when we have idle threads, we want them to be the higher ones.
This packing function is run on idle threads. It checks to see if the busiest CPU in this domain (core in the P7 case) has a higher CPU number than the packing function is being run on. Here we are assuming a lower CPU number will be equivalent to a lower SMT thread number.
Return
1 when packing is required and a task should be moved to this CPU. The amount of the imbalance is returned in *imbalance.
Calculate the minor imbalance that exists amongst the groups of a sched_domain, during load balancing.
Parameters
Calculate the amount of imbalance present within the groups of a given sched_domain during load balance.
Parameters
Returns the busiest group within the sched_domain if there is an imbalance.
Parameters
Description
Also calculates the amount of weighted load which should be moved to restore balance.
Return
declare and initialize a completion structure
Parameters
Description
This macro declares and initializes a completion structure. Generally used for static declarations. You should use the _ONSTACK variant for automatic variables.
declare and initialize a completion structure
Parameters
Description
This macro declares and initializes a completion structure on the kernel stack.
Initialize a dynamically allocated completion
Parameters
Description
This inline function will initialize a dynamically created completion structure.
reinitialize a completion structure
Parameters
Description
This inline function should be used to reinitialize a completion structure so it can be reused. This is especially important after complete_all() is used.
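A minimal sketch of the typical completion flow tying these helpers together; setup_done, waiter() and signaler() are hypothetical names. If the same completion is reused after complete_all(), reinit_completion() should be called first, as described above.

    #include <linux/completion.h>

    static DECLARE_COMPLETION(setup_done);          /* static declaration variant */

    static void waiter(void)
    {
            /* Blocks until some other context calls complete(). */
            wait_for_completion(&setup_done);
    }

    static void signaler(void)
    {
            complete(&setup_done);                  /* wake up one waiter */
    }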
function to round jiffies to a full second
Parameters
Description
__round_jiffies() rounds an absolute time in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The exact rounding is skewed for each processor to avoid all processors firing at the exact same time, which could lead to lock contention or spurious cache line bouncing.
The return value is the rounded version of the j parameter.
function to round jiffies to a full second
Parameters
Description
__round_jiffies_relative() rounds a time delta in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The exact rounding is skewed for each processor to avoid all processors firing at the exact same time, which could lead to lock contention or spurious cache line bouncing.
The return value is the rounded version of the j parameter.
function to round jiffies to a full second
Parameters
Description
round_jiffies() rounds an absolute time in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The return value is the rounded version of the j parameter.
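For illustration (the timer name and interval are hypothetical), a timer whose exact expiry does not matter can be aligned to whole seconds so that many such timers wake the CPU together:

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    static struct timer_list housekeeping_timer;    /* hypothetical */

    static void arm_housekeeping(void)
    {
            /* Fire roughly every 5 seconds, rounded to a full second. */
            mod_timer(&housekeeping_timer, round_jiffies(jiffies + 5 * HZ));
    }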
function to round jiffies to a full second
Parameters
Description
round_jiffies_relative() rounds a time delta in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The return value is the rounded version of the j parameter.
function to round jiffies up to a full second
Parameters
Description
This is the same as __round_jiffies() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.
function to round jiffies up to a full second
Parameters
Description
This is the same as __round_jiffies_relative() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.
function to round jiffies up to a full second
Parameters
Description
This is the same as round_jiffies() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.
function to round jiffies up to a full second
Parameters
Description
This is the same as round_jiffies_relative() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.
initialize a timer
Parameters
Description
init_timer_key() must be done to a timer prior to calling any of the other timer functions.
modify a pending timer’s timeout
Parameters
Description
mod_timer_pending() is the same for pending timers as mod_timer(), but will not re-activate and modify already deleted timers.
It is useful for unserialized use of timers.
modify a timer’s timeout
Parameters
Description
mod_timer() is a more efficient way to update the expires field of an active timer (if the timer is inactive it will be activated).
mod_timer(timer, expires) is equivalent to:
del_timer(timer);
timer->expires = expires;
add_timer(timer);
Note that if there are multiple unserialized concurrent users of the same timer, then mod_timer() is the only safe way to modify the timeout, since add_timer() cannot modify an already running timer.
The function returns whether it has modified a pending timer or not. (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an active timer returns 1.)
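A short sketch of the usual pattern, assuming the older timer callback signature that takes an unsigned long data argument (as the ->function(->data) note in the add_timer() entry below suggests); my_timer, my_timer_fn and my_timer_setup are hypothetical names:

    #include <linux/timer.h>
    #include <linux/jiffies.h>

    static struct timer_list my_timer;              /* hypothetical */

    static void my_timer_fn(unsigned long data)
    {
            /* ... handle expiry ... */
    }

    static void my_timer_setup(void)
    {
            setup_timer(&my_timer, my_timer_fn, 0);
            /* Activates the timer if inactive, otherwise just updates ->expires. */
            mod_timer(&my_timer, jiffies + HZ);
    }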
start a timer
Parameters
Description
The kernel will do a ->function(->data) callback from the timer interrupt at the ->expires point in the future. The current time is ‘jiffies’.
The timer's ->expires, ->function (and if the handler uses it, ->data) fields must be set prior to calling this function.
Timers with an ->expires field in the past will be executed in the next timer tick.
start a timer on a particular CPU
Parameters
Description
This is not very scalable on SMP. Double adds are not possible.
deactivate a timer.
Parameters
Description
del_timer() deactivates a timer - this works on both active and inactive timers.
The function returns whether it has deactivated a pending timer or not. (ie. del_timer() of an inactive timer returns 0, del_timer() of an active timer returns 1.)
Try to deactivate a timer
Parameters
Description
This function tries to deactivate a timer. Upon successful (ret >= 0) exit the timer is not queued and the handler is not running on any CPU.
deactivate a timer and wait for the handler to finish.
Parameters
Description
This function only differs from del_timer() on SMP: besides deactivating the timer it also makes sure the handler has finished executing on other CPUs.
Synchronization rules: Callers must prevent restarting of the timer, otherwise this function is meaningless. It must not be called from interrupt contexts unless the timer is an irqsafe one. The caller must not hold locks which would prevent completion of the timer’s handler. The timer’s handler must not call add_timer_on(). Upon exit the timer is not queued and the handler is not running on any CPU.
Note
For !irqsafe timers, you must not hold locks that are held in interrupt context while calling this function. Even if the lock has nothing to do with the timer in question. Here's why:
CPU0                            CPU1
----                            ----
                                <SOFTIRQ>
                                call_timer_fn();
                                  base->running_timer = mytimer;
spin_lock_irq(somelock);
                                <IRQ>
                                  spin_lock(somelock);
del_timer_sync(mytimer);
  while (base->running_timer == mytimer);
Now del_timer_sync() will never return and never release somelock. The interrupt on the other CPU is waiting to grab somelock but it has interrupted the softirq that CPU0 is waiting to finish.
The function returns whether it has deactivated a pending timer or not.
sleep until timeout
Parameters
Description
Make the current task sleep until timeout jiffies have elapsed. The routine will return immediately unless the current task state has been set (see set_current_state()).
You can set the task state as follows -
TASK_UNINTERRUPTIBLE - at least timeout jiffies are guaranteed to pass before the routine returns unless the current task is explicitly woken up, (e.g. by wake_up_process()).
TASK_INTERRUPTIBLE - the routine may return early if a signal is delivered to the current task or the current task is explicitly woken up.
The current task state is guaranteed to be TASK_RUNNING when this routine returns.
Specifying a timeout value of MAX_SCHEDULE_TIMEOUT will schedule the CPU away without a bound on the timeout. In this case the return value will be MAX_SCHEDULE_TIMEOUT.
Returns 0 when the timer has expired otherwise the remaining time in jiffies will be returned. In all cases the return value is guaranteed to be non-negative.
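A hedged example of the required pattern: the task state must be set before calling schedule_timeout(); the helper name is hypothetical.

    #include <linux/sched.h>
    #include <linux/jiffies.h>

    static long sleep_about_a_second(void)
    {
            /* Without set_current_state() the call returns immediately. */
            set_current_state(TASK_INTERRUPTIBLE);
            return schedule_timeout(HZ);    /* 0 if the full second elapsed */
    }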
sleep safely even with waitqueue interruptions
Parameters
sleep waiting for signals
Parameters
Sleep for an approximate time
Parameters
Description
In non-atomic context where the exact wakeup time is flexible, use usleep_range() instead of udelay(). The sleep improves responsiveness by avoiding the CPU-hogging busy-wait of udelay(), and the range reduces power usage by allowing hrtimers to take advantage of an already-scheduled interrupt instead of scheduling a new one just for this sleep.
locklessly test for waiters on the queue
Parameters
Description
returns true if the wait list is not empty
NOTE
this function is lockless and requires care, incorrect usage _will_ lead to sporadic and non-obvious failure.
Use either while holding wait_queue_head_t::lock or when used for wakeups with an extra smp_mb() like:
CPU0 - waker                      CPU1 - waiter

                                  for (;;) {
cond = true;                        prepare_to_wait(wq, wait, state);
smp_mb();                           // smp_mb() from set_current_state()
if (waitqueue_active(wq))           if (cond)
  wake_up(wq);                        break;
                                    schedule();
                                  }
                                  finish_wait(wq, wait);
Because without the explicit smp_mb() it’s possible for the waitqueue_active() load to get hoisted over the cond store such that we’ll observe an empty wait list while the waiter might not observe cond.
Also note that this ‘optimization’ trades a spin_lock() for an smp_mb(), which (when the lock is uncontended) are of roughly equal cost.
check if there are any waiting processes
Parameters
Description
Returns true if wq has waiting processes
Please refer to the comment for waitqueue_active.
sleep until a condition gets true
Parameters
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
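A minimal producer/consumer sketch using wait_event() and wake_up(); my_wq, data_ready, producer() and consumer() are hypothetical names:

    #include <linux/types.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);          /* hypothetical */
    static bool data_ready;

    static void consumer(void)
    {
            /* Sleeps (TASK_UNINTERRUPTIBLE) until data_ready is true. */
            wait_event(my_wq, data_ready);
    }

    static void producer(void)
    {
            data_ready = true;      /* change the condition first ... */
            wake_up(&my_wq);        /* ... then wake the queue */
    }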
sleep (or freeze) until a condition gets true
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE – so as not to contribute to system load) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
sleep until a condition gets true or a timeout elapses
Parameters
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
Return
0 if the condition evaluated to false after the timeout elapsed, 1 if the condition evaluated to true after the timeout elapsed, or the remaining jiffies (at least 1) if the condition evaluated to true before the timeout elapsed.
sleep until a condition gets true
Parameters
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
sleep until a condition gets true
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep until a condition gets true or a timeout elapses
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
Return
0 if the condition evaluated to false after the timeout elapsed, 1 if the condition evaluated to true after the timeout elapsed, the remaining jiffies (at least 1) if the condition evaluated to true before the timeout elapsed, or -ERESTARTSYS if it was interrupted by a signal.
sleep until a condition gets true or a timeout elapses
Parameters
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function returns 0 if condition became true, or -ETIME if the timeout elapsed.
sleep until a condition gets true or a timeout elapses
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function returns 0 if condition became true, -ERESTARTSYS if it was interrupted by a signal, or -ETIME if the timeout elapsed.
sleep until a condition gets true
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock()/spin_unlock() functions which must match the way they are locked/unlocked outside of this macro.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep until a condition gets true
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq() functions which must match the way they are locked/unlocked outside of this macro.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep exclusively until a condition gets true
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock()/spin_unlock() functions which must match the way they are locked/unlocked outside of this macro.
The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so if this process is woken up while other processes are waiting on the list, further processes are not considered.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep until a condition gets true
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq() functions which must match the way they are locked/unlocked outside of this macro.
The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so if this process is woken up while other processes are waiting on the list, further processes are not considered.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep until a condition gets true
Parameters
Description
The process is put to sleep (TASK_KILLABLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before invoking the cmd and going to sleep and is reacquired afterwards.
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before invoking the cmd and going to sleep and is reacquired afterwards.
The macro will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.
The macro will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
sleep until a condition gets true or a timeout elapses. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.
The function returns 0 if the timeout elapsed, -ERESTARTSYS if it was interrupted by a signal, or the remaining jiffies if the condition evaluated to true before the timeout elapsed.
wait for a bit to be cleared
Parameters
Description
There is a standard hashed waitqueue table for generic use. This is the part of the hashtable’s accessor API that waits on a bit. For instance, if one were to have waiters on a bitflag, one would call wait_on_bit() in threads waiting for the bit to clear. One uses wait_on_bit() where one is waiting for the bit to clear, but has no intention of setting it. Returned value will be zero if the bit was cleared, or non-zero if the process received a signal and the mode permitted wakeup on that signal.
wait for a bit to be cleared
Parameters
Description
Use the standard hashed waitqueue table to wait for a bit to be cleared. This is similar to wait_on_bit(), but calls io_schedule() instead of schedule() for the actual waiting.
Returned value will be zero if the bit was cleared, or non-zero if the process received a signal and the mode permitted wakeup on that signal.
wait for a bit to be cleared or a timeout elapses
Parameters
Description
Use the standard hashed waitqueue table to wait for a bit to be cleared. This is similar to wait_on_bit(), except also takes a timeout parameter.
Returned value will be zero if the bit was cleared before the timeout elapsed, or non-zero if the timeout elapsed or process received a signal and the mode permitted wakeup on that signal.
wait for a bit to be cleared
Parameters
Description
Use the standard hashed waitqueue table to wait for a bit to be cleared, and allow the waiting action to be specified. This is like wait_on_bit() but allows fine control of how the waiting is done.
Returned value will be zero if the bit was cleared, or non-zero if the process received a signal and the mode permitted wakeup on that signal.
wait for a bit to be cleared, when wanting to set it
Parameters
Description
There is a standard hashed waitqueue table for generic use. This is the part of the hashtable's accessor API that waits on a bit when one intends to set it, for instance, trying to lock bitflags. For instance, if one were to have waiters trying to set a bitflag and waiting for it to clear before setting it, one would call wait_on_bit_lock() in threads waiting to be able to set the bit. One uses wait_on_bit_lock() where one is waiting for the bit to clear with the intention of setting it, and when done, clearing it.
Returns zero if the bit was (eventually) found to be clear and was set. Returns non-zero if a signal was delivered to the process and the mode allows that signal to wake the process.
wait for a bit to be cleared, when wanting to set it
Parameters
Description
Use the standard hashed waitqueue table to wait for a bit to be cleared and then to atomically set it. This is similar to wait_on_bit(), but calls io_schedule() instead of schedule() for the actual waiting.
Returns zero if the bit was (eventually) found to be clear and was set. Returns non-zero if a signal was delivered to the process and the mode allows that signal to wake the process.
wait for a bit to be cleared, when wanting to set it
Parameters
Description
Use the standard hashed waitqueue table to wait for a bit to be cleared and then to set it, and allow the waiting action to be specified. This is like wait_on_bit() but allows fine control of how the waiting is done.
Returns zero if the bit was (eventually) found to be clear and was set. Returns non-zero if a signal was delivered to the process and the mode allows that signal to wake the process.
Wait for an atomic_t to become 0
Parameters
Description
Wait for an atomic_t to become 0. We abuse the bit-wait waitqueue table for the purpose of getting a waitqueue, but we set the key to a bit number outside of the target ‘word’.
wake up threads blocked on a waitqueue.
Parameters
Description
It may be assumed that this function implies a write memory barrier before changing the task state if and only if any tasks are woken up.
wake up threads blocked on a waitqueue.
Parameters
Description
The sync wakeup differs that the waker knows that it will schedule away soon, so while the target thread will be woken up, it will not be migrated to another CPU - ie. the two threads are ‘synchronized’ with each other. This can prevent needless bouncing between CPUs.
On UP it can prevent extra preemption.
It may be assumed that this function implies a write memory barrier before changing the task state if and only if any tasks are woken up.
clean up after waiting in a queue
Parameters
Description
Sets current thread back to running state and removes the wait descriptor from the given waitqueue if still queued.
wake up a waiter on a bit
Parameters
Description
There is a standard hashed waitqueue table for generic use. This is the part of the hashtable’s accessor API that wakes up waiters on a bit. For instance, if one were to have waiters on a bitflag, one would call wake_up_bit() after clearing the bit.
In order for this to function properly, as it uses waitqueue_active() internally, some kind of memory barrier must be done prior to calling this. Typically, this will be smp_mb__after_atomic(), but in some cases where bitflags are manipulated non-atomically under a lock, one may need to use a less regular barrier, such as fs/inode.c's smp_mb(), because spin_unlock() does not guarantee a memory barrier.
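A sketch (bit name and helpers hypothetical) showing wake_up_bit() paired with wait_on_bit() and the barrier discussed above:

    #include <linux/wait.h>
    #include <linux/bitops.h>
    #include <linux/sched.h>

    #define MY_FLAG_BUSY    0                       /* hypothetical bit number */
    static unsigned long my_flags;

    static int wait_until_idle(void)
    {
            /* Sleep until the busy bit is cleared. */
            return wait_on_bit(&my_flags, MY_FLAG_BUSY, TASK_UNINTERRUPTIBLE);
    }

    static void mark_idle(void)
    {
            clear_bit(MY_FLAG_BUSY, &my_flags);
            smp_mb__after_atomic();                 /* order clear before waitqueue check */
            wake_up_bit(&my_flags, MY_FLAG_BUSY);
    }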
Wake up a waiter on an atomic_t
Parameters
Description
Wake up anyone waiting for the atomic_t to go to zero.
Abuse the bit-waker function and its waitqueue hash table set (the atomic_t check is done by the waiter's wake function, not by the waker itself).
Set a ktime_t variable from a seconds/nanoseconds value
Parameters
Return
The ktime_t representation of the value.
Compares two ktime_t variables for less, greater or equal
Parameters
Return
cmp1 < cmp2: return < 0; cmp1 == cmp2: return 0; cmp1 > cmp2: return > 0.
Compare if a ktime_t value is bigger than another one.
Parameters
Return
true if cmp1 happened after cmp2.
Compare if a ktime_t value is smaller than another one.
Parameters
Return
true if cmp1 happened before cmp2.
convert a ktime_t variable to timespec format only if the variable contains data
Parameters
Return
true if there was a successful conversion, false if kt was 0.
convert a ktime_t variable to timespec64 format only if the variable contains data
Parameters
Return
true if there was a successful conversion, false if kt was 0.
the basic hrtimer structure
Definition
struct hrtimer {
struct timerqueue_node node;
ktime_t _softexpires;
enum hrtimer_restart (* function) (struct hrtimer *);
struct hrtimer_clock_base * base;
u8 state;
u8 is_rel;
};
Members
Description
The hrtimer structure must be initialized by hrtimer_init()
simple sleeper structure
Definition
struct hrtimer_sleeper {
struct hrtimer timer;
struct task_struct * task;
};
Members
Description
task is set to NULL when the timer expires.
the timer base for a specific clock
Definition
struct hrtimer_clock_base {
struct hrtimer_cpu_base * cpu_base;
int index;
clockid_t clockid;
struct timerqueue_head active;
ktime_t (* get_time) (void);
ktime_t offset;
};
Members
(re)start an hrtimer on the current CPU
Parameters
forward the timer expiry so it expires after now
Parameters
Description
Forward the timer expiry so it will expire after the current time of the hrtimer clock base. Returns the number of overruns.
Can be safely called from the callback function of timer. If called from other contexts timer must neither be enqueued nor running the callback and the caller needs to take care of serialization.
Note
This only updates the timer expiry value and does not requeue the timer.
forward the timer expiry
Parameters
Description
Forward the timer expiry so it will expire in the future. Returns the number of overruns.
Can be safely called from the callback function of timer. If called from other contexts timer must neither be enqueued nor running the callback and the caller needs to take care of serialization.
Note
This only updates the timer expiry value and does not requeue the timer.
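A hedged sketch of a self-rearming (periodic) hrtimer that uses hrtimer_forward_now() from its callback; tick, period and the 10 ms interval are hypothetical:

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer tick;                     /* hypothetical */
    static ktime_t period;

    static enum hrtimer_restart tick_fn(struct hrtimer *t)
    {
            /* Push the expiry past 'now' by one period, then ask to be restarted. */
            hrtimer_forward_now(t, period);
            return HRTIMER_RESTART;
    }

    static void start_tick(void)
    {
            period = ms_to_ktime(10);
            hrtimer_init(&tick, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
            tick.function = tick_fn;
            hrtimer_start(&tick, period, HRTIMER_MODE_REL);
    }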
(re)start an hrtimer on the current CPU
Parameters
Parameters
Return
0 when the timer was not active, 1 when the timer was active
Parameters
Return
0 when the timer was not active, 1 when the timer was active
get remaining time for the timer
Parameters
initialize a timer to the given clock
Parameters
sleep until timeout
Parameters
Description
Make the current task sleep until the given expiry time has elapsed. The routine will return immediately unless the current task state has been set (see set_current_state()).
The delta argument gives the kernel the freedom to schedule the actual wakeup to a time that is both power and performance friendly. The kernel gives the normal best effort behavior for "expires + delta", but may decide to fire the timer earlier, though no earlier than expires.
You can set the task state as follows -
TASK_UNINTERRUPTIBLE - at least timeout time is guaranteed to pass before the routine returns unless the current task is explicitly woken up, (e.g. by wake_up_process()).
TASK_INTERRUPTIBLE - the routine may return early if a signal is delivered to the current task or the current task is explicitly woken up.
The current task state is guaranteed to be TASK_RUNNING when this routine returns.
Returns 0 when the timer has expired. If the task was woken before the timer expired by a signal (only possible in state TASK_INTERRUPTIBLE) or by an explicit wakeup, it returns -EINTR.
sleep until timeout
Parameters
Description
Make the current task sleep until the given expiry time has elapsed. The routine will return immediately unless the current task state has been set (see set_current_state()).
You can set the task state as follows -
TASK_UNINTERRUPTIBLE - at least timeout time is guaranteed to pass before the routine returns unless the current task is explicitly woken up, (e.g. by wake_up_process()).
TASK_INTERRUPTIBLE - the routine may return early if a signal is delivered to the current task or the current task is explicitly woken up.
The current task state is guaranteed to be TASK_RUNNING when this routine returns.
Returns 0 when the timer has expired. If the task was woken before the timer expired by a signal (only possible in state TASK_INTERRUPTIBLE) or by an explicit wakeup, it returns -EINTR.
A struct for workqueue attributes.
Definition
struct workqueue_attrs {
int nice;
cpumask_var_t cpumask;
bool no_numa;
};
Members
disable NUMA affinity
Unlike other fields, no_numa isn't a property of a worker_pool. It only modifies how apply_workqueue_attrs() selects pools and thus doesn't participate in pool hash calculations or equality comparisons.
Description
This can be used to change attributes of an unbound workqueue.
Find out whether a work item is currently pending
Parameters
Find out whether a delayable work item is currently pending
Parameters
allocate a workqueue
Parameters
Description
Allocate a workqueue with the specified parameters. For detailed information on WQ_* flags, please refer to Documentation/core-api/workqueue.rst.
The __lock_name macro dance is to guarantee that a single lock_class_key doesn't end up with different names, which isn't allowed by lockdep.
Return
Pointer to the allocated workqueue on success, NULL on failure.
allocate an ordered workqueue
Parameters
Description
Allocate an ordered workqueue. An ordered workqueue executes at most one work item at any given time in the queued order. They are implemented as unbound workqueues with max_active of one.
Return
Pointer to the allocated workqueue on success, NULL on failure.
queue work on a workqueue
Parameters
Description
Returns false if work was already on a queue, true otherwise.
We queue the work to the CPU on which it was submitted, but if the CPU dies it can be processed by another CPU.
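An illustrative sketch of allocating a workqueue and queueing a work item on it; my_wq, my_work and my_work_fn are hypothetical names:

    #include <linux/workqueue.h>
    #include <linux/errno.h>

    static struct workqueue_struct *my_wq;          /* hypothetical */
    static struct work_struct my_work;

    static void my_work_fn(struct work_struct *work)
    {
            /* Runs later, in process context. */
    }

    static int my_wq_setup(void)
    {
            my_wq = alloc_workqueue("my_wq", WQ_UNBOUND, 0);
            if (!my_wq)
                    return -ENOMEM;
            INIT_WORK(&my_work, my_work_fn);
            queue_work(my_wq, &my_work);            /* false if already queued */
            return 0;
    }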
queue work on a workqueue after delay
Parameters
Description
Equivalent to queue_delayed_work_on() but tries to use the local CPU.
modify delay of or queue a delayed work
Parameters
Description
mod_delayed_work_on() on local CPU.
put work task on a specific cpu
Parameters
Description
This puts a job on a specific cpu
put work task in global workqueue
Parameters
Description
Returns false if work was already on the kernel-global workqueue and true otherwise.
This puts a job in the kernel-global workqueue if it was not already queued and leaves it in the same position on the kernel-global workqueue otherwise.
ensure that any scheduled work has run to completion.
Parameters
Description
Forces execution of the kernel-global workqueue and blocks until its completion.
Think twice before calling this function! It’s very easy to get into trouble if you don’t take great care. Either of the following situations will lead to deadlock:
One of the work items currently on the workqueue needs to acquire a lock held by your code or its caller.
Your code is running in the context of a work routine.
They will be detected by lockdep when they occur, but the first might not occur very often. It depends on what work items are on the workqueue and what locks they need, which you have no control over.
In most situations flushing the entire workqueue is overkill; you merely need to know that a particular work item isn’t queued and isn’t running. In such cases you should use cancel_delayed_work_sync() or cancel_work_sync() instead.
queue work in global workqueue on CPU after delay
Parameters
Description
After waiting for a given time this puts a job in the kernel-global workqueue on the specified CPU.
put work task in global workqueue after delay
Parameters
Description
After waiting for a given time this puts a job in the kernel-global workqueue.
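A sketch of the common self-rearming delayed-work pattern on the kernel-global workqueue; poll_work, poll_fn and the 100 ms interval are hypothetical:

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    static struct delayed_work poll_work;           /* hypothetical */

    static void poll_fn(struct work_struct *work)
    {
            /* ... do the periodic work, then re-arm ... */
            schedule_delayed_work(&poll_work, msecs_to_jiffies(100));
    }

    static void start_polling(void)
    {
            INIT_DELAYED_WORK(&poll_work, poll_fn);
            schedule_delayed_work(&poll_work, msecs_to_jiffies(100));
    }

    static void stop_polling(void)
    {
            cancel_delayed_work_sync(&poll_work);   /* cancel and wait, see below */
    }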
queue work on specific cpu
Parameters
Description
We queue the work to a specific CPU, the caller must ensure it can’t go away.
Return
false if work was already on a queue, true otherwise.
queue work on specific CPU after delay
Parameters
Return
false if work was already on a queue, true otherwise. If delay is zero and dwork is idle, it will be scheduled for immediate execution.
modify delay of or queue a delayed work on specific CPU
Parameters
Description
If dwork is idle, equivalent to queue_delayed_work_on(); otherwise, modify dwork‘s timer so that it expires after delay. If delay is zero, work is guaranteed to be scheduled immediately regardless of its current state.
Return
false if dwork was idle and queued, true if dwork was pending and its timer was modified.
This function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details.
ensure that any scheduled work has run to completion.
Parameters
Description
This function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.
drain a workqueue
Parameters
Description
Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on wq can queue further work items on it. wq is flushed repeatedly until it becomes empty. The number of flushes is determined by the depth of chaining and should be relatively short. Whine if it takes too long.
wait for a work to finish executing the last queueing instance
Parameters
Description
Wait until work has finished execution. work is guaranteed to be idle on return if it hasn’t been requeued since flush started.
Return
true if flush_work() waited for the work to finish execution, false if it was already idle.
cancel a work and wait for it to finish
Parameters
Description
Cancel work and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, work is guaranteed to be not pending or executing on any CPU.
cancel_work_sync(delayed_work->work) must not be used for delayed_work’s. Use cancel_delayed_work_sync() instead.
The caller must ensure that the workqueue on which work was last queued can’t be destroyed before this function returns.
Return
true if work was pending, false otherwise.
wait for a dwork to finish executing the last queueing
Parameters
Description
Delayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of dwork.
Return
true if flush_work() waited for the work to finish execution, false if it was already idle.
cancel a delayed work
Parameters
Description
Kill off a pending delayed_work.
Return
true if dwork was pending and canceled; false if it wasn’t pending.
Note
The work callback function may still be running on return, unless it returns true and the work doesn’t re-arm itself. Explicitly flush or use cancel_delayed_work_sync() to wait on it.
This function is safe to call from any context including IRQ handler.
cancel a delayed work and wait for it to finish
Parameters
Description
This is cancel_work_sync() for delayed works.
Return
true if dwork was pending, false otherwise.
reliably execute the routine with user context
Parameters
Description
Executes the function immediately if process context is available, otherwise schedules the function for delayed execution.
Return
0 if the function was executed immediately, 1 if it was scheduled for delayed execution.
safely terminate a workqueue
Parameters
Description
Safely destroy a workqueue. All work currently pending will be done first.
adjust max_active of a workqueue
Parameters
Description
Set max_active of wq to max_active.
Context
Don’t call from IRQ context.
test whether a workqueue is congested
Parameters
Description
Test whether wq‘s cpu workqueue for cpu is congested. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.
If cpu is WORK_CPU_UNBOUND, the test is performed on the local CPU. Note that both per-cpu and unbound workqueues may be associated with multiple pool_workqueues which have separate congested states. A workqueue being congested on one CPU doesn't mean the workqueue is also congested on other CPUs / NUMA nodes.
Return
true if congested, false otherwise.
test whether a work is currently pending or running
Parameters
Description
Test whether work is currently pending or running. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.
Return
OR’d bitmask of WORK_BUSY_* bits.
run a function in thread context on a particular cpu
Parameters
Description
It is up to the caller to ensure that the cpu doesn’t go offline. The caller must not hold any locks which would prevent fn from completing.
Return
The value fn returns.
Wait for TASK_STOPPED or TASK_TRACED
Parameters
Description
Handle sys_wait4() work for p in state TASK_STOPPED or TASK_TRACED.
Context
read_lock(tasklist_lock), which is released if return value is non-zero. Also, grabs and releases p->sighand->siglock.
Return
0 if wait condition didn’t exist and search for other wait conditions should continue. Non-zero return, -errno on failure and p‘s pid on success, implies that tasklist_lock is released and wait condition search should terminate.
set jobctl pending bits
Parameters
Description
Set mask in task->jobctl. mask must be a subset of JOBCTL_PENDING_MASK | JOBCTL_STOP_CONSUME | JOBCTL_STOP_SIGMASK | JOBCTL_TRAPPING. If stop signo is being set, the existing signo is cleared. If task is already being killed or exiting, this function becomes a noop.
Context
Must be called with task->sighand->siglock held.
Return
true if mask is set, false if made noop because task was dying.
clear jobctl trapping bit
Parameters
Description
If JOBCTL_TRAPPING is set, a ptracer is waiting for us to enter TRACED. Clear it and wake up the ptracer. Note that we don’t need any further locking. task->siglock guarantees that task->parent points to the ptracer.
Context
Must be called with task->sighand->siglock held.
clear jobctl pending bits
Parameters
Description
Clear mask from task->jobctl. mask must be subset of JOBCTL_PENDING_MASK. If JOBCTL_STOP_PENDING is being cleared, other STOP bits are cleared together.
If clearing of mask leaves no stop or trap pending, this function calls task_clear_jobctl_trapping().
Context
Must be called with task->sighand->siglock held.
participate in a group stop
Parameters
Description
task has JOBCTL_STOP_PENDING set and is participating in a group stop. Group stop states are cleared and the group stop count is consumed if JOBCTL_STOP_CONSUME was set. If the consumption completes the group stop, the appropriate SIGNAL_* flags are set.
Context
Must be called with task->sighand->siglock held.
Return
true if group stop completion should be notified to the parent, false otherwise.
schedule trap to notify ptracer
Parameters
Description
This function schedules sticky ptrace trap which is cleared on the next TRAP_STOP to notify ptracer of an event. t must have been seized by ptracer.
If t is running, STOP trap will be taken. If trapped for STOP and ptracer is listening for events, tracee is woken up so that it can re-trap for the new event. If trapped otherwise, STOP trap will be eventually taken without returning to userland after the existing traps are finished by PTRACE_CONT.
Context
Must be called with task->sighand->siglock held.
notify parent of stopped/continued state change
Parameters
Description
Notify tsk‘s parent that the stopped/continued state has changed. If for_ptracer is false, tsk‘s group leader notifies to its real parent. If true, tsk reports to tsk->parent which should be the ptracer.
Context
Must be called with tasklist_lock at least read locked.
handle group stop for SIGSTOP and other stop signals
Parameters
Description
If JOBCTL_STOP_PENDING is not set yet, initiate group stop with signr and participate in it. If already set, participate in the existing group stop. If participated in a group stop (and thus slept), true is returned with siglock released.
If ptraced, this function doesn’t handle stop itself. Instead, JOBCTL_TRAP_STOP is scheduled and false is returned with siglock untouched. The caller must ensure that INTERRUPT trap handling takes places afterwards.
Context
Must be called with current->sighand->siglock held, which is released on true return.
Return
false if group stop is already cancelled or ptrace trap is scheduled. true if participated in group stop.
take care of ptrace jobctl traps
Parameters
Description
When PT_SEIZED, it’s used for both group stop and explicit SEIZE/INTERRUPT traps. Both generate PTRACE_EVENT_STOP trap with accompanying siginfo. If stopped, lower eight bits of exit_code contain the stop signal; otherwise, SIGTRAP.
When !PT_SEIZED, it’s used only for group stop trap with stop signal number as exit_code and no siginfo.
Context
Must be called with current->sighand->siglock held, which may be released and re-acquired before returning with intervening sleep.
Parameters
Description
This function should be called when a signal has successfully been delivered. It updates the blocked signals accordingly (ksig->ka.sa.sa_mask is always blocked, and the signal itself is blocked unless SA_NODEFER is set in ksig->ka.sa.sa_flags). Tracing is notified.
restart a system call
Parameters
change current->blocked mask
Parameters
Description
It is wrong to change ->blocked directly, this helper should be used to ensure the process can’t miss a shared signal we are going to block.
change the list of currently blocked signals
Parameters
examine a pending signal that has been raised while blocked
Parameters
wait for queued signals specified in which
Parameters
synchronously wait for queued signals specified in uthese
Parameters
send a signal to a process
Parameters
send signal to one specific thread
Parameters
Description
This syscall also checks the tgid and returns -ESRCH even if the PID exists but no longer belongs to the target process. This method solves the problem of threads exiting and PIDs getting reused.
send signal to one specific task
Parameters
Description
Send a signal to only one task, even if it’s a CLONE_THREAD task.
send signal information to a process
Parameters
examine pending signals
Parameters
examine and change blocked signals
Parameters
Description
Some platforms have their own version with special arguments; others support only sys_rt_sigprocmask.
alter an action taken by a process
Parameters
replace the signal mask with the unewset value until a signal is received
Parameters
create a kthread on the current node
Parameters
Description
This macro will create a kthread on the current node, leaving it in the stopped state. This is just a helper for kthread_create_on_node(); see the documentation there for more details.
create and wake a thread.
Parameters
Description
Convenient wrapper for kthread_create() followed by wake_up_process(). Returns the kthread or ERR_PTR(-ENOMEM).
should this kthread return now?
Parameters
Description
When someone calls kthread_stop() on your kthread, it will be woken and this will return true. You should then return, and your return value will be passed through to kthread_stop().
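A typical main loop built on kthread_should_stop(); my_thread_fn is hypothetical. A matching creator would use something like kthread_run(my_thread_fn, NULL, "my_thread") and later kthread_stop() on the returned task.

    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include <linux/jiffies.h>

    static int my_thread_fn(void *data)
    {
            while (!kthread_should_stop()) {
                    /* ... do one unit of work ... */
                    schedule_timeout_interruptible(HZ);
            }
            return 0;       /* passed back to kthread_stop() */
    }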
should this kthread park now?
Parameters
Description
When someone calls kthread_park() on your kthread, it will be woken and this will return true. You should then do the necessary cleanup and call kthread_parkme().
Similar to kthread_should_stop(), but this keeps the thread alive and in a park position. kthread_unpark() “restarts” the thread and calls the thread function again.
should this freezable kthread return now?
Parameters
Description
kthread_should_stop() for freezable kthreads, which will enter refrigerator if necessary. This function is safe from kthread_stop() / freezer deadlock and freezable kthreads should use this function instead of calling try_to_freeze() directly.
create a kthread.
Parameters
Description
This helper function creates and names a kernel thread. The thread will be stopped: use wake_up_process() to start it. See also kthread_run(). The new thread has SCHED_NORMAL policy and is affine to all CPUs.
If the thread is going to be bound on a particular cpu, give its node in node, to get NUMA affinity for the kthread stack, or else give NUMA_NO_NODE. When woken, the thread will run threadfn() with data as its argument. threadfn() can either call do_exit() directly if it is a standalone thread for which no one will call kthread_stop(), or return when kthread_should_stop() is true (which means kthread_stop() has been called). The return value should be zero or a negative error number; it will be passed to kthread_stop().
Returns a task_struct or ERR_PTR(-ENOMEM) or ERR_PTR(-EINTR).
bind a just-created kthread to a cpu.
Parameters
Description
This function is equivalent to set_cpus_allowed(), except that cpu doesn’t need to be online, and the thread must be stopped (i.e., just returned from kthread_create()).
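A minimal sketch of the create-then-bind sequence just described, reusing the illustrative my_thread_fn from the earlier example; the CPU number is arbitrary and the name format is made up:

static int start_bound_worker(int cpu)
{
        struct task_struct *t;

        t = kthread_create_on_node(my_thread_fn, NULL, cpu_to_node(cpu),
                                   "bound-worker/%d", cpu);
        if (IS_ERR(t))
                return PTR_ERR(t);

        kthread_bind(t, cpu);           /* legal only while the thread is still stopped */
        wake_up_process(t);             /* now runs my_thread_fn() on that CPU */
        return 0;
}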
unpark a thread created by kthread_create().
Parameters
Description
Sets kthread_should_park() for k to return false, wakes it, and waits for it to return. If the thread is marked percpu then it is bound to the cpu again.
park a thread created by kthread_create().
Parameters
Description
Sets kthread_should_park() for k to return true, wakes it, and waits for it to return. This can also be called after kthread_create() instead of calling wake_up_process(): the thread will park without calling threadfn().
Returns 0 if the thread is parked, -ENOSYS if the thread exited. If called by the kthread itself just the park bit is set.
stop a thread created by kthread_create().
Parameters
Description
Sets kthread_should_stop() for k to return true, wakes it, and waits for it to exit. This can also be called after kthread_create() instead of calling wake_up_process(): the thread will exit without calling threadfn().
If threadfn() may call do_exit() itself, the caller must ensure task_struct can’t go away.
Returns the result of threadfn(), or -EINTR if wake_up_process() was never called.
kthread function to process kthread_worker
Parameters
Description
This function implements the main cycle of kthread worker. It processes work_list until it is stopped with kthread_stop(). It sleeps when the queue is empty.
The works are not allowed to keep any locks, disable preemption or interrupts when they finish. There is defined a safe point for freezing when one work finishes and before a new one is started.
Also the works must not be handled by more than one worker at the same time, see also kthread_queue_work().
create a kthread worker
Parameters
Description
Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM) when the needed structures could not get allocated, and ERR_PTR(-EINTR) when the worker was SIGKILLed.
create a kthread worker and bind it to a given CPU and the associated NUMA node.
Parameters
Description
Use a valid CPU number if you want to bind the kthread worker to the given CPU and the associated NUMA node.
A good practice is to include the cpu number in the worker name as well. For example, use kthread_create_worker_on_cpu(cpu, “helper/%d”, cpu).
Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM) when the needed structures could not get allocated, and ERR_PTR(-EINTR) when the worker was SIGKILLed.
queue a kthread_work
Parameters
Description
Queue work to work processor task for async execution. task must have been created with kthread_worker_create(). Returns true if work was successfully queued, false if it was already pending.
Reinitialize the work if it needs to be used by another worker. For example, when the worker was stopped and started again.
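A minimal sketch of the worker/work pairing described above, assuming the kthread_create_worker(), kthread_init_work(), kthread_flush_work() and kthread_destroy_worker() helpers that accompany this API; my_work_fn and the names are illustrative:

#include <linux/kthread.h>

static struct kthread_worker *my_worker;
static struct kthread_work my_work;

static void my_work_fn(struct kthread_work *work)
{
        /* executes in my_worker's thread context */
}

static int setup_worker(void)
{
        my_worker = kthread_create_worker(0, "example-worker");
        if (IS_ERR(my_worker))
                return PTR_ERR(my_worker);

        kthread_init_work(&my_work, my_work_fn);
        kthread_queue_work(my_worker, &my_work);        /* async execution */
        kthread_flush_work(&my_work);                   /* wait for completion */
        return 0;
}

static void teardown_worker(void)
{
        kthread_destroy_worker(my_worker);
}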
callback that queues the associated kthread delayed work when the timer expires.
Parameters
Description
The format of the function is defined by struct timer_list. It should have been called from irqsafe timer with irq already off.
queue the associated kthread work after a delay.
Parameters
Description
If the work has not been pending it starts a timer that will queue the work after the given delay. If delay is zero, it queues the work immediately.
Return
false if the work has already been pending. It means that either the timer was running or the work was queued. It returns true otherwise.
flush a kthread_work
Parameters
Description
If work is queued or executing, wait for it to finish execution.
modify delay of or queue a kthread delayed work
Parameters
Description
If dwork is idle, equivalent to kthread_queue_delayed_work(). Otherwise, modify dwork‘s timer so that it expires after delay. If delay is zero, work is guaranteed to be queued immediately.
Return
true if dwork was pending and its timer was modified, false otherwise.
A special case is when the work is being canceled in parallel. It might be caused either by the real kthread_cancel_delayed_work_sync() or yet another kthread_mod_delayed_work() call. We let the other command win and return false here. The caller is supposed to synchronize these operations in a reasonable way.
This function is safe to call from any context including IRQ handler. See __kthread_cancel_work() and kthread_delayed_work_timer_fn() for details.
cancel a kthread work and wait for it to finish
Parameters
Description
Cancel work and wait for its execution to finish. This function can be used even if the work re-queues itself. On return from this function, work is guaranteed to be not pending or executing on any CPU.
kthread_cancel_work_sync(delayed_work->work) must not be used for delayed_work’s. Use kthread_cancel_delayed_work_sync() instead.
The caller must ensure that the worker on which work was last queued can’t be destroyed before this function returns.
Return
true if work was pending, false otherwise.
cancel a kthread delayed work and wait for it to finish.
Parameters
Description
This is kthread_cancel_work_sync() for delayed works.
Return
true if dwork was pending, false otherwise.
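Continuing the illustrative worker and handler from the previous sketch, the delayed-work calls documented above combine roughly as follows (the delays are arbitrary):

static struct kthread_delayed_work my_dwork;

static void setup_delayed(void)
{
        kthread_init_delayed_work(&my_dwork, my_work_fn);

        /* run my_work_fn() on my_worker roughly 50 ms from now */
        kthread_queue_delayed_work(my_worker, &my_dwork, msecs_to_jiffies(50));

        /* push the expiry out (or re-queue the work if it already went idle) */
        kthread_mod_delayed_work(my_worker, &my_dwork, msecs_to_jiffies(200));
}

static void teardown_delayed(void)
{
        /* neither pending nor running once this returns */
        kthread_cancel_delayed_work_sync(&my_dwork);
}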
flush all current works on a kthread_worker
Parameters
Description
Wait until all currently executing or pending works on worker are finished.
destroy a kthread worker
Parameters
Description
Flush and destroy worker. The simple flush is enough because the kthread worker API is used only in trivial scenarios. There are no multi-step state machines needed.
generate and return the path associated with a given kobj and kset pair.
Parameters
Description
The result must be freed by the caller with kfree().
Set the name of a kobject
Parameters
Description
This sets the name of the kobject. If you have already added the kobject to the system, you must call kobject_rename() in order to change the name of the kobject.
initialize a kobject structure
Parameters
Description
This function will properly initialize a kobject such that it can then be passed to the kobject_add() call.
After this function is called, the kobject MUST be cleaned up by a call to kobject_put(), not by a call to kfree directly to ensure that all of the memory is cleaned up properly.
the main kobject add function
Parameters
Description
The kobject name is set and added to the kobject hierarchy in this function.
If parent is set, then the parent of the kobj will be set to it. If parent is NULL, then the parent of the kobj will be set to the kobject associated with the kset assigned to this kobject. If no kset is assigned to the kobject, then the kobject will be located in the root of the sysfs tree.
If this function returns an error, kobject_put() must be called to properly clean up the memory associated with the object. Under no instance should the kobject that is passed to this function be directly freed with a call to kfree(); that can leak memory.
Note, no “add” uevent will be created with this call, the caller should set up all of the necessary sysfs files for the object and then call kobject_uevent() with the UEVENT_ADD parameter to ensure that userspace is properly notified of this kobject’s creation.
initialize a kobject structure and add it to the kobject hierarchy
Parameters
Description
This function combines the call to kobject_init() and kobject_add(). The same error handling after a call to kobject_add() and the same kobject lifetime rules apply here.
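A minimal sketch of the embedded-kobject pattern these functions support; struct my_obj, my_ktype and the name "my_obj" are illustrative, and the release callback is mandatory so that kobject_put() can free the object:

#include <linux/kobject.h>
#include <linux/slab.h>

struct my_obj {
        struct kobject kobj;
        int value;
};

static void my_obj_release(struct kobject *kobj)
{
        kfree(container_of(kobj, struct my_obj, kobj));
}

static struct kobj_type my_ktype = {
        .release = my_obj_release,
};

static struct my_obj *my_obj_create(struct kobject *parent)
{
        struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

        if (!obj)
                return NULL;
        if (kobject_init_and_add(&obj->kobj, &my_ktype, parent, "my_obj")) {
                kobject_put(&obj->kobj);        /* never kfree() directly */
                return NULL;
        }
        kobject_uevent(&obj->kobj, KOBJ_ADD);   /* notify userspace, per the note above */
        return obj;
}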
change the name of an object
Parameters
Description
It is the responsibility of the caller to provide mutual exclusion between two different calls of kobject_rename on the same kobject and to ensure that new_name is valid and won’t conflict with other kobjects.
move object to another parent
Parameters
unlink kobject from hierarchy.
Parameters
increment refcount for object.
Parameters
decrement refcount for object.
Parameters
Description
Decrement the refcount, and if 0, call kobject_cleanup().
create a struct kobject dynamically and register it with sysfs
Parameters
Description
This function creates a kobject structure dynamically and registers it with sysfs. When you are finished with this structure, call kobject_put() and the structure will be dynamically freed when it is no longer being used.
If the kobject was not able to be created, NULL will be returned.
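An illustrative use of this helper, creating a bare kobject named "example" under kernel_kobj (/sys/kernel) and releasing it with kobject_put(); the names are made up for this sketch:

static struct kobject *example_kobj;

static int __init example_init(void)
{
        example_kobj = kobject_create_and_add("example", kernel_kobj);
        if (!example_kobj)
                return -ENOMEM;         /* NULL on failure, as noted above */
        return 0;
}

static void __exit example_exit(void)
{
        kobject_put(example_kobj);      /* frees the dynamically created kobject */
}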
initialize and add a kset.
Parameters
remove a kset.
Parameters
search for object in kset.
Parameters
Description
Lock kset via kset->subsys, and iterate over kset->list, looking for a matching kobject. If matching object is found take a reference and return the object.
create a struct kset dynamically and add it to sysfs
Parameters
Description
This function creates a kset structure dynamically and registers it with sysfs. When you are finished with this structure, call kset_unregister() and the structure will be dynamically freed when it is no longer being used.
If the kset was not able to be created, NULL will be returned.
return bits 32-63 of a number
Parameters
Description
A basic shift-right of a 64- or 32-bit quantity. Use this to suppress the “right shift count >= width of type” warning when that quantity is 32-bits.
return bits 0-31 of a number
Parameters
annotation for functions that can sleep
Parameters
Description
this macro will print a stack trace if it is executed in an atomic context (spinlock, irq-handler, ...).
This is a useful debugging help to be able to catch problems early and not be bitten later when the calling function happens to sleep when it is not supposed to.
return absolute value of an argument
Parameters
Return
an absolute value of x.
“scale” a value into range [0, ep_ro)
Parameters
Description
Perform a “reciprocal multiplication” in order to “scale” a value into the range [0, ep_ro), where the upper interval endpoint is right-open. This is useful, for example, for accessing an index of an array containing ep_ro elements. Think of it as a sort of modulus, only that the result isn’t that of modulo. ;) Note that if the initial input is a small value, the result will be 0.
Return
a result based on val in interval [0, ep_ro).
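A small sketch of the array-indexing use mentioned above; jhash is used only as a convenient example hash, and key, seed and nr_buckets are illustrative:

#include <linux/kernel.h>
#include <linux/jhash.h>

static u32 pick_bucket(const u32 *key, u32 seed, u32 nr_buckets)
{
        u32 hash = jhash(key, sizeof(*key), seed);

        /* result is always in [0, nr_buckets) without a division */
        return reciprocal_scale(hash, nr_buckets);
}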
convert a string to an unsigned long
Parameters
Description
Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error. Used as a replacement for the obsolete simple_strtoull. Return code must be checked.
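A minimal sketch of checking the return code as required above; buf is a caller-supplied, NUL-terminated string in this example and base 10 is assumed:

static int parse_count(const char *buf, unsigned long *out)
{
        int ret = kstrtoul(buf, 10, out);       /* base 10 */

        if (ret)
                return ret;                     /* -ERANGE or -EINVAL */
        return 0;
}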
convert a string to a long
Parameters
Description
Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error. Used as a replacement for the obsolete simple_strtoull. Return code must be checked.
printf formatting in the ftrace buffer
Parameters
Note
This function allows a kernel developer to debug fast path sections that printk is not appropriate for. By scattering in various printk like tracing in the code, a developer can quickly see where problems are occurring.
This is intended as a debugging tool for the developer only. Please refrain from leaving trace_printks scattered around in your code. (Extra memory is used for special buffers that are allocated when trace_printk() is used)
A little optimization trick is done here. If there’s only one argument, there’s no need to scan the string for printf formats. The trace_puts() will suffice. But how can we take advantage of using trace_puts() when trace_printk() has only one argument? By stringifying the args and checking the size we can tell whether or not there are args. __stringify((__VA_ARGS__)) will turn into “()0” with a size of 3 when there are no args, anything else will be bigger. All we need to do is define a string to this, and then take its size and compare to 3. If it’s bigger, use do_trace_printk(); otherwise, optimize it to trace_puts(). Then just let gcc optimize the rest.
write a string into the ftrace buffer
Parameters
Note
This is similar to trace_printk() but is made for those really fast paths that a developer wants the least amount of “Heisenbug” effects, where the processing of the print format is still too much.
This function allows a kernel developer to debug fast path sections that printk is not appropriate for. By scattering in various printk like tracing in the code, a developer can quickly see where problems are occurring.
This is intended as a debugging tool for the developer only. Please refrain from leaving trace_puts scattered around in your code. (Extra memory is used for special buffers that are allocated when trace_puts() is used)
Return
return the minimum that is _not_ zero, unless both are zero
Parameters
return a value clamped to a given range with strict typechecking
Parameters
Description
This macro does strict typechecking of lo/hi to make sure they are of the same type as val. See the unnecessary pointer comparisons.
return a value clamped to a given range using a given type
Parameters
Description
This macro does no typechecking and uses temporary variables of type ‘type’ to make all the comparisons.
return a value clamped to a given range using val’s type
Parameters
Description
This macro does no typechecking and uses temporary variables of whatever type the input argument ‘val’ is. This is useful when val is an unsigned type and min and max are literals that will otherwise be assigned a signed integer type.
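As a quick illustrative contrast of the three macros just described (the values and bounds are arbitrary):

static unsigned int clamp_examples(int raw, unsigned int len)
{
        int cooked = clamp(raw, 0, 255);        /* strict typecheck: all three are int */
        u8 level = clamp_t(u8, raw, 0, 255);    /* comparisons done as u8 */
        unsigned int n = clamp_val(len, 1, 64); /* comparisons done in len's own type */

        return cooked + level + n;
}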
cast a member of a structure out to the containing structure
Parameters
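A minimal sketch of the usual container_of() idiom; struct my_device and its embedded list_head are hypothetical:

#include <linux/list.h>

struct my_device {
        struct list_head node;
        int id;
};

static struct my_device *to_my_device(struct list_head *entry)
{
        /* recover the containing structure from a pointer to its member */
        return container_of(entry, struct my_device, node);
}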
print a kernel message
Parameters
Description
This is printk(). It can be called from any context. We want it to work.
We try to grab the console_lock. If we succeed, it’s easy - we log the output and call the console drivers. If we fail to get the semaphore, we place the output into the log buffer and return. The current holder of the console_sem will notice the new output in console_unlock(); and will send it to the consoles before releasing the lock.
One effect of this deferred printing is that code which calls printk() and then changes console_loglevel may break. This is because console_loglevel is inspected when the actual printing occurs.
See also: printf(3)
See the vsnprintf() documentation for format string extensions over C99.
lock the console system for exclusive use.
Parameters
Description
Acquires a lock which guarantees that the caller has exclusive access to the console system and the console_drivers list.
Can sleep, returns nothing.
try to lock the console system for exclusive use.
Parameters
Description
Try to acquire a lock which guarantees that the caller has exclusive access to the console system and the console_drivers list.
returns 1 on success, and 0 on failure to acquire the lock.
unlock the console system
Parameters
Description
Releases the console_lock which the caller holds on the console system and the console driver list.
While the console_lock was held, console output may have been buffered by printk(). If this is the case, console_unlock() emits the output prior to releasing the lock.
If there is output waiting, we wake /dev/kmsg and syslog() users.
console_unlock() may be called from any context.
yield the CPU if required
Parameters
Description
If the console code is currently allowed to sleep, and if this CPU should yield the CPU to another task, do so here.
Must be called within console_lock().
caller-controlled printk ratelimiting
Parameters
Description
printk_timed_ratelimit() returns true if more than interval_msecs milliseconds have elapsed since the last time printk_timed_ratelimit() returned true.
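An illustrative use that emits a warning at most once per second; last_warned and the 1000 ms interval are made up for this sketch:

static unsigned long last_warned;       /* jiffies of the last accepted message */

static void warn_rate_limited(void)
{
        if (printk_timed_ratelimit(&last_warned, 1000))
                printk(KERN_WARNING "example: condition persists\n");
}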
register a kernel log dumper.
Parameters
Description
Adds a kernel log dumper to the system. The dump callback in the structure will be called when the kernel oopses or panics; it must be set. Returns zero on success and -EINVAL or -EBUSY otherwise.
unregister a kmsg dumper.
Parameters
Description
Removes a dump device from the system. Returns zero on success and -EINVAL otherwise.
retrieve one kmsg log line
Parameters
Description
Start at the beginning of the kmsg buffer, with the oldest kmsg record, and copy one record into the provided buffer.
Consecutive calls will return the next available record moving towards the end of the buffer with the youngest messages.
A return value of FALSE indicates that there are no more records to read.
copy kmsg log lines
Parameters
Description
Start at the end of the kmsg buffer and fill the provided buffer with as many of the youngest kmsg records that fit into it. If the buffer is large enough, all available kmsg records will be copied with a single call.
Consecutive calls will fill the buffer with the next block of available older records, not including the earlier retrieved ones.
A return value of FALSE indicates that there are no more records to read.
reset the iterator
Parameters
Description
Reset the dumper’s iterator so that kmsg_dump_get_line() and kmsg_dump_get_buffer() can be called again and used multiple times within the same dump() callback of the dumper.
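A hedged sketch of a dumper combining the register/get_line entries above, assuming the two-argument dump() callback used alongside these helpers; my_dump, my_dumper and the line buffer size are illustrative, and where the copied line goes is left as a comment:

#include <linux/kmsg_dump.h>

static void my_dump(struct kmsg_dumper *dumper, enum kmsg_dump_reason reason)
{
        static char line[256];
        size_t len;

        /* walk the records oldest-first, one line per call */
        while (kmsg_dump_get_line(dumper, true, line, sizeof(line), &len))
                ;       /* persist 'line' (len bytes) somewhere non-volatile here */
}

static struct kmsg_dumper my_dumper = {
        .dump = my_dump,        /* must be set, see kmsg_dump_register() above */
};

/* kmsg_dump_register(&my_dumper) at init; kmsg_dump_unregister(&my_dumper) at exit */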
halt the system
Parameters
Description
Display a message, then perform cleanups.
This function never returns.
Parameters
Description
If something bad has gone wrong, you’ll want lockdebug_ok = false, but for some noteworthy-but-not-corrupting cases, it can be set to true.
initialize a sleep-RCU structure
Parameters
Description
Must invoke this on a given srcu_struct before passing that srcu_struct to any other function. Each srcu_struct represents a separate domain of SRCU protection.
deconstruct a sleep-RCU structure
Parameters
Description
Must invoke this after you are finished using a given srcu_struct that was initialized via init_srcu_struct(), else you leak memory.
wait for prior SRCU read-side critical-section completion
Parameters
Description
Wait for the count to drain to zero of both indexes. To avoid the possible starvation of synchronize_srcu(), it waits for the count of the index=((->completed & 1) ^ 1) to drain to zero at first, and then flips the completed and waits for the count of the other index.
Can block; must be called from process context.
Note that it is illegal to call synchronize_srcu() from the corresponding SRCU read-side critical section; doing so will result in deadlock. However, it is perfectly legal to call synchronize_srcu() on one srcu_struct from some other srcu_struct’s read-side critical section, as long as the resulting graph of srcu_structs is acyclic.
There are memory-ordering constraints implied by synchronize_srcu(). On systems with more than one CPU, when synchronize_srcu() returns, each CPU is guaranteed to have executed a full memory barrier since the end of its last corresponding SRCU-sched read-side critical section whose beginning preceded the call to synchronize_srcu(). In addition, each CPU having an SRCU read-side critical section that extends beyond the return from synchronize_srcu() is guaranteed to have executed a full memory barrier after the beginning of synchronize_srcu() and before the beginning of that SRCU read-side critical section. Note that these guarantees include CPUs that are offline, idle, or executing in user mode, as well as CPUs that are executing in the kernel.
Furthermore, if CPU A invoked synchronize_srcu(), which returned to its caller on CPU B, then both CPU A and CPU B are guaranteed to have executed a full memory barrier during the execution of synchronize_srcu(). This guarantee applies even if CPU A and CPU B are the same CPU, but again only if the system has more than one CPU.
Of course, these memory-ordering guarantees apply only when synchronize_srcu(), srcu_read_lock(), and srcu_read_unlock() are passed the same srcu_struct structure.
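A minimal sketch of an SRCU-protected pointer using the primitives above; my_srcu, struct my_data and the functions are illustrative, and a single updater is assumed:

#include <linux/srcu.h>
#include <linux/slab.h>

struct my_data { int val; };            /* illustrative payload */

DEFINE_SRCU(my_srcu);
static struct my_data __rcu *global_ptr;

static int my_read(void)
{
        struct my_data *p;
        int idx, val = -1;

        idx = srcu_read_lock(&my_srcu);
        p = srcu_dereference(global_ptr, &my_srcu);
        if (p)
                val = p->val;           /* SRCU readers may even sleep here */
        srcu_read_unlock(&my_srcu, idx);
        return val;
}

static void my_publish(struct my_data *new)     /* single updater assumed */
{
        struct my_data *old = rcu_access_pointer(global_ptr);

        rcu_assign_pointer(global_ptr, new);
        synchronize_srcu(&my_srcu);     /* all pre-existing readers are done */
        kfree(old);
}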
Brute-force SRCU grace period
Parameters
Description
Wait for an SRCU grace period to elapse, but be more aggressive about spinning rather than blocking when waiting.
Note that synchronize_srcu_expedited() has the same deadlock and memory-ordering properties as does synchronize_srcu().
Wait until all in-flight call_srcu() callbacks complete.
Parameters
return batches completed.
Parameters
Description
Report the number of batches, correlated with, but not necessarily precisely the same as, the number of grace periods that have elapsed.
inform RCU that current CPU is entering idle
Parameters
Description
Enter idle mode, in other words, -leave- the mode in which RCU read-side critical sections can occur. (Though RCU read-side critical sections can occur in irq handlers in idle, a possibility handled by irq_enter() and irq_exit().)
We crowbar the ->dynticks_nesting field to zero to allow for the possibility of usermode upcalls having messed up our count of interrupt nesting level during the prior busy period.
inform RCU that current CPU is leaving idle
Parameters
Description
Exit idle mode, in other words, -enter- the mode in which RCU read-side critical sections can occur.
We crowbar the ->dynticks_nesting field to DYNTICK_TASK_NEST to allow for the possibility of usermode upcalls messing up our count of interrupt nesting level during the busy period that is just now starting.
see if RCU thinks that the current CPU is idle
Parameters
Description
If the current CPU is in its idle loop and is neither in an interrupt nor an NMI handler, return true.
wait until an rcu-sched grace period has elapsed.
Parameters
Description
Control will return to the caller some time after a full rcu-sched grace period has elapsed, in other words after all currently executing rcu-sched read-side critical sections have completed. These read-side critical sections are delimited by rcu_read_lock_sched() and rcu_read_unlock_sched(), and may be nested. Note that preempt_disable(), local_irq_disable(), and so on may be used in place of rcu_read_lock_sched().
This means that all preempt_disable code sequences, including NMI and non-threaded hardware-interrupt handlers, in progress on entry will have completed before this primitive returns. However, this does not guarantee that softirq handlers will have completed, since in some kernels, these handlers can run in process context, and can block.
Note that this guarantee implies further memory-ordering guarantees. On systems with more than one CPU, when synchronize_sched() returns, each CPU is guaranteed to have executed a full memory barrier since the end of its last RCU-sched read-side critical section whose beginning preceded the call to synchronize_sched(). In addition, each CPU having an RCU read-side critical section that extends beyond the return from synchronize_sched() is guaranteed to have executed a full memory barrier after the beginning of synchronize_sched() and before the beginning of that RCU read-side critical section. Note that these guarantees include CPUs that are offline, idle, or executing in user mode, as well as CPUs that are executing in the kernel.
Furthermore, if CPU A invoked synchronize_sched(), which returned to its caller on CPU B, then both CPU A and CPU B are guaranteed to have executed a full memory barrier during the execution of synchronize_sched() – even if CPU A and CPU B are the same CPU (but again only if the system has more than one CPU).
This primitive provides the guarantees made by the (now removed) synchronize_kernel() API. In contrast, synchronize_rcu() only guarantees that rcu_read_lock() sections will have completed. In “classic RCU”, these two guarantees happen to be one and the same, but can differ in realtime RCU implementations.
wait until an rcu_bh grace period has elapsed.
Parameters
Description
Control will return to the caller some time after a full rcu_bh grace period has elapsed, in other words after all currently executing rcu_bh read-side critical sections have completed. RCU read-side critical sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(), and may be nested.
See the description of synchronize_sched() for more detailed information on memory ordering guarantees.
Snapshot current RCU state
Parameters
Description
Returns a cookie that is used by a later call to cond_synchronize_rcu() to determine whether or not a full grace period has elapsed in the meantime.
Conditionally wait for an RCU grace period
Parameters
Description
If a full RCU grace period has elapsed since the earlier call to get_state_synchronize_rcu(), just return. Otherwise, invoke synchronize_rcu() to wait for a full grace period.
Yes, this function does not take counter wrap into account. But counter wrap is harmless. If the counter wraps, we have waited for more than 2 billion grace periods (and way more on a 64-bit system!), so waiting for one additional grace period should be just fine.
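A small sketch of the cookie pattern these two functions form; the intervening work is only a placeholder comment:

static void maybe_wait_for_grace_period(void)
{
        unsigned long cookie = get_state_synchronize_rcu();

        /* ... do unrelated work here; a grace period may elapse on its own ... */

        cond_synchronize_rcu(cookie);   /* waits only if one has not elapsed yet */
}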
Snapshot current RCU-sched state
Parameters
Description
Returns a cookie that is used by a later call to cond_synchronize_sched() to determine whether or not a full grace period has elapsed in the meantime.
Conditionally wait for an RCU-sched grace period
Parameters
Description
If a full RCU-sched grace period has elapsed since the earlier call to get_state_synchronize_sched(), just return. Otherwise, invoke synchronize_sched() to wait for a full grace period.
Yes, this function does not take counter wrap into account. But counter wrap is harmless. If the counter wraps, we have waited for more than 2 billion grace periods (and way more on a 64-bit system!), so waiting for one additional grace period should be just fine.
Wait until all in-flight call_rcu_bh() callbacks complete.
Parameters
Wait for in-flight call_rcu_sched() callbacks.
Parameters
wait until a grace period has elapsed.
Parameters
Description
Control will return to the caller some time after a full grace period has elapsed, in other words after all currently executing RCU read-side critical sections have completed. Note, however, that upon return from synchronize_rcu(), the caller might well be executing concurrently with new RCU read-side critical sections that began while synchronize_rcu() was waiting. RCU read-side critical sections are delimited by rcu_read_lock() and rcu_read_unlock(), and may be nested.
See the description of synchronize_sched() for more detailed information on memory ordering guarantees.
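A minimal publish/read/reclaim sketch using the primitives described above; struct cfg and the functions are illustrative, and a single updater is assumed:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct cfg { int value; };              /* illustrative */
static struct cfg __rcu *cur_cfg;

static int cfg_read(void)
{
        struct cfg *c;
        int v = 0;

        rcu_read_lock();
        c = rcu_dereference(cur_cfg);
        if (c)
                v = c->value;           /* no sleeping inside the read side */
        rcu_read_unlock();
        return v;
}

static void cfg_update(struct cfg *new) /* single updater assumed */
{
        struct cfg *old = rcu_access_pointer(cur_cfg);

        rcu_assign_pointer(cur_cfg, new);
        synchronize_rcu();              /* wait out pre-existing readers */
        kfree(old);
}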
Wait until all in-flight call_rcu() callbacks complete.
Parameters
Description
Note that this primitive does not necessarily wait for an RCU grace period to complete. For example, if there are no RCU callbacks queued anywhere in the system, then rcu_barrier() is within its rights to return immediately, without waiting for anything, much less an RCU grace period.
might we be in RCU-sched read-side critical section?
Parameters
Description
If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an RCU-sched read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC, this assumes we are in an RCU-sched read-side critical section unless it can prove otherwise. Note that disabling of preemption (including disabling irqs) counts as an RCU-sched read-side critical section. This is useful for debug checks in functions that require that they be called within an RCU-sched read-side critical section.
Check debug_lockdep_rcu_enabled() to prevent false positives during boot and while lockdep is disabled.
Note that if the CPU is in the idle loop from an RCU point of view (ie: that we are in the section between rcu_idle_enter() and rcu_idle_exit()) then rcu_read_lock_held() returns false even if the CPU did an rcu_read_lock(). The reason for this is that RCU ignores CPUs that are in such a section, considering these as in extended quiescent state, so such a CPU is effectively never in an RCU read-side critical section regardless of what RCU primitives it invokes. This state of affairs is required — we need to keep an RCU-free window in idle where the CPU may possibly enter into low power mode. This way we can notice an extended quiescent state to other CPUs that started a grace period. Otherwise we would delay any grace period as long as we run in the idle task.
Similarly, we avoid claiming an SRCU read lock held if the current CPU is offline.
Expedite future RCU grace periods
Parameters
Description
After a call to this function, future calls to synchronize_rcu() and friends act as the corresponding synchronize_rcu_expedited() function had instead been called.
Cancel prior rcu_expedite_gp() invocation
Parameters
Description
Undo a prior call to rcu_expedite_gp(). If all prior calls to rcu_expedite_gp() are undone by a subsequent call to rcu_unexpedite_gp(), and if the rcu_expedited sysfs/boot parameter is not set, then all subsequent calls to synchronize_rcu() and friends will return to their normal non-expedited behavior.
might we be in RCU read-side critical section?
Parameters
Description
If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an RCU read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC, this assumes we are in an RCU read-side critical section unless it can prove otherwise. This is useful for debug checks in functions that require that they be called within an RCU read-side critical section.
Checks debug_lockdep_rcu_enabled() to prevent false positives during boot and while lockdep is disabled.
Note that rcu_read_lock() and the matching rcu_read_unlock() must occur in the same context, for example, it is illegal to invoke rcu_read_unlock() in process context if the matching rcu_read_lock() was invoked from within an irq handler.
Note that rcu_read_lock() is disallowed if the CPU is either idle or offline from an RCU perspective, so check for those as well.
might we be in RCU-bh read-side critical section?
Parameters
Description
Check for bottom half being disabled, which covers both the CONFIG_PROVE_RCU and not cases. Note that if someone uses rcu_read_lock_bh(), but then later enables BH, lockdep (if enabled) will show the situation. This is useful for debug checks in functions that require that they be called within an RCU read-side critical section.
Check debug_lockdep_rcu_enabled() to prevent false positives during boot.
Note that rcu_read_lock() is disallowed if the CPU is either idle or offline from an RCU perspective, so check for those as well.
Callback function to awaken a task after grace period
Parameters
Description
Awaken the corresponding task now that a grace period has elapsed.
initialize on-stack rcu_head for debugobjects
Parameters
Description
This function informs debugobjects of a new rcu_head structure that has been allocated as an auto variable on the stack. This function is not required for rcu_head structures that are statically defined or that are dynamically allocated on the heap. This function has no effect for !CONFIG_DEBUG_OBJECTS_RCU_HEAD kernel builds.
destroy on-stack rcu_head for debugobjects
Parameters
Description
This function informs debugobjects that an on-stack rcu_head structure is about to go out of scope. As with init_rcu_head_on_stack(), this function is not required for rcu_head structures that are statically defined or that are dynamically allocated on the heap. Also as with init_rcu_head_on_stack(), this function has no effect for !CONFIG_DEBUG_OBJECTS_RCU_HEAD kernel builds.
wait until an rcu-tasks grace period has elapsed.
Parameters
Description
Control will return to the caller some time after a full rcu-tasks grace period has elapsed, in other words after all currently executing rcu-tasks read-side critical sections have elapsed. These read-side critical sections are delimited by calls to schedule(), cond_resched_rcu_qs(), idle execution, userspace execution, calls to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
This is a very specialized primitive, intended only for a few uses in tracing and other situations requiring manipulation of function preambles and profiling hooks. The synchronize_rcu_tasks() function is not (yet) intended for heavy use from multiple CPUs.
Note that this guarantee implies further memory-ordering guarantees. On systems with more than one CPU, when synchronize_rcu_tasks() returns, each CPU is guaranteed to have executed a full memory barrier since the end of its last RCU-tasks read-side critical section whose beginning preceded the call to synchronize_rcu_tasks(). In addition, each CPU having an RCU-tasks read-side critical section that extends beyond the return from synchronize_rcu_tasks() is guaranteed to have executed a full memory barrier after the beginning of synchronize_rcu_tasks() and before the beginning of that RCU-tasks read-side critical section. Note that these guarantees include CPUs that are offline, idle, or executing in user mode, as well as CPUs that are executing in the kernel.
Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned to its caller on CPU B, then both CPU A and CPU B are guaranteed to have executed a full memory barrier during the execution of synchronize_rcu_tasks() – even if CPU A and CPU B are the same CPU (but again only if the system has more than one CPU).
Wait for in-flight call_rcu_tasks() callbacks.
Parameters
Description
Although the current implementation is guaranteed to wait, it is not obligated to, for example, if there are no pending callbacks.
Allocate device resource data
Parameters
Description
Allocate devres of size bytes. The allocated area is zeroed, then associated with release. The returned pointer can be passed to other devres_*() functions.
Return
Pointer to allocated devres on success, NULL on failure.
Resource iterator
Parameters
Description
Call fn for each devres of dev which is associated with release and for which match returns 1.
Return
void
Free device resource data
Parameters
Description
Free devres created with devres_alloc().
Register device resource
Parameters
Description
Register devres res to dev. res should have been allocated using devres_alloc(). On driver detach, the associated release function will be invoked and devres will be freed automatically.
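A hedged sketch of the allocate-then-add pattern described in these entries; struct my_state, its release hook and the iounmap() teardown are illustrative:

#include <linux/device.h>
#include <linux/io.h>
#include <linux/slab.h>

struct my_state { void __iomem *base; };        /* illustrative */

static void my_state_release(struct device *dev, void *res)
{
        struct my_state *st = res;

        iounmap(st->base);              /* undo whatever the resource wraps */
}

static int my_setup(struct device *dev, void __iomem *base)
{
        struct my_state *st;

        st = devres_alloc(my_state_release, sizeof(*st), GFP_KERNEL);
        if (!st)
                return -ENOMEM;
        st->base = base;
        devres_add(dev, st);            /* released automatically on detach */
        return 0;
}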
Find device resource
Parameters
Description
Find the latest devres of dev which is associated with release and for which match returns 1. If match is NULL, it’s considered to match all.
Return
Pointer to found devres, NULL if not found.
Find devres, if non-existent, add one atomically
Parameters
Description
Find the latest devres of dev which has the same release function as new_res and for which match returns 1. If found, new_res is freed; otherwise, new_res is added atomically.
Return
Pointer to found or added devres.
Find a device resource and remove it
Parameters
Description
Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically and returned.
Return
Pointer to removed devres on success, NULL if not found.
Find a device resource and destroy it
Parameters
Description
Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically and freed.
Note that the release function for the resource will not be called, only the devres-allocated data will be freed. The caller becomes responsible for freeing any other data.
Return
0 if devres is found and freed, -ENOENT if not found.
Find a device resource and destroy it, calling release
Parameters
Description
Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically, the release function called and the resource freed.
Return
0 if devres is found and freed, -ENOENT if not found.
Open a new devres group
Parameters
Description
Open a new devres group for dev with id. For id, using a pointer to an object which won’t be used for another group is recommended. If id is NULL, an address-wise unique ID is created.
Return
ID of the new group, NULL on failure.
Close a devres group
Parameters
Description
Close the group identified by id. If id is NULL, the latest open group is selected.
Remove a devres group
Parameters
Description
Remove the group identified by id. If id is NULL, the latest open group is selected. Note that removing a group doesn’t affect any other resources.
Release resources in a devres group
Parameters
Description
Release all resources in the group identified by id. If id is NULL, the latest open group is selected. The selected group and groups properly nested inside the selected group are removed.
Return
The number of released non-group resources.
add a custom action to list of managed resources
Parameters
Description
This adds a custom action to the list of managed resources so that it gets executed as part of standard resource unwinding.
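An illustrative custom action that disables a clock at driver detach; the clock calls come from the common clock API, and my_clk_disable and my_enable_clock are made up for this sketch:

#include <linux/clk.h>
#include <linux/device.h>

static void my_clk_disable(void *data)
{
        clk_disable_unprepare(data);
}

static int my_enable_clock(struct device *dev, struct clk *clk)
{
        int ret = clk_prepare_enable(clk);

        if (ret)
                return ret;

        ret = devm_add_action(dev, my_clk_disable, clk);
        if (ret)
                my_clk_disable(clk);    /* undo by hand if registration failed */
        return ret;
}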
removes previously added custom action
Parameters
Description
Removes instance of action previously added by devm_add_action(). Both action and data should match one of the existing entries.
Resource-managed kmalloc
Parameters
Description
Managed kmalloc. Memory allocated with this function is automatically freed on driver detach. Like all other devres resources, the guaranteed alignment is that of unsigned long long.
Return
Pointer to allocated memory on success, NULL on failure.
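A hedged sketch of managed allocation in a probe path; struct my_priv and my_probe are illustrative driver names:

#include <linux/platform_device.h>
#include <linux/slab.h>

struct my_priv { int irq; };            /* illustrative driver state */

static int my_probe(struct platform_device *pdev)
{
        struct my_priv *priv;

        priv = devm_kmalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
                return -ENOMEM;

        platform_set_drvdata(pdev, priv);
        return 0;               /* no kfree() needed: freed on driver detach */
}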
Allocate resource managed space and copy an existing string into that.
Parameters
Return
Pointer to allocated string on success, NULL on failure.
Allocate resource managed space and format a string into that.
Parameters
Return
Pointer to allocated string on success, NULL on failure.
Allocate resource managed space and format a string into that.
Parameters
Return
Pointer to allocated string on success, NULL on failure.
Resource-managed kfree
Parameters
Description
Free memory allocated with devm_kmalloc().
Resource-managed kmemdup
Parameters
Description
Duplicate a region of memory using resource-managed kmalloc.
Resource-managed __get_free_pages
Parameters
Description
Managed get_free_pages. Memory allocated with this function is automatically freed on driver detach.
Return
Address of allocated memory on success, 0 on failure.
Resource-managed free_pages
Parameters
Description
Free memory allocated with devm_get_free_pages(). Unlike free_pages, there is no need to supply the order.
Resource-managed alloc_percpu
Parameters
Description
Managed alloc_percpu. Per-cpu memory allocated with this function is automatically freed on driver detach.
Return
Pointer to allocated memory on success, NULL on failure.
Resource-managed free_percpu
Parameters
Description
Free memory allocated with devm_alloc_percpu().