This, again, came out of a strange process-scheduling problem (see the earlier post analyzing the Linux group scheduling mechanism). The group scheduling mechanism itself was already clear by then, but during a reboot a lot of kernel call stacks turned out to be blocked on double_rq_lock, and double_rq_lock is triggered from load_balance. The suspicion was that inter-core scheduling had gone wrong and that, under some load scenario, the cores had locked each other up. So I went through the CPU load-balancing code afterwards; this is the write-up.
Kernel version: kernel-3.0.13-0.27.
The walk through the code starts from the load_balance function. Following its callers upward eventually reaches schedule, so let's start there and work downward. In __schedule there is the following:
```c
if (unlikely(!rq->nr_running))
	idle_balance(cpu, rq);
```
This shows when the kernel attempts CPU load balancing: when the current CPU's run queue is empty (rq->nr_running == 0).
There are two ways to balance CPU load: pull and push. Either an idle CPU pulls a process from another, busy CPU's queue onto itself, or a busy CPU pushes one of its processes onto an idle CPU's queue. idle_balance does the pull; the push is covered further below.
Inside idle_balance there is a proc-controlled valve that decides whether the current CPU pulls at all:
```c
if (this_rq->avg_idle < sysctl_sched_migration_cost)
	return;
```
sysctl_sched_migration_cost corresponds to the proc file /proc/sys/kernel/sched_migration_cost. The check means: only when the CPU's average idle time exceeds sysctl_sched_migration_cost (500,000 ns, i.e. 0.5 ms, by default) does it try to pull; otherwise idle_balance returns immediately.
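As a quick way to check this knob on a running system, here is a minimal userspace sketch (my own illustration, not kernel code); it assumes the kernel exposes the file under the path used by this kernel series:

```c
/*
 * Minimal userspace sketch (not kernel code): print the migration-cost knob.
 * Assumes the running kernel exposes /proc/sys/kernel/sched_migration_cost;
 * newer kernels name it sched_migration_cost_ns.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/proc/sys/kernel/sched_migration_cost";
	FILE *fp = fopen(path, "r");
	unsigned long cost_ns;

	if (!fp) {
		perror(path);
		return 1;
	}
	if (fscanf(fp, "%lu", &cost_ns) == 1)
		printf("sched_migration_cost = %lu ns (%.3f ms)\n",
		       cost_ns, cost_ns / 1e6);
	fclose(fp);
	return 0;
}
```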
for_each_domain(this_cpu, sd) then iterates over the scheduling domains the current CPU belongs to. A scheduling domain can be understood intuitively as a group of CPUs, a bit like a task_group; inter-core balancing happens inside such a group. Load balancing has a built-in tension: the more frequently you balance, the worse the CPU cache hit rate gets. Scheduling domains split the CPUs into groups at different levels, and any imbalance that can be fixed at a lower level is never escalated to a higher level, which limits the damage to cache locality.
(The original post shows a diagram of the scheduling-domain hierarchy here.)
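In place of the figure, the domain hierarchy can also be inspected on a live machine. Below is a small userspace sketch (my own, and it assumes a kernel built with CONFIG_SCHED_DEBUG, otherwise the sched_domain directory is absent) that prints the domain names seen by cpu0:

```c
/*
 * Userspace sketch: walk /proc/sys/kernel/sched_domain/cpu0/domain*/name
 * to see cpu0's scheduling-domain hierarchy.  The directory only exists
 * on kernels built with CONFIG_SCHED_DEBUG.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char path[128], name[64];
	int level;

	for (level = 0; ; level++) {
		FILE *fp;

		snprintf(path, sizeof(path),
			 "/proc/sys/kernel/sched_domain/cpu0/domain%d/name", level);
		fp = fopen(path, "r");
		if (!fp)
			break;			/* no more domain levels */
		if (fgets(name, sizeof(name), fp)) {
			name[strcspn(name, "\n")] = '\0';
			printf("cpu0 domain%d: %s\n", level, name);
		}
		fclose(fp);
	}
	return 0;
}
```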
Finally, load_balance is where the real work begins.
First, find_busiest_group is used to get the busiest scheduling group in the current domain. It starts with update_sd_lb_stats, which refreshes the domain's statistics: it walks the groups of the domain and fills in the sd_lb_stats structure, shown below.
```c
struct sd_lb_stats {
	struct sched_group *busiest;	/* Busiest group in this sd */
	struct sched_group *this;	/* Local group in this sd */
	unsigned long total_load;	/* Total load of all groups in sd */
	unsigned long total_pwr;	/* Total power of all groups in sd */
	unsigned long avg_load;		/* Average load across all groups in sd */

	/** Statistics of this group */
	unsigned long this_load;		/* load of the local group */
	unsigned long this_load_per_task;	/* average load per task in the local group */
	unsigned long this_nr_running;		/* number of runnable tasks in the local group */
	unsigned long this_has_capacity;
	unsigned int  this_idle_cpus;

	/* Statistics of the busiest group */
	unsigned int  busiest_idle_cpus;
	unsigned long max_load;			/* load of the busiest group */
	unsigned long busiest_load_per_task;	/* average load per task in the busiest group */
	unsigned long busiest_nr_running;	/* number of runnable tasks in the busiest group */
	unsigned long busiest_group_capacity;
	unsigned long busiest_has_capacity;
	unsigned int  busiest_group_weight;
	int group_imb;				/* Is there imbalance in this sd */
};
```
```c
do {
	local_group = cpumask_test_cpu(this_cpu, sched_group_cpus(sg));

	if (local_group) {
		/* this is the group containing the current CPU: record it as "this" */
		sds->this_load = sgs.avg_load;
		sds->this = sg;
		sds->this_nr_running = sgs.sum_nr_running;
		sds->this_load_per_task = sgs.sum_weighted_load;
		sds->this_has_capacity = sgs.group_has_capacity;
		sds->this_idle_cpus = sgs.idle_cpus;
	} else if (update_sd_pick_busiest(sd, sds, sg, &sgs, this_cpu)) {
		/* update_sd_pick_busiest() decides whether this group's stats exceed
		 * the busiest seen so far; if so, copy sgs into sds */
		sds->max_load = sgs.avg_load;
		sds->busiest = sg;
		sds->busiest_nr_running = sgs.sum_nr_running;
		sds->busiest_idle_cpus = sgs.idle_cpus;
		sds->busiest_group_capacity = sgs.group_capacity;
		sds->busiest_load_per_task = sgs.sum_weighted_load;
		sds->busiest_has_capacity = sgs.group_has_capacity;
		sds->busiest_group_weight = sgs.group_weight;
		sds->group_imb = sgs.group_imb;
	}

	sg = sg->next;
} while (sg != sd->groups);
```
The criterion for picking the busiest group in the scheduling domain is the sum of the loads of all CPUs in that group; the criterion for then finding the busiest run queue inside that group is the length, i.e. the load, of each CPU's run queue, where a larger load value means busier. During balancing, each group's load is compared against the previously recorded busiest and these fields are updated on the fly, so that busiest always points at the busiest group in the domain and is easy to look up.
Computing the scheduling domain's average load:
```c
sds.avg_load = (SCHED_POWER_SCALE * sds.total_load) / sds.total_pwr;

if (sds.this_load >= sds.avg_load)
	goto out_balanced;
```
While comparing loads, find_busiest_group returns NULL — meaning no balancing between groups is needed — in any of these cases: no busiest group was found (busiest is empty), the current CPU's group is itself the busiest, the current CPU's load is not lower than the group average, or the imbalance is too small to be worth acting on. When the busiest group's load is below the domain's average load, or when the amount of load to move is smaller than the average load of a single task, only a small-scale adjustment of the imbalance is made. Otherwise we now have the busiest scheduling group.
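To make the arithmetic concrete, here is a self-contained sketch of the avg_load formula and the out_balanced checks; the group loads and powers are invented for illustration, not taken from a real trace:

```c
/*
 * Standalone sketch of the find_busiest_group() decision with invented
 * numbers; SCHED_POWER_SCALE is 1024 as in the kernel, everything else
 * is illustrative.
 */
#include <stdio.h>

#define SCHED_POWER_SCALE 1024UL

int main(void)
{
	/* pretend the domain has two groups */
	unsigned long total_load = 3072;	/* sum of group loads        */
	unsigned long total_pwr  = 2048;	/* sum of group cpu_power    */
	unsigned long this_load  = 1024;	/* load of the local group   */
	unsigned long max_load   = 2048;	/* load of the busiest group */

	unsigned long avg_load = (SCHED_POWER_SCALE * total_load) / total_pwr;

	printf("domain avg_load = %lu\n", avg_load);

	if (this_load >= avg_load || max_load <= this_load) {
		printf("out_balanced: local group is not underloaded\n");
		return 0;
	}
	printf("imbalance candidate: busiest exceeds local by %lu\n",
	       max_load - this_load);
	return 0;
}
```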
Next, find_busiest_queue locates the busiest run queue: it walks every CPU queue in that group, compares the load of each queue in turn, and picks the busiest one.
```c
for_each_cpu(i, sched_group_cpus(group)) {
	/* rq->cpu_power represents the compute capacity of the CPU.  sched_init
	 * initializes it to SCHED_LOAD_SCALE (= the load weight of nice 0 = 1024),
	 * and update_cpu_power() (in kernel/sched_fair.c) updates it later. */
	unsigned long power = power_of(i);
	unsigned long capacity = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
	unsigned long wl;

	if (!cpumask_test_cpu(i, cpus))
		continue;

	rq = cpu_rq(i);
	/* queue load: cpu_rq(cpu)->load.weight */
	wl = weighted_cpuload(i);

	/*
	 * When comparing with imbalance, use weighted_cpuload()
	 * which is not scaled with the cpu power.
	 */
	if (capacity && rq->nr_running == 1 && wl > imbalance)
		continue;

	/*
	 * For the load comparisons with the other cpu's, consider
	 * the weighted_cpuload() scaled with the cpu power, so that
	 * the load can be moved away from the cpu that is potentially
	 * running at a lower capacity.
	 */
	wl = (wl * SCHED_POWER_SCALE) / power;

	if (wl > max_load) {
		max_load = wl;
		busiest = rq;
	}
}
```
With the computation above, we now have the busiest run queue.
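The selection above boils down to scaling each queue's weighted load by its cpu_power and keeping the maximum. A condensed, standalone sketch of that idea, with invented per-CPU numbers:

```c
/*
 * Condensed sketch of find_busiest_queue(): pick the runqueue whose
 * weighted load, scaled by cpu_power, is highest.  The loads and powers
 * below are invented for illustration.
 */
#include <stdio.h>

#define SCHED_POWER_SCALE 1024UL
#define NR_CPUS 4

int main(void)
{
	unsigned long load[NR_CPUS]  = { 2048, 1024, 3072,  512 };	/* weighted_cpuload() */
	unsigned long power[NR_CPUS] = { 1024, 1024,  512, 1024 };	/* rq->cpu_power      */
	unsigned long max_load = 0;
	int busiest = -1, i;

	for (i = 0; i < NR_CPUS; i++) {
		/* scale so that a CPU with less capacity looks "busier"
		 * for the same raw load, as the kernel comment explains */
		unsigned long wl = (load[i] * SCHED_POWER_SCALE) / power[i];

		if (wl > max_load) {
			max_load = wl;
			busiest = i;
		}
	}
	printf("busiest cpu: %d (scaled load %lu)\n", busiest, max_load);
	return 0;
}
```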
When busiest->nr_running is greater than 1, the pull is carried out: before move_tasks is called, both run queues are locked with double_rq_lock.
```c
double_rq_lock(this_rq, busiest);
ld_moved = move_tasks(this_rq, this_cpu, busiest,
		      imbalance, sd, idle, &all_pinned);
double_rq_unlock(this_rq, busiest);
```
A pull performed by move_tasks is allowed to fail; the path is move_tasks -> balance_tasks. Here the sysctl_sched_nr_migrate knob caps how many tasks may be migrated in one pass; it corresponds to /proc/sys/kernel/sched_nr_migrate.
Next, can_migrate_task checks whether the chosen task can actually be migrated. Migration fails for one of three reasons: 1. the task is currently running; 2. the task is pinned to specific CPUs and cannot be moved to the target CPU; 3. the task's cache is still hot, which again protects the cache hit rate.
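Reason 2 is the task's CPU affinity mask (p->cpus_allowed in the kernel). As a userspace analogy only, the same kind of check can be expressed with sched_getaffinity; the target CPU below is just a made-up example:

```c
/*
 * Userspace analogy for migration-failure reason 2: a task pinned away
 * from the target CPU cannot be pulled there.  The kernel checks
 * p->cpus_allowed; from userspace the equivalent mask is visible via
 * sched_getaffinity().  The target CPU here is hypothetical.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t mask;
	int target_cpu = 1;	/* hypothetical destination CPU */

	if (sched_getaffinity(getpid(), sizeof(mask), &mask) != 0) {
		perror("sched_getaffinity");
		return 1;
	}
	if (CPU_ISSET(target_cpu, &mask))
		printf("cpu%d is in the affinity mask: migration allowed\n", target_cpu);
	else
		printf("cpu%d not allowed: can_migrate_task() would fail\n", target_cpu);
	return 0;
}
```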
```c
/*
 * Even if the cache is still hot, migrate anyway once too many balance
 * attempts have failed:
 *
 * Aggressive migration if:
 * 1) task is cache cold, or
 * 2) too many balance attempts have failed.
 */
tsk_cache_hot = task_hot(p, rq->clock_task, sd);
if (!tsk_cache_hot ||
	sd->nr_balance_failed > sd->cache_nice_tries) {
#ifdef CONFIG_SCHEDSTATS
	if (tsk_cache_hot) {
		schedstat_inc(sd, lb_hot_gained[idle]);
		schedstat_inc(p, se.statistics.nr_forced_migrations);
	}
#endif
	return 1;
}
```
Whether a task's cache is still considered hot is decided by task_hot: the cache counts as hot if the time since the task last executed (now - se.exec_start) is smaller than the proc knob sysctl_sched_migration_cost, i.e. /proc/sys/kernel/sched_migration_cost (named sched_migration_cost_ns on newer kernels).
```c
static int
task_hot(struct task_struct *p, u64 now, struct sched_domain *sd)
{
	s64 delta;

	delta = now - p->se.exec_start;

	return delta < (s64)sysctl_sched_migration_cost;
}
```
Back in load_balance, when move_tasks returns failure, i.e. ld_moved == 0, sd->nr_balance_failed++ is incremented; this counter is what the "too many balance attempts have failed" case in can_migrate_task refers to. After enough failures, busiest->active_balance is set to 1 and active_balance = 1.
```c
if (active_balance)
	/* the pull failed, so trigger a push instead */
	stop_one_cpu_nowait(cpu_of(busiest),
			active_load_balance_cpu_stop, busiest,
			&busiest->active_balance_work);
```
The way the push is triggered is fairly convoluted. stop_one_cpu_nowait queues active_load_balance_cpu_stop onto the work list of the per-CPU cpu_stopper variable, as follows:
```c
void stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn, void *arg,
			struct cpu_stop_work *work_buf)
{
	*work_buf = (struct cpu_stop_work){ .fn = fn, .arg = arg, };
	cpu_stop_queue_work(&per_cpu(cpu_stopper, cpu), work_buf);
}
```
cpu_stopper is serviced by the migration kernel thread that cpu_stop_init creates via cpu_stop_cpu_callback, and queuing the work kicks that thread's work list. Because a migration kernel thread is bound to every core, the push can get around migration-failure reasons 1 and 3 above. active_load_balance_cpu_stop then calls move_one_task to migrate the designated task.
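To see the "one migration thread per core" point for yourself, here is a purely illustrative sketch that scans /proc/&lt;pid&gt;/comm (available since kernel 2.6.33) for names starting with migration/:

```c
/*
 * Sketch: confirm there is one "migration/N" kernel thread per CPU by
 * scanning /proc/<pid>/comm.  Purely illustrative; it only lists the
 * threads, it does not interact with them.
 */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	char path[280], comm[64];

	if (!proc) {
		perror("/proc");
		return 1;
	}
	while ((de = readdir(proc)) != NULL) {
		FILE *fp;

		if (!isdigit((unsigned char)de->d_name[0]))
			continue;
		snprintf(path, sizeof(path), "/proc/%s/comm", de->d_name);
		fp = fopen(path, "r");
		if (!fp)
			continue;
		if (fgets(comm, sizeof(comm), fp) &&
		    strncmp(comm, "migration/", 10) == 0)
			printf("pid %s: %s", de->d_name, comm);
		fclose(fp);
	}
	closedir(proc);
	return 0;
}
```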
That covers the whole pull and push flow. One addition: besides being triggered from schedule, the pull is also triggered from the timer tick — scheduler_tick raises the SCHED_SOFTIRQ softirq, whose handler run_rebalance_domains in turn calls rebalance_domains. I won't go into those details here.
```c
void __init sched_init(void)
{
	open_softirq(SCHED_SOFTIRQ, run_rebalance_domains);
}
```