Fix how CPU load counts are adjusted so that the total always adds up to 100%

This commit is contained in:
Gregory Nutt 2014-02-27 11:16:15 -06:00
parent 7138e18efe
commit cb0d49047a
2 changed files with 12 additions and 4 deletions


@@ -6673,4 +6673,7 @@
* arch/arm/src/sam34: The port to the SAM4E is code complete (2014-2-16).
* include/cxx: Fix some bad idempotence definitions in header files
(2014-2-27).
* sched/sched_cpuload.c: Change the calculation of the total count when
  the time constant-related delay elapses. The total count is now always
guaranteed to add up to 100% (excepting only truncation errors)
(2014-2-27).


@@ -121,16 +121,21 @@ void weak_function sched_process_cpuload(void)
   if (++g_cpuload_total > (CONFIG_SCHED_CPULOAD_TIMECONSTANT * CLOCKS_PER_SEC))
     {
-      /* Divide the tick count for every task by two */
+      uint32_t total = 0;
+
+      /* Divide the tick count for every task by two and recalculate the
+       * total.
+       */
 
       for (i = 0; i < CONFIG_MAX_TASKS; i++)
         {
           g_pidhash[i].ticks >>= 1;
+          total += g_pidhash[i].ticks;
         }
 
-      /* Divide the total tick count by two */
+      /* Save the new total. */
 
-      g_cpuload_total >>= 1;
+      g_cpuload_total = total;
     }
 }