Most applications use timers to do work every few seconds: maintenance tasks, network keep-alives, and similar housekeeping. In most cases, these timers don't need to be very exact.

I'm wondering why they are not synchronized at the OS level: all applications would do their maintenance work directly after one another, and the CPU could then drop into a lower-power state. As things stand, the CPU probably can't enter a low-power state at all, because applications fire their timers at arbitrary, unaligned times.

Does anyone know of such a facility in the major operating systems?


Viewing 1 Comment


    AFAIK you're right that there is no time to sleep because of too many interrupts.
    The problem is: how do you synchronize them? Many interrupts fire at arbitrary times, like "new data from the disk" or "new data on the network interface", while others are just maintenance work of some application. If you forced all of them into fixed time slots, CPU power consumption would indeed drop, but latency and throughput would suffer badly, since hardware events would have to wait for the next slot.

    I'm no expert at this; you might want to check what the blogosphere has said about Intel's "PowerTOP" tool for Linux. PowerTOP shows which applications and devices wake up the CPU, so they can be optimized.
