
High resolution timers and dynamic ticks design notes
-----------------------------------------------------

Further information can be found in the paper for the OLS 2006 talk "hrtimers
and beyond". The paper is part of the OLS 2006 Proceedings Volume 1, which can
be found on the OLS website:
http://www.linuxsymposium.org/2006/linuxsymposium_procv1.pdf

The slides for this talk are available from:
http://tglx.de/projects/hrtimers/ols2006-hrtimers.pdf

The slides contain five figures (pages 2, 15, 18, 20, 22), which illustrate the
changes in the time(r) related Linux subsystems. Figure #1 (p. 2) shows the
design of the Linux time(r) system before hrtimers and other building blocks
got merged into mainline.

Note: the paper and the slides talk about "clock event source", while we have
switched to the name "clock event devices" in the meantime.
The design contains the following basic building blocks:

- hrtimer base infrastructure
- timeofday and clock source management
- clock event management
- high resolution timer functionality
- dynamic ticks
hrtimer base infrastructure
---------------------------

The hrtimer base infrastructure was merged into the 2.6.16 kernel. Details of
the base implementation are covered in Documentation/timers/hrtimers.txt. See
also figure #2 (OLS slides p. 15).

The main differences to the timer wheel, which holds the armed timer_list type
timers, are (see the usage sketch below):

- time ordered enqueueing into a rb-tree
- independent of ticks (the processing is based on nanoseconds)
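
To make the difference to timer_list timers concrete, here is a minimal sketch
of the hrtimer usage model: a one-shot timer armed with a nanosecond based
ktime_t expiry rather than a jiffies count. It follows the mainline hrtimer
API (hrtimer_init()/hrtimer_start()); exact mode constants have varied across
kernel versions, so treat it as illustrative rather than a reference.

  #include <linux/hrtimer.h>
  #include <linux/ktime.h>
  #include <linux/module.h>

  static struct hrtimer sample_timer;

  /* Expiry callback: runs once, then tells the core not to re-arm. */
  static enum hrtimer_restart sample_timer_fn(struct hrtimer *timer)
  {
      pr_info("hrtimer expired\n");
      return HRTIMER_NORESTART;
  }

  static int __init sample_init(void)
  {
      /*
       * Expiry is specified in nanoseconds (here: 100 ms relative to
       * now); the core enqueues the timer time-ordered into the
       * per-CPU red-black tree, independent of HZ and jiffies.
       */
      hrtimer_init(&sample_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
      sample_timer.function = sample_timer_fn;
      hrtimer_start(&sample_timer, ktime_set(0, 100 * NSEC_PER_MSEC),
                    HRTIMER_MODE_REL);
      return 0;
  }

  static void __exit sample_exit(void)
  {
      hrtimer_cancel(&sample_timer);
  }

  module_init(sample_init);
  module_exit(sample_exit);
  MODULE_LICENSE("GPL");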
timeofday and clock source management
-------------------------------------

John Stultz's Generic Time Of Day (GTOD) framework moves a large portion of
code out of the architecture-specific areas into a generic management
framework, as illustrated in figure #3 (OLS slides p. 18). The architecture
specific portion is reduced to the low level hardware details of the clock
sources, which are registered in the framework and selected on a quality
based decision. The low level code provides hardware setup and readout
routines and initializes data structures, which are used by the generic time
keeping code to convert the clock ticks to nanosecond based time values. All
other time keeping related functionality is moved into the generic code. The
GTOD base patch got merged into the 2.6.18 kernel.
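
As a rough sketch of what the remaining architecture specific portion looks
like, the following registers a clock source for a hypothetical memory-mapped
free-running counter. The hw_counter_* names and the 24 MHz frequency are made
up; the structure fields follow the current mainline clocksource interface,
which differs in detail (e.g. the read() prototype) from the original 2.6.18
GTOD code.

  #include <linux/clocksource.h>
  #include <linux/io.h>
  #include <linux/init.h>

  /* Hypothetical memory-mapped 32-bit free-running counter. */
  static void __iomem *hw_counter_base;

  static u64 hw_counter_read(struct clocksource *cs)
  {
      return readl(hw_counter_base);
  }

  static struct clocksource hw_counter_clocksource = {
      .name   = "hw-counter",
      .rating = 300,                      /* quality based selection */
      .read   = hw_counter_read,
      .mask   = CLOCKSOURCE_MASK(32),
      .flags  = CLOCK_SOURCE_IS_CONTINUOUS,
  };

  static int __init hw_counter_clocksource_init(void)
  {
      /*
       * The generic time keeping code converts raw counter cycles to
       * nanoseconds as ns = (cycles * mult) >> shift; the *_hz variant
       * of the registration call computes mult/shift from the counter
       * frequency (assumed 24 MHz here).
       */
      return clocksource_register_hz(&hw_counter_clocksource, 24000000);
  }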
Further information about the Generic Time Of Day framework is available in
the OLS 2005 Proceedings Volume 1:
http://www.linuxsymposium.org/2005/linuxsymposium_procv1.pdf

The paper "We Are Not Getting Any Younger: A New Approach to Time and
Timers" was written by J. Stultz, D.V. Hart, & N. Aravamudan.

Figure #3 (OLS slides p. 18) illustrates the transformation.
clock event management
----------------------

While clock sources provide read access to the monotonically increasing time
value, clock event devices are used to schedule the next event interrupt(s).
The next event is currently defined to be periodic, with its period defined
at compile time. The setup and selection of the event device for various
event driven functionalities is hardwired into the architecture dependent
code. This results in duplicated code across all architectures and makes it
extremely difficult to change the configuration of the system to use event
interrupt devices other than those already built into the architecture.
Another implication of the current design is that it is necessary to touch
all the architecture-specific implementations in order to provide new
functionality like high resolution timers or dynamic ticks.

The clock events subsystem tries to address this problem by providing a
generic solution to manage clock event devices and their usage for the
various clock event driven kernel functionalities. The goal of the clock
event subsystem is to minimize the clock event related architecture dependent
code to the pure hardware related handling and to allow easy addition and
utilization of new clock event devices. It also minimizes the duplicated code
across the architectures as it provides generic functionality down to the
interrupt service handler, which is almost inherently hardware dependent.
Clock event devices are registered either by the architecture dependent boot
code or at module insertion time. Each clock event device fills a data
structure with clock-specific property parameters and callback functions. The
clock event management decides, by using the specified property parameters,
the set of system functions a clock event device will be used to support.
This includes the distinction of per-CPU and per-system global event devices.
System-level global event devices are used for the Linux periodic tick.
Per-CPU event devices are used to provide local CPU functionality such as
process accounting, profiling, and high resolution timers.

The management layer assigns one or more of the following functions to a
clock event device:

- system global periodic tick (jiffies update)
- cpu local update_process_times
- cpu local profiling
- cpu local next event interrupt (non periodic mode)
The clock event device delegates the selection of those timer interrupt
related functions completely to the management layer. The clock management
layer stores a function pointer in the device description structure, which
has to be called from the hardware level handler. This removes a lot of
duplicated code from the architecture specific timer interrupt handlers and
hands control over the clock event devices and the assignment of timer
interrupt related functionality to the core code.
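
The sketch below shows what such a registration and hardware level handler
might look like for a hypothetical per-CPU timer. The hw_timer_* helpers are
invented stand-ins for MMIO accesses, and the structure and callback names
follow today's clockevents interface, which has evolved since the original
submission (e.g. set_state_* callbacks replaced set_mode). The point is only
the shape: the driver provides set_next_event() and blindly calls whatever
event_handler the management layer installed.

  #include <linux/clockchips.h>
  #include <linux/interrupt.h>
  #include <linux/smp.h>
  #include <linux/init.h>

  /* Hypothetical MMIO helpers for an imaginary timer block. */
  static void hw_timer_write_compare(unsigned long delta) { /* MMIO write */ }
  static void hw_timer_ack_irq(void) { /* MMIO write */ }

  /* Program the compare register 'delta' clock cycles ahead. */
  static int hw_timer_set_next_event(unsigned long delta,
                                     struct clock_event_device *evt)
  {
      hw_timer_write_compare(delta);
      return 0;
  }

  static struct clock_event_device hw_timer_clockevent = {
      .name           = "hw-timer",
      .features       = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_PERIODIC,
      .rating         = 300,
      .set_next_event = hw_timer_set_next_event,
  };

  /*
   * The hardware interrupt handler neither knows nor cares whether it
   * currently drives the periodic tick, profiling or high resolution
   * timers: it simply calls the handler the management layer installed.
   */
  static irqreturn_t hw_timer_interrupt(int irq, void *dev_id)
  {
      struct clock_event_device *evt = dev_id;

      hw_timer_ack_irq();
      evt->event_handler(evt);
      return IRQ_HANDLED;
  }

  static void __init hw_timer_clockevent_init(void)
  {
      hw_timer_clockevent.cpumask = cpumask_of(smp_processor_id());
      clockevents_config_and_register(&hw_timer_clockevent,
                                      24000000, 1, 0xffffffff);
  }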
The clock event layer API is rather small. Aside from the clock event device
registration interface it provides functions to schedule the next event
interrupt, a clock event device notification service and support for suspend
and resume.

The framework adds about 700 lines of code which results in a 2KB increase of
the kernel binary size. The conversion of i386 removes about 100 lines of
code. The binary size decrease is in the range of 400 bytes. We believe that
the increase of flexibility and the avoidance of duplicated code across
architectures justifies the slight increase of the binary size.
The conversion of an architecture has no functional impact, but allows the
high resolution and dynamic tick functionalities to be utilized without any
change to the clock event device and timer interrupt code. After the
conversion, enabling high resolution timers and dynamic ticks is simply a
matter of adding the kernel/time/Kconfig file to the architecture specific
Kconfig and adding the dynamic tick specific calls to the idle routine (a
total of 3 lines added to the idle function and the Kconfig file).

Figure #4 (OLS slides p. 20) illustrates the transformation.
high resolution timer functionality
-----------------------------------

During system boot it is not possible to use the high resolution timer
functionality, while making it possible would be difficult and would serve no
useful function. The initialization of the clock event device framework, the
clock source framework (GTOD) and hrtimers itself has to be done and
appropriate clock sources and clock event devices have to be registered
before the high resolution functionality can work. Up to the point where
hrtimers are initialized, the system works in the usual low resolution
periodic mode. The clock source and the clock event device layers provide
notification functions which inform hrtimers about availability of new
hardware. hrtimers validates the usability of the registered clock sources
and clock event devices before switching to high resolution mode. This also
ensures that a kernel which is configured for high resolution timers can run
on a system which lacks the necessary hardware support.
The high resolution timer code does not support SMP machines which have only
global clock event devices. The support of such hardware would involve IPI
calls when an interrupt happens. The overhead would be much larger than the
benefit. This is the reason why we currently disable high resolution and
dynamic ticks on i386 SMP systems which stop the local APIC in C3 power
state. A workaround is available as an idea, but the problem has not been
tackled yet.
The time ordered insertion of timers provides all the infrastructure to
decide whether the event device has to be reprogrammed when a timer is
added. The decision is made per timer base and synchronized across per-CPU
timer bases in a support function. The design allows the system to utilize
separate per-CPU clock event devices for the per-CPU timer bases, but
currently only one reprogrammable clock event device per CPU is utilized.
When the timer interrupt happens, the next event interrupt handler is called
from the clock event distribution code and moves expired timers from the
red-black tree to a separate doubly linked list and invokes the softirq
handler. An additional mode field in the hrtimer structure allows the system
to execute callback functions directly from the next event interrupt handler.
This is restricted to code which can safely be executed in the hard interrupt
context. This applies, for example, to the common case of a wakeup function
as used by nanosleep. The advantage of executing the handler in the interrupt
context is the avoidance of up to two context switches - from the interrupted
context to the softirq and to the task which is woken up by the expired
timer.
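
A callback of the kind referred to here, a plain wakeup like the one nanosleep
uses, looks roughly like the sketch below. The sample_sleeper structure and
names are illustrative (the kernel has its own hrtimer_sleeper for this); what
matters is that the callback only wakes a task and can therefore safely run
straight from the hard interrupt context.

  #include <linux/hrtimer.h>
  #include <linux/sched.h>
  #include <linux/kernel.h>

  /* Illustrative sleeper: pairs a timer with the task to wake. */
  struct sample_sleeper {
      struct hrtimer      timer;
      struct task_struct  *task;
  };

  /*
   * Safe in hard interrupt context: no sleeping locks, no unbounded
   * work, just a wakeup of the task that armed the timer.
   */
  static enum hrtimer_restart sample_wakeup(struct hrtimer *timer)
  {
      struct sample_sleeper *s =
          container_of(timer, struct sample_sleeper, timer);

      if (s->task)
          wake_up_process(s->task);
      return HRTIMER_NORESTART;
  }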
Once a system has switched to high resolution mode, the periodic tick is
switched off. This disables the per system global periodic clock event
device - e.g. the PIT on i386 SMP systems.
The periodic tick functionality is provided by a per-cpu hrtimer. The
callback function is executed in the next event interrupt context; it updates
jiffies and calls update_process_times and the profiling code. The
implementation of the hrtimer based periodic tick is designed to be extended
with dynamic tick functionality. This makes it possible to use a single clock
event device to schedule high resolution timer and periodic events (jiffies
tick, profiling, process accounting) on UP systems. This has been proven to
work with the PIT on i386 and the Incrementer on PPC.
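
The rough shape of such a tick emulation callback is sketched below. This is
a simplified illustration, not the kernel's actual implementation: the fixed
250 Hz period, the single timer instance and the unconditional jiffies update
(which the real code restricts to one designated CPU) are all assumptions.

  #include <linux/hrtimer.h>
  #include <linux/sched.h>
  #include <linux/profile.h>
  #include <linux/jiffies.h>
  #include <asm/irq_regs.h>

  /* Assumed tick period for a HZ=250 build. */
  #define TICK_PERIOD_NS  (NSEC_PER_SEC / 250)

  static struct hrtimer tick_emulation_timer;

  /*
   * The callback performs the duties of the old timer interrupt
   * (jiffies update, process accounting, profiling) and then re-arms
   * itself one period ahead.
   */
  static enum hrtimer_restart tick_emulation(struct hrtimer *timer)
  {
      do_timer(1);
      update_process_times(user_mode(get_irq_regs()));
      profile_tick(CPU_PROFILING);

      hrtimer_forward_now(timer, ns_to_ktime(TICK_PERIOD_NS));
      return HRTIMER_RESTART;
  }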
The softirq for running the hrtimer queues and executing the callbacks has
been separated from the tick bound timer softirq to allow accurate delivery
of high resolution timer signals which are used by itimer and POSIX interval
timers. The execution of this softirq can still be delayed by other softirqs,
but the overall latencies have been significantly improved by this
separation.

Figure #5 (OLS slides p. 22) illustrates the transformation.
dynamic ticks
-------------

Dynamic ticks are the logical consequence of the hrtimer based periodic tick
replacement (sched_tick). The functionality of the sched_tick hrtimer is
extended by three functions:

- hrtimer_stop_sched_tick
- hrtimer_restart_sched_tick
- hrtimer_update_jiffies
hrtimer_stop_sched_tick() is called when a CPU goes into idle state. The code
evaluates the next scheduled timer event (from both hrtimers and the timer
wheel) and, in case the next event is further away than the next tick, it
reprograms the sched_tick to this future event, to allow longer idle sleeps
without needless interruption by the periodic tick. The function is also
called when an interrupt which does not cause a reschedule happens during the
idle period. The call is necessary as the interrupt handler might have armed
a new timer whose expiry time is before the time which was identified as the
nearest event in the previous call to hrtimer_stop_sched_tick.
hrtimer_restart_sched_tick() is called when the CPU leaves the idle state
before it calls schedule(). hrtimer_restart_sched_tick() resumes the periodic
tick, which is kept active until the next call to hrtimer_stop_sched_tick().
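
In an architecture's idle loop, the use of these two functions amounts to the
handful of lines mentioned in the clock event management section above. Here
is a sketch, using the function names from this document (the names in later
mainline kernels differ, e.g. the tick_nohz_* variants) and a made-up
arch_idle_sleep() standing in for the architecture's low level idle code:

  #include <linux/sched.h>

  /* Provided by the dynamic tick code; names as used in this document. */
  extern void hrtimer_stop_sched_tick(void);
  extern void hrtimer_restart_sched_tick(void);

  /* Hypothetical low level idle: halt/wfi/etc. on a real architecture. */
  static inline void arch_idle_sleep(void) { }

  /* Sketch of an architecture idle loop with dynamic tick hooks. */
  static void cpu_idle(void)
  {
      for (;;) {
          /* May reprogram the next event far beyond the next tick. */
          hrtimer_stop_sched_tick();

          while (!need_resched())
              arch_idle_sleep();

          /* The periodic tick resumes before the scheduler runs. */
          hrtimer_restart_sched_tick();
          schedule();
      }
  }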
hrtimer_update_jiffies() is called from irq_enter() when an interrupt happens
in the idle period to make sure that jiffies is up to date and the interrupt
handler does not have to deal with a possibly stale jiffies value.
The dynamic tick feature provides statistical values which are exported to
userspace via /proc/stats and can be made available for enhanced power
management control.

The implementation leaves room for further development like full tickless
systems, where the time slice is controlled by the scheduler, variable
frequency profiling, and a complete removal of jiffies in the future.
Aside from the current initial submission of i386 support, the patchset has
already been extended to x86_64 and ARM. Initial (work in progress) support
is also available for MIPS and PowerPC.

	Thomas, Ingo